Infrared Technology, Vol. 44, Issue 11, p. 1210 (2022)

Double-Branch DenseNet-Transformer Hyperspectral Image Classification

Xinwei LI and Tian YANG

    To reduce the number of training samples required for hyperspectral image classification and obtain better classification results, a double-branch deep network model based on DenseNet and a spatial-spectral transformer was proposed in this study. The model comprises two parallel branches that extract the spatial and spectral features of the images. First, the spatial and spectral information of the sub-images is initially extracted using 3D convolution in each branch. Then, deep features are extracted through a DenseNet composed of batch normalization, the Mish activation function, and 3D convolution. Next, the two branches employ a spectral transformer module and a spatial transformer module, respectively, to further enhance the feature-extraction ability of the network. Finally, the output feature maps of the two branches are fused to obtain the final classification result. The model was tested on the Indian Pines, University of Pavia, Salinas Valley, and Kennedy Space Center datasets, and its performance was compared with that of six current methods. With a training-set proportion of 3% for Indian Pines and 0.5% for the other datasets, the overall classification accuracies of the model are 95.75%, 96.75%, 95.63%, and 98.01%, respectively, outperforming the other methods overall.
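    Two building blocks mentioned in the abstract can be sketched compactly: the Mish activation used inside the dense blocks, and the dense connectivity pattern in which each layer's output is concatenated with all earlier feature maps, so the channel count grows linearly with depth. The sketch below is a minimal NumPy illustration of these two ideas only, not the authors' implementation; the function names and the example channel counts (24 input channels, growth rate 12) are illustrative assumptions.

```python
import numpy as np

def mish(x):
    # Mish activation: x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x).
    # log1p(exp(x)) is a numerically safer form of softplus for moderate x.
    return x * np.tanh(np.log1p(np.exp(x)))

def dense_block_channels(in_channels, growth_rate, num_layers):
    # In a DenseNet block, each of the num_layers layers emits growth_rate
    # feature maps and concatenates them with every preceding map, so the
    # block's output has in_channels + growth_rate * num_layers channels.
    return in_channels + growth_rate * num_layers

x = np.array([-1.0, 0.0, 1.0])
print(mish(x))                            # smooth, non-monotonic near zero; mish(0) = 0
print(dense_block_channels(24, 12, 3))    # 24 + 12*3 = 60 channels after a 3-layer block
```

The same concatenation idea reappears at the end of the model, where the spatial-branch and spectral-branch feature maps are fused (e.g., concatenated) before the final classifier.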

    LI Xinwei, YANG Tian. Double-Branch DenseNet-Transformer Hyperspectral Image Classification[J]. Infrared Technology, 2022, 44(11): 1210

    Paper Information

    Received: Aug. 1, 2022

    Published Online: Feb. 4, 2023
