Journal of Infrared and Millimeter Waves, Volume 42, Issue 6, 824 (2023)

Research on hyperspectral image classification method based on deep learning

Bin ZHANG1, Liang LIU2, Xiao-Jie LI1, and Wei ZHOU1,*
Author Affiliations
  • 1Aviation Operations and Service Institute, Naval Aviation University, Yantai 264000, China
  • 2Coastal Defense College, Naval Aviation University, Yantai 264000, China
    Figures & Tables (9)
    The hyperspectral image classification network of SST
    The specific structure of the Spectral-Spatial Attention Module
    The SST hyperspectral image classification network architecture adopted for the experiments
    Feature classification results for the IP dataset: (a) pseudo-colour composite image, (b) ground-truth map, (c) DPyResNet method, (d) SSRN method, (e) ContextNet method, (f) method proposed in this paper, (g) classification legend
    Feature classification results for the UP dataset: (a) pseudo-colour composite image, (b) ground-truth map, (c) DPyResNet method, (d) SSRN method, (e) ContextNet method, (f) method proposed in this paper, (g) classification legend
    • Table 1. SST module network parameters

      Layer | Layer type
      WQ    | Conv3d(i=1, o=24, k=(1×1×7), s=(1,1,2), p=(1,1,0))
      WK    | Conv3d(i=1, o=24, k=(3×3×7), s=(1,1,2), p=(1,1,0))
      WV    | Conv3d(i=1, o=24, k=(3×3×7), s=(1,1,2), p=(1,1,0))
      FFN   | Conv3d(i=24, o=24, k=(1×1×7), s=1, p=(0,0,3))
            | BatchNorm3d(24)
            | ReLU
            | Conv3d(i=24, o=24, k=(1×1×7), s=1, p=(0,0,3))
            | BatchNorm3d(24)
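The kernel (k), stride (s), and padding (p) triples in Table 1 determine how each Conv3d reshapes the input cube, axis by axis, via the standard formula ⌊(n + 2p − k)/s⌋ + 1. As a quick sanity check (a sketch, not the authors' code; the 9×9×200 input patch is a hypothetical example):

```python
import math

def conv3d_out_shape(in_shape, k, s, p):
    """Per-axis convolution output size: floor((n + 2p - k) / s) + 1."""
    return tuple(math.floor((n + 2 * pi - ki) / si) + 1
                 for n, ki, si, pi in zip(in_shape, k, s, p))

# Hypothetical input patch: 9x9 spatial window, 200 spectral bands.
patch = (9, 9, 200)

# WQ: k=(1,1,7), s=(1,1,2), p=(1,1,0); with k=1 and p=1 the spatial size
# grows to 11, while the stride-2, size-7 kernel compresses the spectral axis.
print(conv3d_out_shape(patch, (1, 1, 7), (1, 1, 2), (1, 1, 0)))       # (11, 11, 97)
# WK/WV: k=(3,3,7), same stride and padding -> spatial size preserved.
print(conv3d_out_shape(patch, (3, 3, 7), (1, 1, 2), (1, 1, 0)))       # (9, 9, 97)
# FFN convs: k=(1,1,7), s=1, p=(0,0,3) preserve all three axes.
print(conv3d_out_shape((9, 9, 97), (1, 1, 7), (1, 1, 1), (0, 0, 3)))  # (9, 9, 97)
```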

    • Table 2. Pooled residual network parameters

      Model type | Module type | Layer type
      PoolRes1   | Pool        | Maxpool
                 |             | Avgpool
                 |             | Conv2d(i=1, o=1, k=(3×3), s=1, p=(1,1))
                 |             | Conv2d(i=1, o=1, k=(3×3), s=1, p=(1,1))
                 |             | Sigmoid
                 | ResBlock1   | Conv3d(i=24, o=24, k=(1×1×7), s=1, p=(0,0,3))
                 |             | BatchNorm3d(24)
                 |             | ReLU
                 |             | Conv3d(i=24, o=24, k=(1×1×7), s=1, p=(0,0,3))
                 |             | BatchNorm3d(24)
      PoolRes2   | Pool        | Maxpool
                 |             | Avgpool
                 |             | Conv2d(i=1, o=1, k=(3×3), s=1, p=(1,1))
                 |             | Conv2d(i=1, o=1, k=(3×3), s=1, p=(1,1))
                 |             | Sigmoid
                 | ResBlock2   | Conv3d(i=24, o=24, k=(3×3×1), s=1, p=(1,1,0))
                 |             | BatchNorm3d(24)
                 |             | ReLU
                 |             | Conv3d(i=24, o=24, k=(3×3×1), s=1, p=(1,1,0))
                 |             | BatchNorm3d(24)
      PoolRes3   | Pool        | Maxpool
                 |             | Avgpool
                 |             | Conv2d(i=1, o=1, k=(3×3), s=1, p=(1,1))
                 |             | Conv2d(i=1, o=1, k=(3×3), s=1, p=(1,1))
                 |             | Sigmoid
                 | ResBlock3   | Conv3d(i=24, o=24, k=(3×3×1), s=1, p=(1,1,0))
                 |             | BatchNorm3d(24)
                 |             | ReLU
                 |             | Conv3d(i=24, o=24, k=(3×3×1), s=1, p=(1,1,0))
                 |             | BatchNorm3d(24)
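Read as a CBAM-style spatial-attention gate (an interpretation of Table 2, not the authors' code), the Pool stack compresses the channel axis with max- and average-pooling, passes each single-channel map through its own 3×3 convolution, and squashes the result through a sigmoid to obtain a per-pixel gate for the features entering the following ResBlock. A minimal dependency-free sketch, where combining the two conv outputs by addition and the (untrained) kernel weights are assumptions:

```python
import math

def channel_max_avg(x):
    """x: C x H x W nested lists -> (max map, avg map), each H x W."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    mx = [[max(x[c][i][j] for c in range(C)) for j in range(W)] for i in range(H)]
    av = [[sum(x[c][i][j] for c in range(C)) / C for j in range(W)] for i in range(H)]
    return mx, av

def conv2d_3x3(m, w):
    """Single-channel 3x3 convolution with zero padding 1; w is a 3x3 kernel."""
    H, W = len(m), len(m[0])
    def px(i, j):
        return m[i][j] if 0 <= i < H and 0 <= j < W else 0.0
    return [[sum(w[a][b] * px(i + a - 1, j + b - 1)
                 for a in range(3) for b in range(3))
             for j in range(W)] for i in range(H)]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def pool_attention(x, w_max, w_avg):
    """Pool module: gate = sigmoid(conv(maxpool_c(x)) + conv(avgpool_c(x)))."""
    mx, av = channel_max_avg(x)
    cm, ca = conv2d_3x3(mx, w_max), conv2d_3x3(av, w_avg)
    H, W = len(cm), len(cm[0])
    gate = [[sigmoid(cm[i][j] + ca[i][j]) for j in range(W)] for i in range(H)]
    # Broadcast the H x W gate over every channel of the input.
    return [[[x[c][i][j] * gate[i][j] for j in range(W)] for i in range(H)]
            for c in range(len(x))]
```

Because the gate lies in (0, 1), the module can only attenuate features; the residual path of the adjacent ResBlock then preserves the ungated signal.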

    • Table 3. Comparison of classification performance metrics OA (%), AA (%), and Kappa coefficient (κ) for different classification methods on the IP dataset (the source groups the methods into classical models and deep neural networks)

      Class | Training | Test | MLR | RF | SVM | LSTM | ResNet | ContextNet | Ms-3DNet | ENL-FCN | DPyResNet | SSRN | SST
      1 | 4 | 42 | 15.45±0.023 | 28.46±0.061 | 51.22±0.190 | 69.11±0.090 | 98.66±0.018 | 88.78±0.080 | 66.67±0.471 | 97.56±0.000 | 94.69±0.076 | 57.78±0.423 | 99.07±0.013
      2 | 142 | 1286 | 73.77±0.006 | 56.63±0.024 | 81.22±0.037 | 74.22±0.016 | 87.85±0.020 | 98.19±0.005 | 75.94±0.080 | 93.15±0.000 | 93.83±0.040 | 98.37±0.012 | 98.73±0.006
      3 | 83 | 747 | 51.14±0.027 | 48.42±0.013 | 65.82±0.013 | 71.49±0.030 | 92.71±0.007 | 95.37±0.028 | 81.39±0.007 | 97.59±0.000 | 89.30±0.003 | 97.47±0.010 | 98.83±0.006
      4 | 23 | 214 | 43.97±0.051 | 33.49±0.025 | 57.75±0.041 | 60.72±0.041 | 95.43±0.046 | 97.04±0.021 | 88.63±0.063 | 91.55±0.000 | 93.51±0.055 | 99.12±0.0099 | 98.63±0.020
      5 | 48 | 435 | 83.52±0.034 | 85.21±0.025 | 90.04±0.014 | 87.51±0.015 | 98.23±0.015 | 97.78±0.015 | 95.61±0.054 | 97.47±0.000 | 99.26±0.004 | 97.79±0.013 | 99.03±0.006
      6 | 73 | 657 | 94.82±0.009 | 92.64±0.027 | 96.25±0.006 | 94.77±0.015 | 97.98±0.011 | 98.60±0.008 | 96.78±0.026 | 99.24±0.000 | 98.52±0.007 | 98.50±0.010 | 98.55±0.009
      7 | 2 | 26 | 41.33±0.186 | 2.67±0.038 | 73.33±0.019 | 85.33±0.094 | 92.98±0.099 | 90.35±0.098 | 100.00±0.000 | 100.00±0.000 | 83.08±0.178 | 66.67±0.471 | 90.70±0.193
      8 | 47 | 431 | 98.53±0.006 | 97.67±0.015 | 97.98±0.006 | 97.83±0.009 | 95.06±0.014 | 97.76±0.026 | 89.51±0.091 | 97.44±0.000 | 97.63±0.022 | 96.45±0.029 | 100.00±0.000
      9 | 2 | 18 | 5.56±0.045 | 9.26±0.094 | 50.00±0.045 | 53.70±0.139 | 60.83±0.283 | 86.90±0.102 | 66.67±0.471 | 72.22±0.000 | 66.66±0.471 | 56.25±0.418 | 78.34±0.091
      10 | 97 | 875 | 65.41±0.041 | 60.91±0.047 | 73.87±0.018 | 73.68±0.025 | 96.05±0.013 | 96.08±0.018 | 87.41±0.070 | 94.74±0.000 | 93.77±0.029 | 98.33±0.009 | 97.49±0.002
      11 | 245 | 2210 | 80.37±0.015 | 87.88±0.019 | 82.90±0.012 | 84.93±0.024 | 93.32±0.041 | 97.35±0.004 | 76.69±0.096 | 95.61±0.000 | 89.78±0.040 | 99.08±0.005 | 99.26±0.002
      12 | 59 | 534 | 55.68±0.007 | 41.26±0.030 | 74.91±0.043 | 73.68±0.025 | 86.65±0.077 | 94.00±0.012 | 88.65±0.036 | 97.00±0.000 | 83.43±0.107 | 98.46±0.009 | 98.03±0.002
      13 | 20 | 185 | 97.66±0.005 | 90.09±0.040 | 96.94±0.021 | 98.74±0.005 | 82.16±0.076 | 95.01±0.030 | 99.78±0.003 | 97.83±0.000 | 98.19±0.021 | 100.00±0.000 | 99.59±0.005
      14 | 126 | 1139 | 95.61±0.004 | 95.46±0.014 | 93.82±0.010 | 96.22±0.004 | 95.39±0.016 | 98.49±0.014 | 90.06±0.087 | 99.12±0.000 | 96.00±0.021 | 98.63±0.010 | 99.38±0.005
      15 | 38 | 348 | 56.00±0.045 | 41.11±0.029 | 60.42±0.044 | 60.04±0.029 | 90.96±0.127 | 94.10±0.031 | 88.21±0.044 | 92.80±0.000 | 91.22±0.040 | 99.24±0.005 | 98.08±0.009
      16 | 9 | 84 | 84.92±0.020 | 79.37±0.030 | 91.27±0.054 | 90.87±0.022 | 94.73±0.038 | 93.57±0.046 | 98.53±0.021 | 100.00±0.000 | 70.90±0.388 | 95.63±0.062 | 96.94±0.023
      OA | 1018 | 9231 | 76.23±0.008 | 72.98±0.006 | 82.00±0.006 | 82.13±0.004 | 92.44±0.006 | 96.98±0.006 | 83.44±0.060 | 96.15±0.054 | 91.47±0.029 | 98.38±0.004 | 98.67±0.001
      AA | | | 65.23±0.019 | 59.41±0.005 | 77.36±0.019 | 79.53±0.005 | 91.19±0.025 | 94.96±0.003 | 86.91±0.084 | 95.21±0.028 | 94.14±0.006 | 91.11±0.080 | 96.63±0.014
      κ | | | 0.7266±0.010 | 0.6862±0.007 | 0.7941±0.007 | 0.7954±0.004 | 0.9137±0.006 | 0.9655±0.007 | 0.8082±0.070 | 0.9560±0.030 | 0.9020±0.034 | 0.9815±0.005 | 0.9849±0.002
    • Table 4. Comparison of classification performance metrics OA (%), AA (%), and Kappa coefficient (κ) for different classification methods on the UP dataset (the source groups the methods into classical models and deep neural networks)

      Class | Training | Test | MLR | RF | SVM | LSTM | ResNet | ContextNet | Ms-3DNet | ENL-FCN | DPyResNet | SSRN | SST
      1 | 663 | 5968 | 92.30±0.004 | 91.11±0.007 | 94.30±0.008 | 95.47±0.005 | 96.82±0.023 | 99.56±0.002 | 99.36±0.001 | 99.98±0.000 | 98.35±0.017 | 99.85±0.001 | 99.91±0.001
      2 | 1864 | 16785 | 96.18±0.003 | 98.11±0.003 | 97.65±0.002 | 96.91±0.002 | 98.59±0.008 | 99.85±0.002 | 99.80±0.000 | 100.00±0.000 | 98.76±0.008 | 99.98±0.000 | 99.98±0.001
      3 | 209 | 1890 | 72.75±0.013 | 67.71±0.014 | 81.26±0.018 | 78.01±0.011 | 90.01±0.061 | 99.19±0.001 | 98.02±0.017 | 99.68±0.000 | 94.22±0.034 | 99.68±0.003 | 99.75±0.002
      4 | 306 | 2758 | 89.28±0.002 | 88.20±0.006 | 94.63±0.004 | 94.92±0.007 | 99.32±0.003 | 99.80±0.002 | 99.71±0.001 | 98.94±0.000 | 99.20±0.005 | 99.92±0.001 | 99.97±0.004
      5 | 134 | 1211 | 99.42±0.003 | 98.93±0.002 | 99.20±0.002 | 99.26±0.003 | 99.81±0.000 | 99.91±0.001 | 99.94±0.000 | 100.00±0.000 | 99.72±0.003 | 99.94±0.000 | 99.90±0.003
      6 | 502 | 4527 | 77.45±0.005 | 72.14±0.022 | 90.58±0.008 | 87.85±0.012 | 99.41±0.002 | 99.75±0.003 | 99.43±0.003 | 99.87±0.000 | 98.52±0.006 | 99.95±0.001 | 99.95±0.005
      7 | 133 | 1197 | 55.69±0.043 | 75.68±0.017 | 85.71±0.011 | 80.23±0.007 | 96.90±0.017 | 98.37±0.022 | 99.18±0.005 | 100.00±0.000 | 97.37±0.004 | 100.00±0.000 | 100.00±0.000
      8 | 368 | 3314 | 87.04±0.004 | 89.64±0.013 | 88.20±0.003 | 88.49±0.008 | 92.00±0.044 | 98.48±0.008 | 97.13±0.005 | 99.69±0.000 | 97.37±0.004 | 98.28±0.015 | 99.02±0.005
      9 | 94 | 853 | 99.77±0.001 | 99.77±0.002 | 99.84±0.001 | 99.88±0.001 | 98.88±0.012 | 99.26±0.005 | 99.74±0.002 | 100.00±0.000 | 84.51±0.071 | 99.39±0.003 | 99.94±0.006
      OA | 4273 | 38503 | 89.87±0.001 | 90.41±0.001 | 94.19±0.002 | 93.45±0.001 | 97.38±0.007 | 99.57±0.001 | 99.35±0.001 | 99.76±0.002 | 99.60±0.001 | 99.77±0.001 | 99.87±0.004
      AA | | | 85.54±0.004 | 86.81±0.002 | 92.38±0.003 | 91.23±0.001 | 96.86±0.005 | 99.35±0.002 | 99.15±0.002 | 99.70±0.002 | 97.05±0.010 | 99.66±0.010 | 99.83±0.005
      κ | | | 0.8646±0.001 | 0.8710±0.002 | 0.9229±0.002 | 0.9130±0.001 | 0.9652±0.009 | 0.9943±0.001 | 0.9913±0.002 | 0.9972±0.001 | 0.9669±0.006 | 0.9969±0.010 | 0.9983±0.006
    Citation: Bin ZHANG, Liang LIU, Xiao-Jie LI, Wei ZHOU. Research on hyperspectral image classification method based on deep learning[J]. Journal of Infrared and Millimeter Waves, 2023, 42(6): 824.

    Paper Information

    Category: Research Articles

    Received: Jan. 6, 2023

    Accepted: --

    Published Online: Dec. 26, 2023

    The Author Email: Wei ZHOU (yeaweam@163.com)

    DOI: 10.11972/j.issn.1001-9014.2023.06.016
