Chinese Journal of Liquid Crystals and Displays, Volume 40, Issue 4, 630 (2025)

Neural architecture search combined with efficient attention for hyperspectral image classification

Haisong CHEN1, Kang ZHANG2, Haoran LÜ2, Aili WANG2,*, and Haibin WU2
Author Affiliations
  • 1School of Integrated Circuit, Shenzhen Polytechnic University, Shenzhen 518055, China
  • 2Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin 150080, China
    Figures & Tables (20)
    Network structure of EA-NAS
    Operators of attention
    Modular convolution operators
    Search process
    Classification results of KSC dataset
    Classification results of PU dataset
    Optimal structure of cell on PU dataset
    Optimal structure of cell on KSC dataset
    Loss curves
    Accuracy curves
    t-SNE feature map of the non-modular convolution operator
    t-SNE feature map of the modular convolution operator
    • Table 1. Details of KSC dataset

      No.   | Class     | Samples
      1     | Scrub     | 761
      2     | Willow    | 243
      3     | Palm      | 256
      4     | Pine      | 252
      5     | Broadleaf | 161
      6     | Hardwood  | 229
      7     | Swamp     | 105
      8     | Graminoid | 431
      9     | Spartina  | 520
      10    | Cattail   | 404
      11    | Salt      | 419
      12    | Mud       | 503
      13    | Water     | 927
      Total |           | 5 211
    • Table 2. Details of PU dataset

      No.   | Class                | Samples
      1     | Asphalt              | 6 631
      2     | Meadows              | 18 649
      3     | Gravel               | 2 099
      4     | Trees                | 3 064
      5     | Painted metal sheets | 1 345
      6     | Bare Soil            | 5 029
      7     | Bitumen              | 1 330
      8     | Self-Blocking Bricks | 3 682
      9     | Shadows              | 947
      Total |                      | 42 776
    • Table 3. Parameter settings of each operator in the search space

      No. | Operator name    | Parameter settings
      1   | CBAM             | Attention operator
      2   | FusedMB_conv_3_3 | Conv(3×3)-SE-Conv(1×1)
      3   | FusedMB_conv_3_5 | Conv(3×5)-SE-Conv(1×1)
      4   | FusedMB_conv_3_7 | Conv(3×7)-SE-Conv(1×1)
      5   | MB_conv_3_3      | Conv(1×1)-DSepConv(3×3)-SE-Conv(1×1)
      6   | MB_conv_3_5      | Conv(1×1)-DSepConv(3×5)-SE-Conv(1×1)
      7   | MB_conv_3_7      | Conv(1×1)-DSepConv(3×7)-SE-Conv(1×1)

      (A hedged code sketch of these operators follows this table.)
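      The FusedMB/MB compositions in Table 3 mirror the FusedMBConv and MBConv blocks of EfficientNetV2, with an SE stage inserted before the 1×1 projection. Below is a minimal PyTorch sketch of the operator families; the channel widths, expansion ratio, BatchNorm/ReLU placement, and residual connections are assumptions for illustration, not taken from the paper.

      # Hypothetical PyTorch sketches of the Table 3 operators.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class SEBlock(nn.Module):
          """Squeeze-and-Excitation: reweight channels via global pooling + MLP."""
          def __init__(self, channels: int, reduction: int = 4):
              super().__init__()
              self.fc = nn.Sequential(
                  nn.Conv2d(channels, channels // reduction, 1),
                  nn.ReLU(inplace=True),
                  nn.Conv2d(channels // reduction, channels, 1),
                  nn.Sigmoid(),
              )

          def forward(self, x):
              return x * self.fc(F.adaptive_avg_pool2d(x, 1))

      class CBAM(nn.Module):
          """CBAM attention operator: channel attention then spatial attention."""
          def __init__(self, channels: int, reduction: int = 4):
              super().__init__()
              self.mlp = nn.Sequential(
                  nn.Conv2d(channels, channels // reduction, 1),
                  nn.ReLU(inplace=True),
                  nn.Conv2d(channels // reduction, channels, 1),
              )
              self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

          def forward(self, x):
              # Channel attention: shared MLP over avg- and max-pooled descriptors.
              ca = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1))
                                 + self.mlp(F.adaptive_max_pool2d(x, 1)))
              x = x * ca
              # Spatial attention: 7x7 conv over channel-wise mean and max maps.
              s = torch.cat([x.mean(1, keepdim=True),
                             x.max(1, keepdim=True).values], dim=1)
              return x * torch.sigmoid(self.spatial(s))

      class FusedMBConv(nn.Module):
          """FusedMB_conv_3_k: Conv(3×k) -> SE -> Conv(1×1)."""
          def __init__(self, channels: int, k: int = 3):
              super().__init__()
              self.conv = nn.Conv2d(channels, channels, (3, k), padding=(1, k // 2))
              self.bn = nn.BatchNorm2d(channels)
              self.se = SEBlock(channels)
              self.project = nn.Conv2d(channels, channels, 1)

          def forward(self, x):
              out = F.relu(self.bn(self.conv(x)))
              return x + self.project(self.se(out))  # residual link (assumed)

      class MBConv(nn.Module):
          """MB_conv_3_k: Conv(1×1) -> DSepConv(3×k) -> SE -> Conv(1×1)."""
          def __init__(self, channels: int, k: int = 3, expand: int = 2):
              super().__init__()
              hidden = channels * expand
              self.expand = nn.Conv2d(channels, hidden, 1)
              # groups=hidden makes this the depthwise half of DSepConv; the
              # 1×1 projection below completes the separable convolution.
              self.dw = nn.Conv2d(hidden, hidden, (3, k),
                                  padding=(1, k // 2), groups=hidden)
              self.bn = nn.BatchNorm2d(hidden)
              self.se = SEBlock(hidden)
              self.project = nn.Conv2d(hidden, channels, 1)

          def forward(self, x):
              out = F.relu(self.bn(self.dw(self.expand(x))))
              return x + self.project(self.se(out))  # residual link (assumed)

      Under these assumptions, MBConv(channels=32, k=5) would correspond to the MB_conv_3_5 entry of Table 3.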
    • Table 4. Comparison of classification accuracy on the KSC dataset

      No.   | RBF-SVM     | 3D-CNN      | ResNet      | 3D Auto-CNN | HNAS        | EA-NAS
      1     | 92.71±0.68  | 94.07±1.31  | 97.04±2.66  | 92.50±7.14  | 99.43±1.22  | 99.96±0.06
      2     | 84.62±4.02  | 92.12±3.75  | 86.83±7.86  | 96.31±4.82  | 93.58±1.63  | 98.46±3.06
      3     | 73.64±7.94  | 89.73±7.54  | 85.53±8.07  | 94.07±6.10  | 95.29±6.74  | 100±0.00
      4     | 54.33±8.62  | 80.75±8.36  | 78.67±9.02  | 95.92±5.32  | 95.76±2.19  | 96.95±4.43
      5     | 61.46±10.33 | 80.86±4.17  | 67.70±2.14  | 97.04±2.34  | 96.47±4.24  | 96.26±3.57
      6     | 65.51±8.21  | 84.74±7.33  | 93.52±6.31  | 98.88±2.73  | 93.10±7.48  | 99.82±0.13
      7     | 76.52±3.37  | 93.56±10.21 | 95.12±7.70  | 94.54±10.91 | 98.23±0.30  | 99.54±0.91
      8     | 86.20±5.71  | 94.78±1.07  | 95.09±3.48  | 94.53±6.91  | 98.37±1.62  | 99.60±0.52
      9     | 88.71±2.03  | 96.76±1.21  | 97.14±2.21  | 84.77±10.91 | 99.86±0.34  | 100±0.00
      10    | 96.72±4.04  | 98.91±0.23  | 99.51±0.94  | 98.85±1.61  | 99.66±1.00  | 99.40±1.19
      11    | 96.31±1.62  | 99.32±0.76  | 99.81±0.54  | 100±0.00    | 99.71±0.77  | 99.53±0.67
      12    | 93.56±2.49  | 98.12±1.86  | 96.33±2.12  | 97.96±5.25  | 99.18±1.26  | 99.87±0.88
      13    | 99.88±0.02  | 99.64±0.37  | 98.83±1.98  | 99.18±1.99  | 100±0.00    | 100±0.00
      OA/%  | 87.88±0.97  | 94.89±0.25  | 93.72±2.00  | 94.61±1.73  | 97.90±0.87  | 99.50±0.21
      AA/%  | 82.92±2.09  | 92.37±0.48  | 91.63±2.68  | 95.26±0.92  | 96.49±0.26  | 99.20±0.42
      K×100 | 86.27±1.35  | 94.66±0.61  | 93.00±2.23  | 93.98±1.94  | 96.89±0.97  | 99.44±0.23

      (A hedged sketch of how OA, AA, and K×100 are computed follows this table.)
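      Tables 4 and 5 report three standard hyperspectral-classification metrics: overall accuracy (OA), average per-class accuracy (AA), and Cohen's kappa scaled by 100 (K×100). The NumPy sketch below gives the standard definitions for illustration; it is not the paper's own evaluation code.

      # Standard OA, AA, and kappa × 100 from a confusion matrix.
      import numpy as np

      def classification_metrics(y_true, y_pred, n_classes):
          # Confusion matrix: rows are true classes, columns are predictions.
          cm = np.zeros((n_classes, n_classes), dtype=np.int64)
          for t, p in zip(y_true, y_pred):
              cm[t, p] += 1
          total = cm.sum()
          oa = np.trace(cm) / total                    # fraction correct overall
          aa = (np.diag(cm) / cm.sum(axis=1)).mean()   # mean per-class accuracy
          # Chance agreement from the row/column marginals, then Cohen's kappa.
          pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
          kappa = (oa - pe) / (1 - pe)
          return 100 * oa, 100 * aa, 100 * kappa

      oa, aa, k100 = classification_metrics([0, 1, 1, 2], [0, 1, 0, 2], 3)
      print(f"OA={oa:.2f}%  AA={aa:.2f}%  K×100={k100:.2f}")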
    • Table 5. Comparison of classification accuracy on the PU dataset

      No.   | RBF-SVM     | 3D-CNN      | ResNet      | 3D Auto-CNN | HNAS        | EA-NAS
      1     | 81.14±4.98  | 94.94±1.41  | 96.94±1.81  | 92.87±4.48  | 92.10±4.42  | 96.92±1.12
      2     | 83.93±1.31  | 96.97±0.75  | 98.92±1.41  | 98.82±0.79  | 98.79±0.65  | 98.07±1.67
      3     | 57.56±15.11 | 73.81±11.02 | 92.76±4.85  | 91.17±5.98  | 93.76±2.34  | 95.85±4.94
      4     | 94.37±3.19  | 75.44±12.54 | 99.75±0.15  | 91.28±10.16 | 92.73±4.34  | 99.52±0.42
      5     | 95.44±3.10  | 98.73±0.14  | 99.80±0.21  | 93.96±5.94  | 92.16±5.87  | 99.56±0.48
      6     | 81.02±5.91  | 80.32±5.43  | 98.68±1.05  | 99.09±0.94  | 99.10±0.81  | 98.05±0.91
      7     | 69.77±10.94 | 68.62±14.15 | 96.04±4.34  | 88.71±5.92  | 90.94±4.75  | 96.40±1.89
      8     | 71.73±6.01  | 80.91±3.22  | 89.52±5.19  | 89.07±4.78  | 89.83±4.24  | 91.24±2.67
      9     | 99.86±0.05  | 97.23±5.03  | 98.98±0.84  | 85.03±7.22  | 86.92±6.43  | 93.55±1.17
      OA/%  | 82.16±2.01  | 86.85±3.07  | 97.36±0.85  | 94.85±0.89  | 95.11±0.87  | 96.81±0.26
      AA/%  | 78.09±4.57  | 85.77±4.02  | 96.82±0.75  | 92.18±1.64  | 92.89±0.77  | 96.35±0.32
      K×100 | 75.21±4.09  | 84.16±4.06  | 96.49±1.13  | 91.84±4.74  | 93.81±0.77  | 96.03±0.34
    • Table 6. Comparison of parameter count and runtime of different methods on two datasets

      Methods     |              KSC                        |              PU
                  | Params/M | Train time/min | Test time/s | Params/M | Train time/min | Test time/s
      3D Auto-CNN | 0.12     | 19.41          | 7.72        | 0.19     | 11.22          | 5.79
      HNAS        | 2.70     | 31.63          | 8.94        | 2.73     | 15.35          | 8.89
      EA-NAS      | 1.33     | 11.57          | 9.47        | 1.52     | 12.77          | 12.48

      (A hedged sketch of the Params/M computation follows this table.)
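      The Params/M column counts trainable weights in millions. A hypothetical helper along these lines (not from the paper) reproduces that figure for any PyTorch model:

      # Hypothetical helper: trainable parameter count in millions,
      # matching the unit of the "Params/M" column in Table 6.
      import torch.nn as nn

      def params_in_millions(model: nn.Module) -> float:
          return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6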
    • Table 7. Ablation experiment

      Attention | Convolution block | Poly | OA/%  | AA/%  | K×100
                |                   |      | 92.30 | 89.78 | 89.96
                |                   |      | 96.34 | 96.23 | 96.33
                |                   |      | 96.33 | 96.54 | 96.84
                |                   |      | 96.81 | 96.35 | 96.03
    • Table 8. Modular convolutional operator ablation experiment

      SE operator | CBAM operator | FusedMB operator | MB operator | OA/%  | AA/%  | K×100
                  |               |                  |             | 94.61 | 95.53 | 95.47
                  |               |                  |             | 92.30 | 89.78 | 89.96
                  |               |                  |             | 96.33 | 96.54 | 96.84
                  |               |                  |             | 96.81 | 96.35 | 96.03
    Citation

    Haisong CHEN, Kang ZHANG, Haoran LÜ, Aili WANG, Haibin WU. Neural architecture search combined with efficient attention for hyperspectral image classification[J]. Chinese Journal of Liquid Crystals and Displays, 2025, 40(4): 630

    Paper Information

    Received: Aug. 28, 2024

    Accepted: --

    Published Online: May 21, 2025

    The Author Email: Aili WANG (aili925@hrbust.edu.cn)

    DOI: 10.37188/CJLCD.2024-0254
