Optics and Precision Engineering, Volume 31, Issue 17, 2598 (2023)

Lightweight deep global-local knowledge distillation network for hyperspectral image scene classification

Yingxu LIU1, Chunyu PU1, Diankun XU2, Yichuan YANG1 and Hong HUANG1,*
Author Affiliations
  • 1Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
  • 2Major of Measurement and Control Technology and Instruments, College of Optoelectronic Engineering, Chongqing University, Chongqing 400044, China
    Figures & Tables (13)
    Framework of proposed Lightweight Deep Global-local Knowledge Distillation (LDGLKD) network
    Detailed structure of layer l in ViT model
    Overall construction process of OHID-SC dataset
    Examples of scenes in constructed OHID-SC dataset
    Examples and numbers of scenes in HSRS-SC dataset
    OAs with respect to different temperatures
    Effect of KD coefficient α on OAs
    Ablation experiment results
    Confusion matrix results of LDGLKD network
    OAs with different training data percentages for comparison methods
    • Table 1. Steps of LDGLKD algorithm


      Algorithm: Lightweight deep global-local knowledge distillation (LDGLKD)

      Training process:

      Stage 1: train with a small knowledge distillation coefficient α
      Input: training images I_in, temperature T, knowledge distillation coefficient α, teacher model ViT, student model VGG16, teacher model function set f_tea(·), student model function set f_stu(·)
      Output: optimized teacher model and student model
      1: F_tea ← f_tea(I_in)
      2: F_stu ← f_stu(I_in)
      3: L_KD ← T² × KL_div(Q_stu^T, Q_tea^T), where Q_stu^T and Q_tea^T are computed by Eqs. (18) and (19)
      4: L_fir ← α·L_KD + (1−α)·(L_C^tea + L_C^stu)
      5: jointly optimize the teacher model and the student model with L_fir as the loss function

      Stage 2: train with a large knowledge distillation coefficient α
      Input: training images I_in, temperature T, knowledge distillation coefficient α, teacher model ViT, student model VGG16, teacher model function set f_tea(·), student model function set f_stu(·)
      Output: optimized student model
      6: F_tea ← f_tea(I_in)
      7: F_stu ← f_stu(I_in)
      8: L_KD ← T² × KL_div(Q_stu^T, Q_tea^T), where Q_stu^T and Q_tea^T are computed by Eqs. (18) and (19)
      9: L_sec ← 2α·L_KD + (1−2α)·L_C^stu
      10: optimize the student model with L_sec as the loss function and stop training the teacher model

      Testing stage: scene classification
      Input: test images I_in^test, student model VGG16, student model function set f_stu(·)
      Output: predicted labels Y_label
      11: F_stu ← f_stu(I_in^test)
      12: Y_label ← softmax(F_stu)
      13: generate the semantic label Y_label for each test image

      (A code sketch of these loss computations is given after the tables below.)
    • Table 2. Results of comparison algorithms [OA±STD]


      Method | OHID-SC | HSRS-SC
      ResNet101 | 75.86±1.41 | 95.95±0.26
      ResNet18 | 80.34±0.36 | 96.43±0.47
      GoogleNet | 58.17±3.80 | 87.81±0.04
      EfficientNet | 83.81±0.66 | 97.21±0.64
      VGG16 | 81.78±0.50 | 95.13±0.22
      SKAL-R | 55.14±0.42 | 96.08±0.15
      SKAL-V | 59.80±0.43 | 96.01±0.24
      ACRNet-R | 60.39±0.18 | 95.75±0.07
      ACRNet-M | 65.02±0.14 | 97.96±0.41
      LDGLKD | 91.62±0.20 | 97.96±0.04
    • Table 3. Running time of comparison algorithms


      Method | Phase | OHID-SC | HSRS-SC
      ResNet101 | Train | 9 | 52
      ResNet101 | Test | 45 | 126
      ResNet18 | Train | 8 | 49
      ResNet18 | Test | 19 | 121
      GoogleNet | Train | 9 | 50
      GoogleNet | Test | 27 | 123
      EfficientNet | Train | 10 | 51
      EfficientNet | Test | 56 | 125
      VGG16 | Train | 7 | 52
      VGG16 | Test | 15 | 129
      SKAL-R | Train | 5 | 11
      SKAL-R | Test | 14 | 34
      SKAL-V | Train | 5 | 15
      SKAL-V | Test | 15 | 36
      ACRNet-R | Train | 2 | 4
      ACRNet-R | Test | 5 | 13
      ACRNet-M | Train | 2 | 5
      ACRNet-M | Test | 5 | 14
      LDGLKD | Train | 8 | 43
      LDGLKD | Test | 13 | 108
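Since Table 1 describes the two-stage distillation only in prose, the following PyTorch-style sketch shows how its loss terms could be assembled. It is a minimal illustration under stated assumptions, not the authors' released code: the function names are placeholders, the KL direction follows the common Hinton-style formulation, and the temperature-softened softmax stands in for Eqs. (18) and (19) of the paper.

```python
# Minimal sketch of the LDGLKD loss terms from Table 1 (assumed PyTorch API;
# teacher = ViT, student = VGG16, both returning class logits).
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T):
    # Steps 3 and 8: L_KD = T^2 * KL_div(Q_stu^T, Q_tea^T).
    # Q^T taken as temperature-softened softmax outputs (assumed form of Eqs. 18-19).
    q_stu = F.log_softmax(student_logits / T, dim=1)
    q_tea = F.softmax(teacher_logits / T, dim=1)
    return (T ** 2) * F.kl_div(q_stu, q_tea, reduction="batchmean")

def stage1_loss(student_logits, teacher_logits, labels, T, alpha):
    # Step 4: L_fir = α·L_KD + (1-α)·(L_C^tea + L_C^stu);
    # teacher and student are optimized jointly with this loss (step 5).
    l_kd = kd_loss(student_logits, teacher_logits, T)
    l_ce = F.cross_entropy(teacher_logits, labels) + F.cross_entropy(student_logits, labels)
    return alpha * l_kd + (1 - alpha) * l_ce

def stage2_loss(student_logits, teacher_logits, labels, T, alpha):
    # Step 9: L_sec = 2α·L_KD + (1-2α)·L_C^stu; the teacher is no longer
    # trained (step 10), so its logits are detached from the graph.
    l_kd = kd_loss(student_logits, teacher_logits.detach(), T)
    return 2 * alpha * l_kd + (1 - 2 * alpha) * F.cross_entropy(student_logits, labels)

def predict(student, images):
    # Testing stage (steps 11-13): softmax over student logits, then argmax label.
    with torch.no_grad():
        return torch.softmax(student(images), dim=1).argmax(dim=1)
```

The two-stage schedule mirrors Table 1: a small α in stage 1 keeps the cross-entropy terms dominant so both models first fit the labels, while the larger α in stage 2 shifts the weight toward L_KD so the frozen teacher's soft predictions guide the lightweight student.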
    Citation
    Yingxu LIU, Chunyu PU, Diankun XU, Yichuan YANG, Hong HUANG. Lightweight deep global-local knowledge distillation network for hyperspectral image scene classification[J]. Optics and Precision Engineering, 2023, 31(17): 2598

    Paper Information

    Category: Information Sciences

    Received: Jan. 3, 2023

    Accepted: --

    Published Online: Oct. 9, 2023

    Author Email: Hong HUANG (hhuang@cqu.edu.cn)

    DOI: 10.37188/OPE.20233117.2598
