Laser & Optoelectronics Progress, Volume 60, Issue 22, 2228006 (2023)

Classification Based on Hyperspectral Image and LiDAR Data with Contrastive Learning

Shihan Li1,2,3,4, Haiyang Hua1,2,*, and Hao Zhang1,2,3,4
Author Affiliations
  • 1Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, Liaoning, China
  • 2Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, Liaoning, China
  • 3Institute for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, Liaoning, China
  • 4University of Chinese Academy of Sciences, Beijing 100049, China
    Figures & Tables (24)
    Framework of the proposed method. (a) Contrastive learning; (b) fine-tuning
    Convolutional network module
    DenseNet block
    Transformer Encoder block
    Multi-head attention mechanism
    Houston 2013 dataset. (a) HSI data; (b) LiDAR data; (c) ground-truth label
    Trento dataset. (a) HSI data; (b) LiDAR data; (c) ground-truth label
    Classification results on the Houston 2013 dataset
    Classification results on the Trento dataset
    Classification results on the Houston 2013 dataset when u=5. (a) EndNet; (b) CCRNet; (c) two-branch CNN; (d) MAHiDFNet; (e) ours; (f) ground-truth label
    Classification results on the Houston 2013 dataset when u=10. (a) EndNet; (b) CCRNet; (c) two-branch CNN; (d) MAHiDFNet; (e) ours; (f) ground-truth label
    Classification results on the Houston 2013 dataset when u=15. (a) EndNet; (b) CCRNet; (c) two-branch CNN; (d) MAHiDFNet; (e) ours; (f) ground-truth label
    Classification results on the Trento dataset when u=2. (a) EndNet; (b) CCRNet; (c) two-branch CNN; (d) MAHiDFNet; (e) ours; (f) ground-truth label
    Classification results on the Trento dataset when u=3. (a) EndNet; (b) CCRNet; (c) two-branch CNN; (d) MAHiDFNet; (e) ours; (f) ground-truth label
    Classification results on the Trento dataset when u=4. (a) EndNet; (b) CCRNet; (c) two-branch CNN; (d) MAHiDFNet; (e) ours; (f) ground-truth label
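The figure list above references a Transformer encoder block built on a multi-head attention mechanism. As a minimal illustrative sketch only (not the paper's actual implementation; the head count, dimensions, and weight initialization here are assumptions), scaled dot-product multi-head attention can be written in NumPy as:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Self-attention over num_heads parallel subspaces.

    x: (seq_len, d_model); each weight matrix is (d_model, d_model).
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Project and split into heads: (num_heads, seq_len, d_head)
    def split(z):
        return z.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ w_q), split(x @ w_k), split(x @ w_v)

    # Per-head attention weights, scaled by sqrt(d_head)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)

    # Weighted sum of values, heads concatenated, then output projection
    out = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o

rng = np.random.default_rng(0)
d_model, seq_len, heads = 8, 5, 2
w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) * 0.1
                      for _ in range(4))
x = rng.standard_normal((seq_len, d_model))
y = multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads=heads)
print(y.shape)  # (5, 8): output keeps the input's sequence/feature shape
```

Each head attends in a lower-dimensional subspace (d_head = d_model / num_heads), and the concatenated head outputs are mixed by the final projection w_o.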
    • Table 1. Overall classification accuracy on the Houston 2013 dataset

      Algorithm      | u=5   | u=10  | u=15  | u=20  | u=25  | u=50  | u=100
      EndNet         | 54.38 | 68.94 | 74.12 | 77.33 | 80.66 | 85.54 | 91.36
      CCRNet         | 47.97 | 65.87 | 77.95 | 82.28 | 84.38 | 93.24 | 94.25
      two-branch CNN | 57.74 | 75.25 | 77.60 | 81.60 | 84.58 | 89.89 | 93.80
      MAHiDFNet      | 55.12 | 75.00 | 82.93 | 86.08 | 85.67 | 89.95 | 91.21
      ours           | 78.47 | 88.87 | 90.60 | 91.36 | 93.75 | 95.26 | 96.77
    • Table 2. Overall classification accuracy on the Trento dataset

      Algorithm      | u=2   | u=3   | u=4   | u=5   | u=6   | u=9   | u=12
      EndNet         | 58.70 | 73.02 | 81.11 | 76.63 | 79.68 | 85.29 | 88.20
      CCRNet         | 67.80 | 72.00 | 70.50 | 78.90 | 83.64 | 86.24 | 93.94
      two-branch CNN | 78.06 | 89.21 | 90.73 | 94.63 | 96.34 | 96.72 | 95.48
      MAHiDFNet      | 79.77 | 84.96 | 92.80 | 95.87 | 98.13 | 99.09 | 98.98
      ours           | 88.12 | 97.12 | 97.19 | 97.54 | 98.30 | 99.09 | 99.23
    • Table 3. Average time per training epoch

      Dataset      | EndNet | CCRNet | two-branch CNN | MAHiDFNet | contrastive learning | fine-tuning | ours
      Houston 2013 | 0.13   | 0.40   | 36.45          | 6.80      | 3.82                 | 0.44        | 3.37
      Trento       | 0.20   | 0.20   | 26.28          | 2.06      | 0.94                 | 0.39        | 0.72
    • Table 4. Overall classification accuracy of contrastive learning ablation experiments on the Houston 2013 dataset

      Condition                    | u=5   | u=10  | u=15  | u=20  | u=25  | u=50  | u=100
      without contrastive learning | 63.62 | 72.69 | 81.58 | 85.14 | 85.71 | 93.13 | 93.41
      with contrastive learning    | 78.47 | 88.87 | 90.60 | 91.36 | 93.75 | 95.26 | 96.77
    • Table 5. Overall classification accuracy of contrastive learning ablation experiments on the Trento dataset

      Condition                    | u=2   | u=3   | u=4   | u=5   | u=6   | u=9   | u=12
      without contrastive learning | 83.53 | 93.36 | 92.81 | 95.54 | 95.38 | 98.22 | 98.45
      with contrastive learning    | 88.12 | 97.12 | 97.19 | 97.54 | 98.30 | 99.09 | 99.23
    • Table 6. Overall classification accuracy on the Houston 2013 dataset with different data modalities

      Data modality | u=5   | u=10  | u=15  | u=20  | u=25  | u=50  | u=100
      HSI           | 59.79 | 72.77 | 78.63 | 82.81 | 84.10 | 90.63 | 93.95
      LiDAR         | 45.42 | 58.07 | 62.95 | 67.38 | 70.32 | 78.52 | 85.22
      HSI + LiDAR   | 78.47 | 88.87 | 90.60 | 91.36 | 93.75 | 95.26 | 96.77
    • Table 7. Overall classification accuracy on the Trento dataset with different data modalities

      Data modality | u=2   | u=3   | u=4   | u=5   | u=6   | u=9   | u=12
      HSI           | 91.39 | 85.97 | 87.62 | 92.29 | 91.86 | 94.44 | 95.42
      LiDAR         | 78.69 | 81.63 | 85.35 | 85.83 | 83.65 | 87.17 | 87.21
      HSI + LiDAR   | 88.12 | 97.12 | 97.19 | 97.54 | 98.30 | 99.09 | 99.23
    • Table 8. Overall classification accuracy of different sample sizes

      Dataset      | s=8   | s=9   | s=10  | s=11  | s=12  | s=13  | s=14  | s=15
      Houston 2013 | 87.97 | 89.89 | 88.34 | 90.60 | 90.72 | 93.13 | 91.14 | 94.10
      Trento       | 97.74 | 97.28 | 98.06 | 97.54 | 97.49 | 97.47 | 97.73 | 97.00
    • Table 9. Overall classification accuracy of different reduction dimensions

      Dataset      | k=1   | k=3   | k=5   | k=7   | k=9   | k=11
      Houston 2013 | 74.94 | 87.44 | 91.05 | 90.60 | 92.22 | 93.03
      Trento       | 96.25 | 96.45 | 97.59 | 97.54 | 96.93 | 97.39
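    All of the accuracy tables above report overall accuracy (OA). As a minimal sketch (not the authors' evaluation code), OA is simply the fraction of labeled samples whose predicted class matches the ground truth:

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Fraction of labeled samples classified correctly (OA)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return (y_true == y_pred).mean()

# Toy example: 3 of 4 labels match
print(overall_accuracy([1, 2, 2, 3], [1, 2, 3, 3]))  # 0.75
```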
    Shihan Li, Haiyang Hua, Hao Zhang. Classification Based on Hyperspectral Image and LiDAR Data with Contrastive Learning[J]. Laser & Optoelectronics Progress, 2023, 60(22): 2228006

    Paper Information

    Category: Remote Sensing and Sensors

    Received: Jan. 30, 2023

    Accepted: Mar. 13, 2023

    Published Online: Nov. 6, 2023

    The Author Email: Haiyang Hua (c3i11@sia.cn)

    DOI: 10.3788/LOP230540
