Laser & Optoelectronics Progress, Vol. 60, Issue 2, 0210004 (2023)

Tree Species Recognition Using Combined Attention and ResNet for Unmanned Aerial Vehicle Images

Zhiyang Xu1,2,3, Qiao Chen1,2,*, and Yongfu Chen1,2
Author Affiliations
  • 1Research Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing 100091, China
  • 2Key Laboratory of Forestry Remote Sensing and Information System, National Forestry and Grassland Administration, Beijing 100091, China
  • 3East China Inventory and Planning Institute, National Forestry and Grassland Administration, Hangzhou 310019, Zhejiang, China
    Figures & Tables (8)

    • ECA mechanism block (an illustrative sketch of a standard ECA block follows the tables below)
    • Basic unit of ECA-ResNet
    • Overall structure of proposed network
    • Results of single tree crown segmentation and recognition in five test circular samples

    • Table 1. Training and independent test results of the model on single-tree crown images with different patch sizes

      Scheme              Optimizer  Training data                 Validation data               Independent test data
                                     Accuracy /%  Kappa   Loss     Accuracy /%  Kappa   Loss     Accuracy /%  Kappa
      I (variable size)   SGD        97.98        0.9638  0.0973   95.73        0.9449  0.1929   77.27        0.7056
      II (32×32 pixel)    SGD        93.25        0.9131  0.2711   86.75        0.8293  0.4920   68.18        0.5966
      III (64×64 pixel)   SGD        98.98        0.9869  0.1012   96.60        0.9595  0.1678   85.61        0.8140
      IV (96×96 pixel)    SGD        97.98        0.9769  0.0553   95.14        0.9388  0.1838   81.82        0.7629
      V (128×128 pixel)   Adam       95.19        0.9381  0.2000   96.15        0.9505  0.1453   79.55        0.7332

    • Table 2. Confusion matrix of the independent test dataset

      Recognized species    Ground truth species
                            Alnus   Other broad-leaves   Cunninghamia   Liriodendron   Pinus
      Alnus                 26      1                    0              0              0
      Other broad-leaves    0       18                   4              2              2
      Cunninghamia          1       3                    38             1              1
      Liriodendron          0       0                    1              15             0
      Pinus                 2       0                    1              0              16
      PA /%                 89.66   81.82                86.36          83.33          84.21
      UA /%                 96.30   69.23                86.36          93.75          84.21
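
The last two rows of Table 2 are the producer's accuracy (PA, computed down each ground-truth column) and the user's accuracy (UA, computed along each recognized-species row). Together with the independent-test overall accuracy of 85.61% and Kappa of 0.8140 reported in Tables 1 and 3 for the 64×64 pixel scheme, they follow directly from this confusion matrix. A minimal NumPy check (illustrative only, not the authors' code):

```python
import numpy as np

# Confusion matrix from Table 2 (rows: recognized species, columns: ground-truth species);
# class order: Alnus, other broad-leaves, Cunninghamia, Liriodendron, Pinus
cm = np.array([[26,  1,  0,  0,  0],
               [ 0, 18,  4,  2,  2],
               [ 1,  3, 38,  1,  1],
               [ 0,  0,  1, 15,  0],
               [ 2,  0,  1,  0, 16]])

n = cm.sum()
diag = np.diag(cm)
pa = diag / cm.sum(axis=0)                    # producer's accuracy per ground-truth class
ua = diag / cm.sum(axis=1)                    # user's accuracy per recognized class
oa = diag.sum() / n                           # overall accuracy
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (oa - pe) / (1 - pe)                  # Cohen's kappa

print(np.round(100 * pa, 2))                  # [89.66 81.82 86.36 83.33 84.21]
print(np.round(100 * ua, 2))                  # [96.3  69.23 86.36 93.75 84.21]
print(f"OA = {100 * oa:.2f} %, Kappa = {kappa:.4f}")   # OA = 85.61 %, Kappa = 0.8140
```
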
    • Table 3. Performance comparison of different models (single-tree crown image clip dataset, 64×64 pixel)

      Network            Training data                 Validation data               Independent test data
                         Accuracy /%  Kappa   Loss     Accuracy /%  Kappa   Loss     Accuracy /%  Kappa
      VGG16              95.38        0.9405  0.1783   94.23        0.9258  0.2090   75.00        0.6804
      ResNet18           94.36        0.9251  0.1529   93.80        0.9203  0.3176   77.27        0.7036
      ResNet34           95.84        0.9464  0.2126   93.80        0.9201  0.2722   79.55        0.7356
      ResNet50           96.35        0.9442  0.1505   94.80        0.9301  0.2024   80.30        0.7452
      ResNet101          96.69        0.9602  0.1278   94.44        0.9284  0.2348   69.70        0.6130
      ResNet152          96.41        0.9567  0.1229   94.87        0.9339  0.2264   65.91        0.5641
      resnext50_32x4d    96.77        0.9583  0.0999   94.44        0.9285  0.1845   72.73        0.6501
      densenet121        96.40        0.9536  0.1544   93.16        0.9119  0.2911   73.48        0.6557
      MobileNetV2        86.41        0.8254  0.4771   88.89        0.8571  0.3599   65.91        0.5555
      SqueezeNet         84.01        0.7939  0.5726   86.75        0.8286  0.4139   68.94        0.5948
      ECA-ResNet         98.98        0.9869  0.1012   96.60        0.9595  0.1678   85.61        0.8140

    • Table 4. Performance comparison of the CNN model before and after improvement

      Scheme             Operation          ECA   Training accuracy /%   Validation accuracy /%   Test accuracy /%   FLOPs /10⁹   Parameters   Speed /(frame·s⁻¹)
      I (ResNet50)       Before reduction   ×     96.35                  94.80                    80.30              3.827        23518277     4.59
      II                 Before reduction   √     96.57                  94.51                    83.09              3.832        23518325     4.99
      III (ECA-ResNet)   After reduction    √     98.98                  96.60                    85.61              3.015        19886697     5.45
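
The figures "ECA mechanism block" and "Basic unit of ECA-ResNet" listed above refer to the efficient channel attention (ECA) module that the proposed network combines with ResNet residual units. As a rough illustration only, and not the authors' implementation, the following is a minimal PyTorch sketch of a standard ECA block in the ECA-Net style; the adaptive kernel-size rule and its constants (gamma = 2, b = 1) are assumptions.

```python
import math
import torch
import torch.nn as nn


class ECABlock(nn.Module):
    """Standard efficient channel attention (ECA) block: global average
    pooling, a 1D convolution across the channel dimension, and a sigmoid
    gate that re-weights the input feature map channel by channel."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive kernel size from the channel count (assumed gamma=2, b=1),
        # forced to be odd (e.g. k = 5 for C = 256).
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> channel descriptor of shape (N, C, 1, 1)
        y = self.pool(x)
        # Treat the C channels as a length-C sequence, shape (N, 1, C), and convolve
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        # Back to (N, C, 1, 1) and use it to re-weight the input channels
        y = self.gate(y.transpose(-1, -2).unsqueeze(-1))
        return x * y


# Example: re-weight a feature map from one ResNet stage (shape is preserved)
feat = torch.randn(2, 256, 16, 16)
out = ECABlock(256)(feat)      # out.shape == (2, 256, 16, 16)
```

In the standard ECA-Net design such a block is applied to the output of the residual branch before the shortcut addition. Note that the 48-parameter difference between Schemes I and II in Table 4 (23518325 - 23518277) equals sixteen length-3 1D kernels, one per ResNet50 residual block, which hints at a fixed kernel size of 3 rather than the adaptive rule sketched above.

The parameter counts in Table 4 can also be sanity-checked: Scheme I matches a stock torchvision ResNet50 whose 1000-class classifier is replaced by a 5-class head (assuming torchvision >= 0.13 for the weights=None argument).

```python
import torch.nn as nn
from torchvision import models

# A torchvision ResNet50 with its classifier replaced by a 5-class head has
# 23518277 trainable parameters, matching Scheme I in Table 4.
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 5)       # five tree-species classes
print(sum(p.numel() for p in model.parameters()))   # -> 23518277
```
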
    Citation: Zhiyang Xu, Qiao Chen, Yongfu Chen. Tree Species Recognition Using Combined Attention and ResNet for Unmanned Aerial Vehicle Images[J]. Laser & Optoelectronics Progress, 2023, 60(2): 0210004

    Paper Information

    Category: Image Processing

    Received: Sep. 14, 2021

    Accepted: Nov. 10, 2021

    Published Online: Jan. 6, 2023

    Corresponding author: Qiao Chen (Chengqiqo@163.com)

    DOI: 10.3788/LOP212527
