Chinese Optics, Volume 17, Issue 3, 538 (2024)

Indistinguishable points attention-aware network for infrared small object detection

Bo-xiao WANG1, Yan-song SONG1,2,*, and Xiao-na DONG1,2
Author Affiliations
  • 1The School of Electro-Optical Engineering, Changchun University of Science and Technology, Changchun 130000, China
  • 2Institute of Space Photoelectronic Technology, Changchun University of Science and Technology, Changchun 130000, China
    Figures & Tables (14)
    • Figure 1. Indistinguishable points attention-aware network architecture
    • Figure 2. (a) Centre point offset to boundary; (b) point-based region proposal module
    • Figure 3. ROC curves of the compared methods on (a) the NUDT-SIRST dataset and (b) the IRDST dataset
    • Figure 4. Detection results of the compared methods in different scenes, with the same target zoomed in within the same color box
    • Figure 5. 3D visualization of the detection results of each method
    • Figure 6. Visualization of region proposals. (a) Original images; (b) ground truths; (c) centroid heatmap and region proposal boundaries
    • Figure 7. Visualization of the indistinguishable points distribution. (a) UAV target; (b) point target; (c) aircraft target
    • Table 1. Hyperparameter settings of traditional algorithms

      Algorithm | Hyperparameter settings
      Top-hat   | Nhood = ones(5)
      LEF       | h = 0.2, α = 0.5, P = 9
      AADCDD    | inner window sizes = {3, 5, 7, 9}, outer window size = 19
      TLLCM     | window sizes = {3, 5, 7, 9}, k = 9
      (A minimal sketch of the Top-hat baseline appears after this table list.)
    • Table 2. Comparison of quantitative results of different methods on the NUDT-SIRST and IRDST datasets

      Detection algorithm | NUDT-SIRST mAP/% | NUDT-SIRST F (Pre, Rec) | IRDST mAP/% | IRDST F (Pre, Rec)
      Top-hat         | 1.5  | 0.3599 (0.2850, 0.4884) | 0.7  | 0.0088 (0.0045, 0.4107)
      LEF             | 6.4  | 0.1151 (0.0748, 0.2498) | 2.5  | 0.1219 (0.0686, 0.5470)
      AADCDD          | 1.6  | 0.1490 (0.3838, 0.0924) | 1.4  | 0.0705 (0.0521, 0.1090)
      TLLCM           | 16.5 | 0.0724 (0.0479, 0.1476) | 6.1  | 0.1881 (0.1254, 0.3759)
      ALCNet          | 69.3 | 0.7595 (0.7035, 0.8251) | 46.5 | 0.5929 (0.5461, 0.6486)
      DNANet          | 86.9 | 0.8645 (0.9070, 0.8259) | 62.1 | 0.6697 (0.7124, 0.6319)
      RDIAN           | 82.4 | 0.8900 (0.8990, 0.8811) | 60.0 | 0.7102 (0.7092, 0.7113)
      Proposed method | 87.4 | 0.8935 (0.8923, 0.8948) | 63.4 | 0.7056 (0.7183, 0.6935)
      (The F value used here is illustrated after this table list.)
    • Table 3. Average inference time per image for deep-learning methods (s)

      Detection algorithm | NUDT-SIRST | IRDST
      ALCNet          | 0.104 | 0.166
      DNANet          | 0.089 | 0.259
      RDIAN           | 0.065 | 0.114
      Proposed method | 0.099 | 0.121
    • Table 4. Comparison of different region proposal modules

      Number of proposals | Point-based proposal mAP/% | Point-based proposal F | RPN mAP/% | RPN F
      1000 | 87.9 | 0.8927 | 86.2 | 0.8425
      256  | 87.5 | 0.8962 | 85.8 | 0.8412
      128  | 87.4 | 0.8935 | 85.2 | 0.8406
      64   | 86.0 | 0.8901 | 84.5 | 0.8397
    • Table 5. Detection results of different point selection strategies

      Point selection strategy | mAP/%
      Uniform point selection  | 86.7
      k = 1, γ = 0.00          | 86.9
      k = 3, γ = 0.75          | 87.4
      k = 10, γ = 1.00         | 85.8
    • Table 6. Fusion results of different features at indistinguishable points

      Fine-grained features | Coarse mask | Position embedding | mAP/%
      85.5
      85.8
      87.4
    • Table 7. Results of different refinement strategies

      Refinement scheme                       | mAP/%
      CNN (16×16)                             | 85.5
      MLP (16×16)                             | 86.2
      Mask boundary refinement module (S = 3) | 87.4
      Mask boundary refinement module (S = 6) | 87.6
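    For reference, the Top-hat entry in Table 1 (Nhood = ones(5)) corresponds to white top-hat filtering with a 5×5 all-ones structuring element. The sketch below is a minimal illustration of that baseline under an OpenCV/NumPy assumption; the function name, threshold value and file path are illustrative and not taken from the paper.

```python
import cv2
import numpy as np

def tophat_detect(img_gray, thresh=0.3):
    """White top-hat filtering with a 5x5 all-ones structuring element
    (Nhood = ones(5) in Table 1), followed by a simple fixed threshold."""
    nhood = np.ones((5, 5), np.uint8)                        # structuring element
    response = cv2.morphologyEx(img_gray, cv2.MORPH_TOPHAT, nhood)
    resp = response.astype(np.float32) / max(float(response.max()), 1.0)
    return (resp > thresh).astype(np.uint8)                  # binary detection mask

# Usage (hypothetical path):
# img = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
# mask = tophat_detect(img)
```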
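    The F value reported alongside (Pre, Rec) in Tables 2 and 4 is the harmonic mean of precision and recall, F = 2·Pre·Rec/(Pre + Rec). The short check below reproduces the proposed method's NUDT-SIRST entry in Table 2; the helper name f_score is an assumption for illustration.

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (the F value in Tables 2 and 4)."""
    return 2 * precision * recall / (precision + recall)

# Reproduces the proposed method's NUDT-SIRST entry in Table 2:
print(round(f_score(0.8923, 0.8948), 4))  # -> 0.8935
```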
    Get Citation

    Bo-xiao WANG, Yan-song SONG, Xiao-na DONG. Indistinguishable points attention-aware network for infrared small object detection[J]. Chinese Optics, 2024, 17(3): 538

    Paper Information

    Category: Original Article

    Received: Oct. 11, 2023

    Accepted: Dec. 5, 2023

    Published Online: Jul. 31, 2024

    DOI: 10.37188/CO.2023-0178
