Chinese Journal of Lasers, Volume. 52, Issue 2, 0204004(2025)

Sub‐Pixel Level Self‐Supervised Convolutional Neural Network for Rapid Speckle Image Matching

Lin Li, Peng Wang*, Yue Li, Haotian Wang, Luhua Fu, and Changku Sun
Author Affiliations
  • State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin 300372, China
    Figures & Tables (20)
    Schematic diagram of the binocular measurement system based on laser speckle projector
    Convolutional neural network architecture
    Gray distribution pattern of speckle feature points
    Main modules in the backbone. (a) Main downsampling modules; (b) main upsampling modules
    Dynamic depth separable convolution module
    Dynamic convolution block
    Synthetic speckle dataset. (a) Bright condition; (b) dark condition
    Real training dataset. (a) Bright and positive angle-of-view; (b) dark and positive angle-of-view; (c) bright and side angle-of-view; (d) bright and positive angle-of-view with shift change; (e) bright and positive angle-of-view with rotation transformation; (f) bright and positive angle-of-view with rotation and scaling transformation
    Experimental system
    Training process. (a) Changes in loss; (b) changes in precision; (c) changes in recall
    Speckle feature point extraction and matching results at each training stage. (a)(b) Pre-trained model in dark and bright conditions; (c)(d) model after one round of training in dark and bright conditions; (e)(f) model after two rounds of training in dark and bright conditions
    Matching results of each model on the real speckle dataset. The top three rows show dark environmental conditions and the bottom two rows show bright environmental conditions; each row shows the helmet from a different view
    Measurement experiment of ladder blocks. (a) Collected image by left camera; (b) collected image by right camera; (c) single frame reconstruction of point-cloud of ladder block; (d) reconstruction results of ladder block at different positions
    Measurement experiment of marble plane. (a) Collected image by left camera; (b) collected image by right camera
    Measurement experiment of helmet. (a) Collected image by left camera; (b) collected image by right camera; (c) reconstruction results
    Measurement experiment of workbench. (a) Handheld measurement system; (b) collected image by left camera; (c) collected image by right camera; (d) reconstruction results
    Comparison between standard convolution and dynamic convolution. (a) Rotation of 90°; (b) rotation of 180°; (c) rotation of 270°
    • Table 1. Performance evaluation of training models (error threshold ϵ=2 pixels). Repeatability and MLE are detector metrics; NN mAP and matching score are descriptor metrics

      Training model                     | Repeatability | MLE    | NN mAP | Matching score
      Model after pre-training           | 0.5438        | 0.8873 | 0.4357 | 0.2613
      Model after one round of training  | 0.6821        | 0.7641 | 0.5584 | 0.3265
      Model after two rounds of training | 0.7661        | 0.6822 | 0.6137 | 0.3748
    • Table 2. Comparison of different speckle feature point detection methods (error threshold ϵ=2 pixels)

      Method    | Matching time /s | Total number | Correct number | Mean accuracy /%
      SIFT      | 0.249            | 81           | 16             | 19.75
      ORB       | 0.048            | 140          | 0              | 0
      D2-Net    | 4.782            | 57           | 8              | 14.04
      DISK      | 0.011            | 216          | 2              | 77.87
      ALIKE     | 0.141            | 62           | 9              | 14.52
      GlueStick | 0.213            | 72           | 0              | 0
      Ours      | 0.046            | 573          | 532            | 92.84
    • Table 3. Performance evaluation of different ablation models (error threshold ϵ=2 pixels). Repeatability and MLE are detector metrics; NN mAP and matching score are descriptor metrics

      Training model | Repeatability | MLE    | NN mAP | Matching score
      Model 1        | 0.7661        | 0.6822 | 0.6137 | 0.3748
      Model 2        | 0.6674        | 0.7833 | 0.5681 | 0.3321
      Model 3        | 0.6613        | 0.7796 | 0.4463 | 0.2339
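    The detector metrics in Tables 1 and 3 are not defined on this page. For context, repeatability at an error threshold ϵ is commonly computed as the fraction of keypoints detected in one view that reappear within ϵ pixels in the other view after both sets are brought into a common frame. A minimal sketch under that standard definition; the function name and the symmetric averaging over both views are illustrative assumptions, not the authors' exact protocol:

    ```python
    import numpy as np

    def repeatability(kpts_a, kpts_b_warped, eps=2.0):
        """Fraction of keypoints with a counterpart within eps pixels.

        kpts_a: (N, 2) keypoint positions (x, y) in image A.
        kpts_b_warped: (M, 2) keypoints of image B warped into A's frame.
        Averages the one-sided scores of both views (assumed convention).
        """
        if len(kpts_a) == 0 or len(kpts_b_warped) == 0:
            return 0.0
        # Pairwise Euclidean distances between the two keypoint sets
        d = np.linalg.norm(kpts_a[:, None, :] - kpts_b_warped[None, :, :], axis=2)
        # A keypoint "repeats" if its nearest neighbour lies within eps pixels
        rep_a = (d.min(axis=1) <= eps).mean()
        rep_b = (d.min(axis=0) <= eps).mean()
        return float((rep_a + rep_b) / 2)
    ```

    With ϵ=2 pixels, as in the tables, a point 1 pixel from its counterpart counts as repeated, while one 13 pixels away does not.
    
    
    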
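    Table 2's mean accuracy is the proportion of correct matches among the reported matches (e.g. 532/573 ≈ 92.84% for the proposed method). A sketch of how such a score is typically produced with mutual nearest-neighbour descriptor matching and a ground-truth warp; the function names, the unit-normalised descriptors, and the identity warp in the test are illustrative assumptions rather than the paper's implementation:

    ```python
    import numpy as np

    def mutual_nn_match(desc_a, desc_b):
        """Index pairs (i, j) where i and j are each other's nearest
        neighbour in descriptor space (L2-normalised descriptors assumed)."""
        sim = desc_a @ desc_b.T              # cosine similarity matrix
        nn_ab = sim.argmax(axis=1)           # best match in B for each A
        nn_ba = sim.argmax(axis=0)           # best match in A for each B
        keep = nn_ba[nn_ab] == np.arange(len(desc_a))
        return np.stack([np.nonzero(keep)[0], nn_ab[keep]], axis=1)

    def mean_accuracy(pts_a, pts_b, matches, warp, eps=2.0):
        """Percentage of matches whose warped left-image point lands
        within eps pixels of its matched right-image point."""
        if len(matches) == 0:
            return 0.0
        err = np.linalg.norm(warp(pts_a[matches[:, 0]]) - pts_b[matches[:, 1]], axis=1)
        return float(100 * (err <= eps).mean())
    ```

    The mutual (cross-check) constraint discards one-sided nearest neighbours, which is a common way to arrive at the "total number" of matches before counting how many fall within the ϵ=2 pixel threshold.
    
    
    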
    Citation: Lin Li, Peng Wang, Yue Li, Haotian Wang, Luhua Fu, Changku Sun. Sub‐Pixel Level Self‐Supervised Convolutional Neural Network for Rapid Speckle Image Matching[J]. Chinese Journal of Lasers, 2025, 52(2): 0204004
    Paper Information

    Category: Measurement and metrology

    Received: Jun. 17, 2024

    Accepted: Aug. 1, 2024

    Published Online: Jan. 20, 2025

    Corresponding author: Peng Wang (wang_peng@tju.edu.cn)

    DOI:10.3788/CJL240981

    CSTR:32183.14.CJL240981