Acta Optica Sinica, Volume. 45, Issue 16, 1610003(2025)
Displacement Field Solving Method for Speckle Images Based on Self-Supervised Learning
The advantages of low cost, non-contact operation, and full-field measurement have contributed to the widespread adoption of the digital image correlation (DIC) technique across various fields, including materials science, biomechanics, and structural engineering. In DIC applications, solving the displacement field of speckle images is a fundamental step, and its accuracy directly influences the quality of subsequent analyses of structural deformation behavior. Although the theoretical framework of traditional methods is mature, the inherent parameter selection problem remains unresolved: the accuracy and stability of displacement field solutions are affected by factors such as subset size, shape function, and optimization algorithm, and these selections typically depend on prior knowledge of the deformation. Recent advances in deep learning-based optical flow estimation have sparked interest in applying deep learning to speckle image displacement field solving. However, these approaches require extensive labeled datasets for network pre-training to establish the mapping between speckle images and displacement fields. Such labeled datasets, primarily generated by simulating specific displacement fields, offer limited generalization capability and struggle to deliver satisfactory results in practical applications; moreover, the pre-training process demands substantial computational resources. This study therefore introduces a self-supervised learning concept and proposes an adaptive displacement field solving method for speckle images that addresses the parameter selection issue and eliminates the dependence on pre-training with large-scale labeled datasets.
The one-to-one correspondence between pixel coordinates in the deformed and reference subsets assumed by traditional solution methods (Fig. 1) provides valuable insight. This paper presents a self-supervised learning framework for speckle images (Fig. 2). The process begins with delineating a region of interest (ROI) on the reference image and collecting all pixel coordinates within the ROI. An artificial neural network characterizes the mapping between ROI pixel coordinates and the corresponding coordinates in the deformed image, taking ROI pixel coordinates as input and the corresponding displacements as output. The reference image is then warped according to the displacement field output by the network. To handle experimental illumination variations, the framework incorporates a gray-scale linear mapping module that assigns two learnable parameters to each warped reference pixel for adaptive correction of illumination-induced gray-scale changes. A relative shape loss function, which is insensitive to linear gray-scale changes, is used alongside the absolute gray-scale loss to provide illumination-robust self-supervision. The learnable network parameters are updated iteratively through back-propagation to obtain the optimal ROI displacement field that minimizes the loss function.
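A minimal PyTorch sketch of this kind of framework is given below. It assumes an MLP coordinate network, bilinear sampling of the deformed image at the displaced coordinates (which realizes the same pixelwise comparison f(x) versus g(x+u) as warping the reference), per-pixel gain/offset parameters for the gray-scale linear mapping, and a zero-normalized term as a stand-in for the relative shape loss. All names, hyperparameters, and loss definitions here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementMLP(nn.Module):
    """Maps ROI pixel coordinates (x, y) to displacements (u, v)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, coords):        # coords: (N, 2), normalized to [-1, 1]
        return self.net(coords)       # (N, 2) displacements in the same normalized units

def sample_gray(img, coords):
    """Bilinearly sample gray levels of img (1, 1, H, W) at (N, 2) normalized coords."""
    grid = coords.view(1, 1, -1, 2)                                # (1, 1, N, 2), (x, y) order
    return F.grid_sample(img, grid, align_corners=True).view(-1)   # (N,)

def relative_shape_loss(a, b, eps=1e-6):
    """Zero-normalized comparison, insensitive to linear gray-scale changes."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return ((a - b) ** 2).mean()

def solve_displacement(ref_img, def_img, coords, iters=2000, lam=0.1, lr=1e-3):
    """Iteratively fit the displacement field for the ROI pixels in coords.
    Note: coords and the predicted displacements are in normalized grid units;
    pixel displacements would be rescaled by 2/(W-1) and 2/(H-1)."""
    ref_gray = sample_gray(ref_img, coords)                  # reference gray levels f(x)
    model = DisplacementMLP()
    gain = nn.Parameter(torch.ones_like(ref_gray))           # per-pixel gray-scale gain
    offset = nn.Parameter(torch.zeros_like(ref_gray))        # per-pixel gray-scale offset
    opt = torch.optim.Adam(list(model.parameters()) + [gain, offset], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        disp = model(coords)                                 # predicted displacement u(x)
        def_gray = sample_gray(def_img, coords + disp)       # deformed gray levels g(x + u)
        absolute_loss = F.mse_loss(gain * ref_gray + offset, def_gray)
        loss = absolute_loss + lam * relative_shape_loss(ref_gray, def_gray)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model(coords)                                 # final ROI displacement field
```

In this formulation, network weights and the per-pixel gain/offset parameters are optimized jointly on the analyzed image pair itself, so no pre-training on labeled displacement fields is involved; the weighting lam between the two loss terms is an assumed hyperparameter.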
The proposed method is validated experimentally across multiple scenarios, and its performance is compared with traditional methods and other deep learning-based approaches. For simple displacement fields, the method demonstrates superior performance, yielding exceptionally smooth results in close agreement with the ground truth (Figs. 3–6): the mean solution error remains below 0.008 pixel for rigid body motion displacement fields and below 0.013 pixel for linear displacement fields. For the complex star-shaped displacement field, the method maintains high accuracy in both high-frequency and low-frequency regions (Figs. 7–9), achieving a mean solution error of 0.0610 pixel. For rotation displacement fields, the method yields the best results (Figs. 10 and 11), keeping the mean solution error below 1.9 pixel even when the maximum displacement exceeds 40 pixel. Illumination robustness tests show minimal sensitivity to illumination changes (Fig. 13) and the lowest solution error (Table 3), attributable to the integrated gray-scale linear mapping module and relative shape loss function. Computational efficiency analysis reveals improved performance compared with traditional methods, and the avoidance of time-intensive pre-training demonstrates strong application potential.
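For reference, a common way to quantify such errors is the mean per-pixel displacement error against the simulated ground truth; the snippet below is an illustrative metric only, and the exact error definition used in the paper is not confirmed here.

```python
import numpy as np

def mean_solution_error(disp_pred, disp_true):
    """Mean Euclidean error per ROI pixel (in pixel units) between predicted
    and ground-truth displacement fields, both of shape (N, 2)."""
    return np.linalg.norm(disp_pred - disp_true, axis=1).mean()
```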
This paper presents a self-supervised learning-based method for solving the displacement field of speckle images. Unlike conventional deep learning-based approaches that require extensive pre-training datasets, the proposed method solves the displacement field directly on the analyzed speckle images through a self-supervised learning framework incorporating a gray-scale linear mapping module and a relative shape loss function. Experimental validation demonstrates the method's superior accuracy, illumination robustness, and computational efficiency. The approach effectively addresses the limitations of traditional methods regarding manual parameter setting and rotation displacement field solving, while eliminating the dependence on extensive pre-training datasets required by deep learning-based methods, offering a novel perspective for applying the DIC technique.
Yiming Zhang, Fangnan Hao, Zili Xu, Guang Li. Displacement Field Solving Method for Speckle Images Based on Self-Supervised Learning[J]. Acta Optica Sinica, 2025, 45(16): 1610003
Category: Image Processing
Received: Apr. 2, 2025
Accepted: May 26, 2025
Published Online: Aug. 18, 2025
The Author Email: Zili Xu (zlxu@mail.xjtu.edu.cn)
CSTR:32393.14.AOS250835