Motion blur restoration is essential for imaging moving objects, especially in single-pixel imaging (SPI), which requires multiple measurements. To reconstruct the image of a moving object with multiple motion modes, we propose a novel motion-blur restoration method for SPI using geometric moment patterns. We design a localization method that uses normalized differential first-order moment and central moment patterns to determine the object's translational position and rotation angle. We then perform motion compensation by shifting the Hadamard patterns. Our method effectively improves the detection accuracy of multiple motion modes and enhances the quality of the reconstructed image. Simulations and experiments validate the effectiveness of the proposed method.
1. Introduction

Single-pixel imaging (SPI) is an emerging computational imaging technique[1–4] that offers high sensitivity, a wide spectral bandwidth, and cost-effectiveness[5–8]. However, its imaging process inherently requires the target to remain static, because it relies on sequential time-domain illumination patterns and the corresponding intensity signals. SPI therefore encounters challenges when targets move, particularly at high speeds: the relative motion disrupts the correlation between illumination and detection, which leads to motion blur.
Many strategies have been proposed to mitigate motion blur. Motion compensation based on motion estimation is an effective one, because the motion can be compensated by shifting the reconstruction patterns along the direction opposite to the motion[9–11]. In general, the motion of the object is unknown in advance. The translational position can be estimated from one-dimensional (1-D) projection curves of the scene obtained by projecting orthogonal base patterns[12–14], but these methods require a large number of illumination patterns to achieve precise localization. Instead of acquiring 1-D projection curves, some researchers have used the Fourier phase-shift property to obtain the relative displacement between frames[15,16], so that only 4–6 patterns need to be projected per frame. In 2021, Shi et al. introduced a method that leverages geometric moment patterns to detect a target's position using only three patterns[17]. Building upon this, subsequent research introduced second-order moment patterns[18,19], allowing various motion modes of objects to be detected with the geometric moment (GM) method. Nevertheless, the noise inherent in binarized geometric moment patterns can seriously compromise the accuracy of these motion parameters.
In this Letter, we propose a new differential geometric moment localization method, which effectively improves the localization accuracy and enhances the quality of the reconstructed image by inversely displacing the illumination patterns. First, owing to the characteristics of geometric moment patterns and the digital micromirror device (DMD), the binarization error of geometric moment patterns is large for large target scenes, significantly degrading the accuracy of the estimated motion parameters. We therefore use normalized differential first-order moment patterns to improve the localization accuracy. Because the second-order moment pattern is strongly affected by the image scale, we replace it with the central moment pattern, which is scale-independent, and apply complementary differencing to it to further improve the accuracy. Then, we use Hadamard patterns in the GCS + S order to encode the image scene[20]. We divide the Hadamard patterns into slices of length n and insert the first-order moment and central moment patterns into the slices to obtain the multiple motion parameters. Lastly, the motion parameters at different moments are used to shift the reconstruction patterns in the reverse direction, which reduces the effect of motion blur and improves the imaging quality.
2. Theory and Methods
SPI exploits the correlation between the modulation pattern and the captured light intensity signal. Geometric moment analysis can be employed to ascertain the location and motion state of unknown objects within a scene. The geometric moment $m_{pq}$ of a two-dimensional image is defined as

$$m_{pq} = \sum_{x}\sum_{y} x^{p} y^{q} f(x, y),$$

where $f(x, y)$ symbolizes the target scene and $p + q$ is the order of the geometric moment. Acquisition of the target's low-order moment data is facilitated through the geometric moment matrix $P_{pq}$, which is prescribed as

$$P_{pq}(x, y) = x^{p} y^{q}.$$
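As an illustration of this acquisition model, the following minimal numpy sketch (the helper names `moment_pattern` and `measure` are ours, not from the paper) builds the moment patterns and recovers each $m_{pq}$ as the inner product of a pattern with the scene, which is what the single-pixel detector reads out up to a scale factor:

```python
import numpy as np

def moment_pattern(p, q, height, width):
    """Geometric moment pattern P_pq(x, y) = x^p * y^q."""
    y, x = np.mgrid[0:height, 0:width]
    return (x ** p) * (y ** q)

def measure(pattern, scene):
    """Single-pixel measurement: total light intensity of the modulated scene."""
    return float(np.sum(pattern * scene))

# Toy scene: a bright rectangle on a dark background
scene = np.zeros((64, 64))
scene[20:30, 40:50] = 1.0

m00 = measure(moment_pattern(0, 0, 64, 64), scene)
m10 = measure(moment_pattern(1, 0, 64, 64), scene)
m01 = measure(moment_pattern(0, 1, 64, 64), scene)
print(m10 / m00, m01 / m00)  # centroid of the rectangle: (44.5, 24.5)
```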
Utilizing the first-order moments along with the zeroth-order moment of the target scene, the centroid coordinates $(\bar{x}, \bar{y})$ can be expressed as[17]

$$\bar{x} = \frac{m_{10} - m'_{10}}{m_{00} - m'_{00}}, \qquad \bar{y} = \frac{m_{01} - m'_{01}}{m_{00} - m'_{00}},$$

where $m'_{pq}$ is the geometric moment acquired before the target enters the scene; this background subtraction reduces the influence of the scene on the geometric moment localization.
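Continuing the sketch above, the background subtraction can be illustrated as follows (the background values here are invented for the example):

```python
# Moments of the empty scene, acquired before the target enters
background = np.full((64, 64), 0.1)
m00_b = measure(moment_pattern(0, 0, 64, 64), background)
m10_b = measure(moment_pattern(1, 0, 64, 64), background)
m01_b = measure(moment_pattern(0, 1, 64, 64), background)

# Moments of background plus target
full = background + scene
m00_f = measure(moment_pattern(0, 0, 64, 64), full)
m10_f = measure(moment_pattern(1, 0, 64, 64), full)
m01_f = measure(moment_pattern(0, 1, 64, 64), full)

# Background subtraction isolates the target centroid
x_c = (m10_f - m10_b) / (m00_f - m00_b)
y_c = (m01_f - m01_b) / (m00_f - m00_b)
print(x_c, y_c)  # again ~(44.5, 24.5), despite the nonzero background
```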
Furthermore, leveraging the centroid coordinates and the second-order moments, the normalized second-order central moments of the target scene are computed as

$$\mu'_{20} = \frac{m_{20}}{m_{00}} - \bar{x}^{2}, \qquad \mu'_{02} = \frac{m_{02}}{m_{00}} - \bar{y}^{2}, \qquad \mu'_{11} = \frac{m_{11}}{m_{00}} - \bar{x}\bar{y}.$$
By principal component analysis theory, the orientation and the axis lengths of the target correspond to the eigenvectors and eigenvalues of the covariance matrix $\mathrm{cov}$, which is constructed from the second-order central moments[18]:

$$\mathrm{cov} = \begin{bmatrix} \mu'_{20} & \mu'_{11} \\ \mu'_{11} & \mu'_{02} \end{bmatrix}.$$
Consequently, the target's orientation $\theta$, the length of the primary axis $a$, and the length of the secondary axis $b$ are quantified as

$$\theta = \frac{1}{2}\arctan\!\left(\frac{2\mu'_{11}}{\mu'_{20} - \mu'_{02}}\right), \qquad a = 2\sqrt{\lambda_{1}}, \qquad b = 2\sqrt{\lambda_{2}},$$

where $\lambda_{1} \geq \lambda_{2}$ are the eigenvalues of $\mathrm{cov}$.
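Turning these relations into code, the sketch below reuses the helpers from the earlier snippet; treating the axis lengths as $2\sqrt{\lambda}$ is a common convention and should be read as an assumption here:

```python
def orientation_and_axes(scene):
    """Estimate orientation and axis lengths from second-order central moments."""
    h, w = scene.shape
    m00 = measure(moment_pattern(0, 0, h, w), scene)
    xc = measure(moment_pattern(1, 0, h, w), scene) / m00
    yc = measure(moment_pattern(0, 1, h, w), scene) / m00
    # Normalized second-order central moments
    mu20 = measure(moment_pattern(2, 0, h, w), scene) / m00 - xc ** 2
    mu02 = measure(moment_pattern(0, 2, h, w), scene) / m00 - yc ** 2
    mu11 = measure(moment_pattern(1, 1, h, w), scene) / m00 - xc * yc
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    eigvals = np.linalg.eigvalsh(cov)                  # ascending order
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)    # orientation (rad)
    a, b = 2 * np.sqrt(eigvals[1]), 2 * np.sqrt(eigvals[0])
    return theta, a, b
```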
The geometric moment pattern is a grayscale pattern; for an image with dimensions of $256 \times 256$, the first-order moment pattern is an 8-bit grayscale picture. A refined approximation can be achieved through binarization, employing the Floyd–Steinberg error-diffusion dithering algorithm. However, the second-order moment pattern is a 16-bit grayscale picture, which incurs substantial binarization error, detrimentally impacting localization precision.
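For concreteness, a straightforward, unoptimized implementation of Floyd–Steinberg error-diffusion dithering, applied to a normalized first-order moment pattern (again reusing the earlier helpers):

```python
def floyd_steinberg(gray):
    """Binarize a grayscale pattern in [0, 1] by error-diffusion dithering."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for yy in range(h):
        for xx in range(w):
            old = img[yy, xx]
            new = 1.0 if old >= 0.5 else 0.0
            out[yy, xx] = new
            err = old - new
            # Diffuse the quantization error to unprocessed neighbors
            if xx + 1 < w:
                img[yy, xx + 1] += err * 7 / 16
            if yy + 1 < h and xx > 0:
                img[yy + 1, xx - 1] += err * 3 / 16
            if yy + 1 < h:
                img[yy + 1, xx] += err * 5 / 16
            if yy + 1 < h and xx + 1 < w:
                img[yy + 1, xx + 1] += err * 1 / 16
    return out

# Normalize the first-order moment pattern to [0, 1], then binarize
p10 = moment_pattern(1, 0, 64, 64).astype(float)
p10 = (p10 - p10.min()) / (p10.max() - p10.min())
p10_bin = floyd_steinberg(p10)
```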
To mitigate the binarization error, one might consider simultaneously diminishing the pattern size of both the first-order and second-order moments, thereby reducing the grayscale level. While this approach effectively curtails the binarization error, it constrains the target detection range.
The central moment can be obtained directly from the pattern

$$P^{c}_{pq}(x, y) = (x - \bar{x})^{p} (y - \bar{y})^{q},$$

where $P^{c}_{pq}$ is the central moment pattern. Because the pattern is defined relative to the centroid, reducing the pattern size does not affect the detection results, so the second-order information of the target can be obtained from small-sized central moment patterns.
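A hedged sketch of such a local central moment pattern follows; the windowing logic and the in-bounds assumption are ours. Because the coordinates are taken relative to the current centroid, the pattern values stay of order $\mathrm{win}^{2}$ regardless of the scene size, versus up to 65 025 for a full-frame second-order pattern on a $256 \times 256$ scene:

```python
def central_moment_pattern(p, q, height, width, xc, yc, win):
    """Central moment pattern (x - xc)^p (y - yc)^q, nonzero only on a
    win x win window centered on the estimated centroid (assumed in-bounds)."""
    pattern = np.zeros((height, width))
    x0 = int(round(xc)) - win // 2
    y0 = int(round(yc)) - win // 2
    yy, xx = np.mgrid[y0:y0 + win, x0:x0 + win]
    pattern[y0:y0 + win, x0:x0 + win] = ((xx - xc) ** p) * ((yy - yc) ** q)
    return pattern

# Second-order central moment measured with a small local pattern,
# using the centroid (x_c, y_c) estimated earlier
mu20_local = measure(central_moment_pattern(2, 0, 64, 64, x_c, y_c, 32), scene)
```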
To reduce the ambient light effect and the binarization error, the patterns are normalized and complementarily differenced:

$$\tilde{P}^{+}_{pq} = \frac{P^{c}_{pq} - P^{c}_{pq,\min}}{P^{c}_{pq,\max} - P^{c}_{pq,\min}}, \qquad \tilde{P}^{-}_{pq} = 1 - \tilde{P}^{+}_{pq},$$

where $P^{c}_{pq,\min}$ and $P^{c}_{pq,\max}$ are the minimum and maximum values of the central moment pattern, respectively; the central moment follows from the difference of the two complementary measurements after subtracting the central moments $\mu'_{pq}$ acquired before the target enters the scene.
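In code, the pattern-side normalization and complementary differencing might look as follows (a minimal sketch; the background-moment subtraction from the text is omitted here):

```python
def normalized_pair(pattern):
    """Scale a grayscale pattern to [0, 1] and return it with its complement."""
    p = (pattern - pattern.min()) / (pattern.max() - pattern.min())
    return p, 1.0 - p

def differential_measure(pattern, scene):
    """Difference of the two complementary measurements; common-mode terms
    such as ambient light cancel in the subtraction."""
    pos, neg = normalized_pair(pattern)
    return measure(pos, scene) - measure(neg, scene)
```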
Figure 1 illustrates the imaging flow of the motion-compensated SPI, which combines different moment patterns with Hadamard modulation. Figure 1(a) presents the normalized difference patterns of the first-order and central moments, binarized using Floyd–Steinberg dithering, which provide the foundation for the subsequent modulation. Figure 1(b) demonstrates how the first-order moment and central moment patterns are integrated into the traditional Hadamard modulation sequence: they are inserted at the head of each predefined-length slice of Hadamard patterns, extending it to a differential sequence slice. Figure 1(c) shows the scene and the detected light intensity signal; the target trajectory is a circle, the target undergoes rotation, and the light intensity signal contains the coded information of the target scene together with the values of the first-order and central moments. Finally, Fig. 1(d) shows the sequence of the reconstruction patterns, where each slice in the series carries the motion parameters at the corresponding moment. According to the motion parameters of the different slices, the target region of the reconstruction patterns can be shifted inversely to compensate for motion blur.
Figure 1. Process of imaging. (a) Normalized difference patterns of the first-order moment and central moment, binarized by Floyd–Steinberg dithering. (b) Sequence of modulation patterns. (c) Rotating moving target with a circular trajectory and the detected light intensity signals. (d) Reconstructed pattern sequences after shifting in the reverse direction.
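As a simplified sketch of the compensation step just described, the snippet below shifts whole patterns with periodic wrap-around, whereas the paper shifts only the target region; the slice contents and displacement values are invented for the example:

```python
def compensate_pattern(pattern, dx, dy):
    """Shift a reconstruction pattern opposite to the estimated translation."""
    return np.roll(pattern, (-int(round(dy)), -int(round(dx))), axis=(0, 1))

# Toy slice of reconstruction patterns and its estimated displacement
rng = np.random.default_rng(0)
hadamard_slice = [(rng.random((64, 64)) > 0.5).astype(float) for _ in range(4)]
dx_k, dy_k = 3.0, -2.0  # translation estimated for this slice
compensated = [compensate_pattern(p, dx_k, dy_k) for p in hadamard_slice]
```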
3. Simulation

We simulated two complex grayscale targets, an "airplane" target and a "dog" target. The two targets move along the random trajectories shown in Fig. 2(a) and rotate randomly by the angles shown in Fig. 2(b), in a simple scene and a complex scene; each sequence contains 360 frames. We use 64 differential Hadamard patterns and 10 differential central moment patterns per frame, for a total sampling rate of 17.58%. Since the second-order central moment pattern is local and the initial illumination pattern requires a priori knowledge of the target size, we use 5 to 10 second-order moment detections to estimate the target size. For comparison, we evaluated the GM method[18] in the same environment. To simulate a realistic experimental environment, Gaussian white noise was added to the light intensity signal in the simulation.
Figure 2. Trajectory and rotation angle of the target. (a) Trajectories in the x and y directions. (b) Rotation angle of the target.
The mean square error (MSE) was introduced to evaluate the accuracy of the motion parameters, and the peak signal-to-noise ratio (PSNR) was used for quantitative analysis of the reconstruction results:

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{p}_{i} - p_{i}\right)^{2}, \qquad \mathrm{PSNR} = 10\log_{10}\frac{\mathrm{MAX}^{2}}{\mathrm{MSE}},$$

where $\hat{p}_{i}$ and $p_{i}$ are the estimated and true values, $N$ is the number of frames, and MAX is the maximum pixel value of the image.
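With the usual definitions, these metrics are one-liners (a sketch; the peak value of 255 is an assumption for 8-bit images):

```python
def mse(estimate, truth):
    """Mean square error between estimated and true values."""
    e = np.asarray(estimate, float) - np.asarray(truth, float)
    return float(np.mean(e ** 2))

def psnr(image, reference, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    return 10.0 * np.log10(peak ** 2 / mse(image, reference))
```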
A comparison of the reconstructed images and motion localization results of the two methods for the two targets is shown in Figs. 3 and 4. The results show that our proposed method calculates the position, angle, and axis length with higher accuracy than the GM method. As seen from Figs. 3 and 4 and Table 1, the MSEs of all the motion parameters of our method are lower than those of the GM method for both simple and complex scenes. Consequently, the quality of the motion-compensated reconstructed images of the proposed method is higher than that of the GM method.
Table 1. Errors (MSE) of the Motion Parameters in the Two Methods

Scene            Method        Δx      Δy      Δθ        Δr
Simple scene     Our method    0.35    0.34    7.78      0.12
Simple scene     GM method     3.64    3.73    206.59    3.55
Complex scene    Our method    0.68    0.71    1.69      1.03
Complex scene    GM method     7.71    6.04    315.49    6.04
Figure 3. Target simulation results in the simple scene. (a) The original image. (b) The target image reconstructed by our method. (c) The target image reconstructed by the GM method. (d) The actual position and the calculation results of the two methods. (e) Comparison of the angular errors of the two methods. (f) Comparison of the axial length errors of the two methods.
Figure 4. Target simulation results in the complex scene. (a) The original image. (b) The target image reconstructed by our method. (c) The target image reconstructed by the GM method. (d) The actual position and the calculation results of the two methods. (e) Comparison of the angular errors of the two methods. (f) Comparison of the axial length errors of the two methods.
From Table 1, it can also be seen that when the target scene is complex, the motion parameter errors of the proposed method increase slightly compared with the simple scene, but they remain significantly smaller than those of the GM method after background subtraction. This suggests that the central moment method adapts effectively to complex scenes. The proposed method therefore yields a reconstructed image of higher quality than the GM method, which is affected by errors in the motion parameter calculation.
4. Experiment
The experimental setup is depicted in Fig. 5, with the target mounted on a three-axis motorized stage. A light-emitting diode (LED) with a maximum power of 20 W and a color temperature of 6500 K is used as the light source to illuminate the target. Modulation is performed by a DMD (Texas Instruments Discovery V7000), and the modulated light is converged by a lens onto a photomultiplier tube (PMT, Thorlabs PMM02). The optoelectronic signals produced by the PMT are captured by a data acquisition card (NI USB-6341).
Figure 5. Diagram of the experimental setup. The LED is the light source, and the target moves and rotates on the three-axis motorized stage. The collecting lens projects the image of the target onto the DMD for modulation, and the modulated light intensity is collected on the PMT by the converging lens. The PMT converts the optical signal into an electrical signal, which is captured by the acquisition card and sent to the computer.
In the experiment, the target is a picture of a toy bear. To obtain the motion parameters of the target, first-order moment and central moment patterns were used for modulation, with the central moment pattern confined to a small window. The scene was encoded using GCS + S-order Hadamard patterns of the same size. The object moved randomly at a velocity of 10 mm/s for 120 frames, as depicted in Fig. 6(d). The differential Hadamard patterns used to encode the scene were divided into 120 slices of length 192, giving a total sampling rate of 17.58%. To obtain the motion parameters, 10 differential first-order moment and central moment patterns (or geometric moment patterns for the GM method) were inserted before each slice. The experimental results, shown in Fig. 6, indicate that the error of the proposed method is significantly smaller than that of the GM method, and the accurate motion parameters favor the reconstruction of motion-blur-resistant images. Comparing Figs. 6(b) and 6(c), the PSNR of the proposed method is higher, and the details of the toy bear can be distinguished.
Figure 6. Experimental results. (a) Full-sampling reconstruction of the image. (b) The target image reconstructed by our method. (c) The target image reconstructed by the GM method. (d) The actual position and the calculation results of the two methods. (e) Comparison of the angular errors of the two methods. (f) Comparison of the axial length errors of the two methods.
In the detection process of a central moment, the distance $d$ that the target moves during one detection can be calculated using the formula

$$d = \frac{n v}{f},$$

where $v$ is the velocity of the target, which is 20 mm/s, $n$ is the number of patterns in one detection, which is 8, and $f$ is the modulation frequency, which is 10 kHz. Substituting these values into the equation gives $d = 0.016\ \mathrm{mm}$.
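The same arithmetic as a quick numerical check:

```python
v, n, f = 20.0, 8, 10_000.0   # mm/s, patterns per detection, Hz
d = v * n / f                 # distance moved during one detection
print(d)                      # 0.016 (mm)
```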
Since the size of a single pixel is 0.52 mm, which is significantly larger than 0.016 mm, the motion of the target has little effect on the acquisition of the central moment. When the target's speed exceeds 650 mm/s ($0.52\ \mathrm{mm} \times 10\ \mathrm{kHz} / 8 = 650\ \mathrm{mm/s}$), the target moves by more than one pixel during a detection, so the center of the central moment pattern window no longer falls in the same pixel, which significantly affects the accuracy of the acquired motion parameters.
5. Conclusion
We propose a new differential geometric moment localization method to address the significant errors of binarized geometric moment patterns, which degrade the localization accuracy, especially for large target scenes. We use normalized differential first-order moment patterns and normalized central moment patterns to localize objects, effectively reducing the localization error compared with the general geometric moment method. Motion-compensated imaging based on the more accurate motion parameters improves the signal-to-noise ratio of the reconstructed image and reduces the effect of motion blur, thus enhancing the reconstruction quality. The method presented in this Letter has a limitation: it applies only to a single moving object; with multiple moving objects, it cannot accurately acquire the motion parameters.