Chinese Optics Letters, Volume 22, Issue 10, 101101 (2024)

Single-pixel imaging of a moving object with multi-motion

Pengcheng Ji1, Qingfan Wu1, Shengfu Cao1, Huijuan Zhang1, Zhaohua Yang2,*, and Yuanjin Yu1,3,4,**
Author Affiliations
  • 1School of Automation, Beijing Institute of Technology, Beijing 100081, China
  • 2School of Instrumentation Science and Optoelectronics Engineering, Beihang University, Beijing 100191, China
  • 3MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing 100081, China
  • 4Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing 314019, China

Motion blur restoration is essential for imaging moving objects, especially in single-pixel imaging (SPI), which requires multiple measurements. To reconstruct the image of a moving object with multiple motion modes, we propose a motion blur restoration method for SPI based on geometric moment patterns. We design a localization method that uses normalized differential first-order moment and central moment patterns to determine the object's translational position and rotation angle. We then perform motion compensation by shifting the Hadamard patterns. Our method effectively improves the detection accuracy of multiple motion modes and enhances the quality of the reconstructed image. Simulations and experiments validate the effectiveness of the proposed method.


    1. Introduction

Single-pixel imaging (SPI) is an emerging computational imaging technique[1–4]. It offers high sensitivity, a wide spectral bandwidth, and cost-effectiveness[5–8]. However, its imaging process inherently requires the target to remain static, as it relies on sequential time-domain illumination patterns and the corresponding intensity signals. SPI therefore encounters challenges when targets move, particularly at high speed: the relative motion disrupts the correlation between illumination and detection, which leads to motion blur.

Many strategies have been proposed to mitigate motion blur. Motion compensation based on motion estimation is an effective strategy, because the motion can be compensated by shifting the reconstruction patterns in the direction opposite to the motion[9–11]. In general, the motion of the object is unknown in advance. The translational position can be estimated from one-dimensional (1-D) projection curves of the scene obtained by projecting orthogonal base patterns[12–14], but these methods require excessive illumination patterns to achieve precise localization. In contrast, some scholars have used the Fourier phase-shift property to obtain the relative displacement between frames[15,16]; only 4–6 patterns need to be projected per frame. In 2021, Shi et al. introduced a method that leverages geometric moment patterns to detect a target's position using only three patterns[17]. Building upon this, subsequent research introduced second-order moment patterns[18,19], allowing various motion modes of objects to be detected with the geometric moment (GM) method. Nevertheless, the inherent noise associated with the binarized geometric moment patterns can seriously compromise the accuracy of these motion parameters.

In this Letter, we propose a new differential geometric moment localization method, which effectively improves the localization accuracy and enhances the quality of the reconstructed image by inversely shifting the illumination patterns. First, limited by the characteristics of geometric moment patterns and the digital micro-mirror device (DMD), the binarization error of geometric moment patterns is large for large target scenes, significantly degrading the accuracy of the estimated motion parameters. We therefore use normalized differential first-order moment patterns to improve the localization accuracy. Because the second-order moment pattern is strongly affected by the image scale, we replace it with the central moment pattern, which is not affected by the scale, and apply complementary differencing to it to further improve the accuracy. Then, we use GCS + S order Hadamard patterns to encode the image scene[20]. We divide the Hadamard patterns into slices of length n and insert the first-order moment and central moment patterns into the slices to obtain the multi-motion parameters. Lastly, the motion parameters at different moments are used to shift the reconstruction patterns in the reverse direction, reducing the effect of motion blur and improving the imaging quality.

    2. Theory and Methods

SPI exploits the correlation between the modulation pattern and the captured light intensity signal. Geometric moment analysis can be employed to ascertain the location and motion state of an unknown object within the scene. The geometric moment $m_{pq}$ of a two-dimensional image is defined as

$$m_{pq} = \sum_{x,y} x^p y^q f(x,y),$$

where $f(x,y)$ denotes the target scene and the sum $p + q$ is the order of the geometric moment. The target's low-order moment data are acquired through the geometric moment patterns $G_{pq}$, defined as

$$G_{pq}(x,y) = x^p y^q,$$

with, in particular, $G_{00} = x^0 y^0$, $G_{01} = x^0 y^1$, $G_{10} = x^1 y^0$, $G_{11} = x^1 y^1$, $G_{02} = x^0 y^2$, and $G_{20} = x^2 y^0$.
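
For concreteness, this measurement model can be sketched numerically: projecting the pattern $G_{pq}$ and collecting the total intensity directly yields $m_{pq}$. The following is a minimal illustration, assuming Python with NumPy; the scene is a hypothetical example, not data from this Letter.

```python
import numpy as np

def moment_pattern(p, q, size=256):
    """Geometric moment pattern G_pq(x, y) = x^p * y^q on a size-by-size grid."""
    y, x = np.mgrid[0:size, 0:size].astype(float)  # y: row index, x: column index
    return (x ** p) * (y ** q)

def spi_measure(pattern, scene):
    """Single-pixel measurement: the bucket detector records the inner
    product of the illumination pattern with the scene."""
    return float(np.sum(pattern * scene))

# The geometric moment m_pq of the scene equals the measurement with G_pq.
scene = np.zeros((256, 256))
scene[100:120, 80:110] = 1.0                      # a bright rectangular "target"
m00 = spi_measure(moment_pattern(0, 0), scene)    # zeroth-order moment
m10 = spi_measure(moment_pattern(1, 0), scene)    # first-order moment in x
```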

Utilizing the first-order moments $m_{10}, m_{01}$ along with the zeroth-order moment $m_{00}$ of the target scene, the centroid coordinates $(x_c, y_c)$ can be expressed as[17]

$$m_{00} = \sum_{x,y} f(x,y) - m_{00}^{b}, \quad m_{10} = \sum_{x,y} x\, f(x,y) - m_{10}^{b}, \quad m_{01} = \sum_{x,y} y\, f(x,y) - m_{01}^{b},$$

$$x_c = \frac{m_{10}}{m_{00}}, \quad y_c = \frac{m_{01}}{m_{00}},$$

where $m_{pq}^{b}$ is the geometric moment acquired before the target enters the scene; this background subtraction reduces the influence of the scene on geometric moment localization.
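
A centroid estimate with background subtraction then follows directly, reusing the helpers sketched above (an illustrative simplification: in practice the patterns are binarized and the measurements are taken optically).

```python
def centroid(scene, background):
    """Centroid (x_c, y_c) from background-subtracted zeroth- and first-order moments."""
    g00, g10, g01 = moment_pattern(0, 0), moment_pattern(1, 0), moment_pattern(0, 1)
    m00 = spi_measure(g00, scene) - spi_measure(g00, background)  # m00 - m00^b
    m10 = spi_measure(g10, scene) - spi_measure(g10, background)  # m10 - m10^b
    m01 = spi_measure(g01, scene) - spi_measure(g01, background)  # m01 - m01^b
    return m10 / m00, m01 / m00
```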

Furthermore, leveraging the centroid coordinates and the second-order moments, the second-order central moments of the target scene are computed as

$$\mu_{20} = m_{20} - m_{00} x_c^2, \quad \mu_{02} = m_{02} - m_{00} y_c^2, \quad \mu_{11} = m_{11} - m_{00} x_c y_c.$$

By principal component analysis, the orientation and the axis lengths of the target correspond to the eigenvectors $v_1, v_2$ and eigenvalues $\lambda_1, \lambda_2$ of the covariance matrix $M$ constructed from the second-order central moments[18]

$$M = \begin{bmatrix} \mu_{20} & \mu_{11} \\ \mu_{11} & \mu_{02} \end{bmatrix}.$$

Consequently, the target's orientation $\theta$, the length of the primary axis $l_1$, and the length of the secondary axis $l_2$ are quantified as

$$\theta = \arctan\left(\frac{v_1(2)}{v_1(1)}\right), \quad l_1 = 2\sqrt{\frac{\max(\lambda_1, \lambda_2)}{m_{00}}}, \quad l_2 = 2\sqrt{\frac{\min(\lambda_1, \lambda_2)}{m_{00}}}.$$
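
The orientation and axis lengths thus reduce to an eigen-decomposition of $M$. A short sketch under the equations above (NumPy assumed; `arctan2` is used in place of `arctan` to keep the full angular range):

```python
def orientation_and_axes(m00, xc, yc, m20, m02, m11):
    """Orientation theta and axis lengths l1, l2 from second-order central moments."""
    mu20 = m20 - m00 * xc ** 2
    mu02 = m02 - m00 * yc ** 2
    mu11 = m11 - m00 * xc * yc
    M = np.array([[mu20, mu11], [mu11, mu02]])
    lam, vecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    v1 = vecs[:, 1]                      # eigenvector of the largest eigenvalue
    theta = np.arctan2(v1[1], v1[0])     # orientation of the primary axis
    l1 = 2.0 * np.sqrt(lam[1] / m00)     # primary axis length
    l2 = 2.0 * np.sqrt(lam[0] / m00)     # secondary axis length
    return theta, l1, l2
```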

The geometric moment pattern is a grayscale pattern; for an image f with dimensions 256×256, the first-order moment pattern corresponds to an 8-bit grayscale image. A reasonable binary approximation can be achieved with the Floyd–Steinberg error-diffusion dithering algorithm. However, the second-order moment pattern corresponds to a 16-bit grayscale image, which incurs substantial binarization error and detrimentally impacts localization precision.
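
A minimal implementation of this binarization step, a textbook Floyd–Steinberg error-diffusion loop rather than the authors' exact code, might look as follows (input pattern normalized to [0, 1]):

```python
def floyd_steinberg(pattern):
    """Binarize a grayscale pattern in [0, 1] by error diffusion for a binary DMD."""
    img = pattern.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = 1.0 if img[i, j] >= 0.5 else 0.0
            err = img[i, j] - out[i, j]            # quantization error
            if j + 1 < w:
                img[i, j + 1] += err * 7 / 16      # diffuse right
            if i + 1 < h and j > 0:
                img[i + 1, j - 1] += err * 3 / 16  # diffuse down-left
            if i + 1 < h:
                img[i + 1, j] += err * 5 / 16      # diffuse down
            if i + 1 < h and j + 1 < w:
                img[i + 1, j + 1] += err * 1 / 16  # diffuse down-right
    return out
```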

    To mitigate the binarization error, one might consider simultaneously diminishing the pattern size of both the first-order and second-order moments, thereby reducing the grayscale level. While this approach effectively curtails the binarization error, it constrains the target detection range.

The central moment can be obtained directly from a dedicated pattern:

$$\mu_{pq} = \sum_{x,y} (x - x_c)^p (y - y_c)^q f(x,y) = \sum_{x,y} C_{pq}(x,y) f(x,y),$$

where $C_{pq}$ is the central moment pattern. Reducing the pattern size does not affect the detection result, so the second-order information of the target can be obtained from small-sized central moment patterns.

To reduce the effect of ambient light and the binarization error $e$, the patterns are normalized and measured as a complementary differential pair:

$$\mu_{pq}^{+} = \sum_{x,y} f(x,y)\, \frac{(x - x_c)^p (y - y_c)^q - a}{b - a} + e - \mu_{pq}^{+b},$$

$$\mu_{pq}^{-} = \sum_{x,y} f(x,y) \left[1 - \frac{(x - x_c)^p (y - y_c)^q - a}{b - a}\right] + e - \mu_{pq}^{-b},$$

$$\mu_{pq} = \frac{(b - a)\left(\mu_{pq}^{+} - \mu_{pq}^{-}\right) + (b + a)\, m_{00}}{2},$$

where $a, b$ are the minimum and maximum values of the central moment pattern, respectively, and $\mu_{pq}^{+b}, \mu_{pq}^{-b}$ are the central moments acquired before the target enters the scene. Differencing the complementary measurements cancels the common error term $e$.
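
A sketch of this complementary differential measurement, assuming the normalization reconstructed above (for simplicity the pattern spans the full frame rather than a small window, and the common error term $e$ is omitted because it would enter both measurements and cancel in the difference):

```python
def central_moment_diff(scene, background, p, q, xc, yc, m00):
    """Recover mu_pq from a complementary pair of normalized central moment patterns."""
    h, w = scene.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    C = (x - xc) ** p * (y - yc) ** q            # raw central moment pattern
    a, b = C.min(), C.max()
    P_pos = (C - a) / (b - a)                    # normalized pattern in [0, 1]
    P_neg = 1.0 - P_pos                          # complementary pattern
    mu_pos = np.sum(scene * P_pos) - np.sum(background * P_pos)
    mu_neg = np.sum(scene * P_neg) - np.sum(background * P_neg)
    return ((b - a) * (mu_pos - mu_neg) + (b + a) * m00) / 2.0
```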

Figure 1 illustrates the imaging flow of the motion-compensated SPI, which combines the different moment patterns with Hadamard modulation. Figure 1(a) presents the normalized difference patterns of the first-order and central moments, binarized with Floyd–Steinberg dithering, which provide the foundation for the subsequent modulation process. Figure 1(b) shows how the first-order moment and central moment patterns are integrated into the traditional Hadamard modulation sequence: they are inserted at the head of each predefined slice of n Hadamard patterns, and after differencing each slice has a length of 2n + 10. Figure 1(c) shows the scene and the detected light intensity signal; the target moves along a circular trajectory while rotating, and the light intensity signal contains both the coded information of the target scene and the values of the first-order and central moments. Finally, Fig. 1(d) shows the sequence of reconstruction patterns; each slice in the sequence is associated with the motion parameters $x_c, y_c, \theta, r_c$ at the corresponding moment. According to the motion parameters of the different slices, the target region in the reconstruction patterns is shifted inversely to compensate for motion blur.
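
The compensation step itself amounts to shifting each slice's reconstruction patterns opposite to the estimated displacement. A minimal sketch for the translational part (integer-pixel, wrap-around shifts via `np.roll` are an illustrative simplification; rotation compensation about the centroid would be applied analogously):

```python
def compensate_pattern(pattern, dx, dy):
    """Shift a reconstruction pattern by (-dx, -dy) pixels so that the
    target, displaced by (dx, dy) in this slice, appears static."""
    return np.roll(pattern, shift=(-int(round(dy)), -int(round(dx))), axis=(0, 1))

# Per-slice differential reconstruction with measurements s_i and patterns P_i:
# image = sum_i (s_i - s.mean()) * compensate_pattern(P_i, dx_slice, dy_slice)
```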


Figure 1. Process of imaging. (a) Normalized difference patterns for the first-order and central moments, binarized by Floyd–Steinberg dithering. (b) Sequence of modulation patterns. (c) A rotating moving target with a circular trajectory and the detected light intensity signal. (d) Reconstruction pattern sequences after shifting in the reverse direction.

    3. Simulation

We simulated two complex grayscale targets, an "airplane" target and a "dog" target, with sizes of 56×56 and 64×64, respectively. The two targets move along the random trajectories shown in Fig. 2(a) and rotate randomly at the angles shown in Fig. 2(b), in a simple and a complex scene of size 256×256, over 360 frames. For each frame, we use 64 differential Hadamard patterns and 10 differential moment patterns; the total sampling rate is 17.58%. Since the central moment pattern is local and its initial placement requires a priori knowledge of the target size, we use 5 to 10 second-order moment detections to estimate the target size. Meanwhile, we compared our method with the GM method[18] in the same environment. To simulate a realistic experimental environment, Gaussian white noise with standard deviation σ = 0.1 was added to the light intensity signal.


Figure 2. Trajectory and rotation angle of the target. (a) Trajectories in the x- and y-directions. (b) Rotation angle of the target.

The mean square error (MSE) was introduced to evaluate the accuracy of the motion parameters, and the peak signal-to-noise ratio (PSNR) was used for quantitative analysis of the reconstruction results:

$$\mathrm{MSE}(x, y) = \frac{1}{n} \sum_{i=1}^{n} (x_i - y_i)^2, \qquad \mathrm{PSNR}(x, y) = 10 \log_{10} \frac{\mathrm{peakval}^2}{\mathrm{MSE}(x, y)}.$$
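
For reference, both metrics are straightforward to compute (`peakval` is the maximum possible image value, assumed to be 1.0 for normalized images):

```python
def mse(x, y):
    """Mean square error between estimated and true value sequences (or images)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.mean((x - y) ** 2))

def psnr(img, ref, peakval=1.0):
    """Peak signal-to-noise ratio in dB of a reconstruction against a reference."""
    return 10.0 * np.log10(peakval ** 2 / mse(img, ref))
```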

A comparison of the reconstructed images and motion localization results of the two methods for the two targets is shown in Figs. 3 and 4. The results show that our proposed method calculates the position, angle, and axis length with higher accuracy than the GM method. When the target scene is 256×256, Fig. 3 and Table 1 show that the MSEs of all the motion parameters of our method are lower than those of the GM method for both simple and complex scenes. Consequently, the quality of the motion-compensated reconstructed images of the proposed method is higher than that of the GM method.

Table 1. Errors (MSE) of the Motion Parameters in the Two Methods

Scene            Method        Δx      Δy      Δθ        Δr
Simple scene     Our method    0.35    0.34    7.78      0.12
                 GM method     3.64    3.73    206.59    3.55
Complex scene    Our method    0.68    0.71    1.69      1.03
                 GM method     7.71    6.04    315.49    6.04


Figure 3. Target simulation results in the simple scene. (a) The original image. (b) The target image reconstructed by our method. (c) The target image reconstructed by the GM method. (d) The actual position and the calculation results of the two methods. (e) The comparison of the angular errors of the two methods. (f) The comparison of the axial length errors of the two methods.


Figure 4. Target simulation results in the complex scene. (a) The original image. (b) The target image reconstructed by our method. (c) The target image reconstructed by the GM method. (d) The actual position and the calculation results of the two methods. (e) The comparison of the angular errors of the two methods. (f) The comparison of the axial length errors of the two methods.

From Table 1, it can be seen that when the target scene is complex, the motion parameter errors of the proposed method increase slightly compared with the simple scene, but they remain significantly lower than those of the GM method after background subtraction. This suggests that the central moment method can effectively adapt to complex scenes. Therefore, the proposed method yields a reconstructed image of higher quality than the GM method, whose reconstruction is affected by errors in the calculated motion parameters.

    4. Experiment

The experimental setup is depicted in Fig. 5, with the target mounted on a three-axis (X, Y, R) motorized stage. A light-emitting diode (LED) with a maximum power of 20 W and a color temperature of 6500 K illuminates the target. A DMD (Texas Instruments Discovery V7000) performs the modulation, and the modulated light is focused by a converging lens onto a photomultiplier tube (PMT, Thorlabs PMM02). The optoelectronic signals produced by the PMT are captured by a data acquisition card (NI USB-6341).


Figure 5. Diagram of the experimental setup. The LED is the light source, and the target moves and rotates through three-axis motorized stages. The collecting lens projects the image of the target onto the DMD for modulation, and the modulated light intensity is collected on the PMT by the converging lens. The PMT converts the optical signal into an electrical signal, which is captured by the acquisition card and sent to the computer.

In the experiment, the target is a picture of a toy bear with dimensions of 12 mm × 12 mm. To obtain the motion parameters of the target, first-order moment and central moment patterns of size 256×256 were used for modulation, with a central moment window of 36×36 pixels. The scene was encoded using GCS + S patterns of the same size. The object moved randomly at a velocity of 10 mm/s over 120 frames, as depicted in Fig. 6(d). The differential Hadamard patterns used to encode the scene were divided into 120 slices of length 192, giving a total sampling rate of 17.58%. To obtain the motion parameters, 10 differential first-order and central moment patterns (or geometric moment patterns) were inserted before each slice of the Hadamard patterns. The experimental results, shown in Fig. 6, indicate that the error of the proposed method is significantly smaller than that of the GM method. The accurate motion parameters, in turn, favor the reconstruction of motion blur-resistant images. Comparing Figs. 6(b) and 6(c), the PSNR of the proposed method is higher, and the details of the toy bear can be distinguished.


Figure 6. Experimental results. (a) Full sampling reconstruction of images. (b) The target image reconstructed by our method. (c) The target image reconstructed by the GM method. (d) The actual position and the calculation results of the two methods. (e) The comparison of the angular errors of the two methods. (f) The comparison of the axial length errors of the two methods.

In the detection process of a central moment, the movement distance $d$ of the target during one detection can be calculated as

$$d = \frac{v \times n}{f},$$

where $v$ is the velocity of the target (20 mm/s), $n$ is the number of patterns in one detection (8), and $f$ is the refresh frequency (10 kHz). Substituting these values gives $d = 0.016\ \mathrm{mm}$.

The central moment pattern window size is 16.64 mm × 16.64 mm. Since the size of a single pixel is 0.52 mm, which is significantly larger than 0.016 mm, the motion of the target has little effect on the acquisition of the central moment. Only when the target's speed exceeds 650 mm/s does the window center shift by more than one pixel within a single detection, which would significantly affect the accuracy of the acquired motion parameters.
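
The arithmetic behind these limits can be checked directly, with the values taken from the text:

```python
v = 20.0            # target speed, mm/s
n = 8               # patterns per central moment detection
f = 10_000.0        # DMD refresh rate, Hz
d = v * n / f       # movement during one detection: 0.016 mm

pixel = 0.52            # size of a single DMD-mapped pixel, mm
v_max = pixel * f / n   # speed at which d reaches one pixel: 650.0 mm/s
```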

    5. Conclusion

We propose a new differential geometric moment localization method to address the significant binarization error of geometric moment patterns, which degrades localization accuracy, especially for large target scenes. We use normalized differential first-order moment patterns and normalized central moment patterns to localize objects, effectively reducing the localization error compared with the general geometric moment method. Motion-compensated imaging based on the more accurate motion parameters reduces the effect of motion blur and improves the signal-to-noise ratio of the reconstructed image, thus enhancing the reconstruction quality. The method presented in this Letter has a limitation: it applies only to a single moving object. In the case of multiple moving objects, it cannot accurately acquire the motion parameters.

    Pengcheng Ji, Qingfan Wu, Shengfu Cao, Huijuan Zhang, Zhaohua Yang, Yuanjin Yu, "Single-pixel imaging of a moving object with multi-motion," Chin. Opt. Lett. 22, 101101 (2024)

    Paper Information

    Category: Imaging Systems and Image Processing

    Received: Dec. 29, 2023

Accepted: May 22, 2024

Posted: May 22, 2024

    Published Online: Oct. 17, 2024

    The Author Email: Zhaohua Yang (yangzh@buaa.edu.cn), Yuanjin Yu (yuanjin.yu@bit.edu.cn)

DOI: 10.3788/COL202422.101101

    CSTR:32184.14.COL202422.101101
