Chinese Optics Letters, Volume 22, Issue 6, 061201 (2024)

Generic and flexible self-correction method for nonlinearity-induced phase error in three-dimensional imaging

Jianhua Wang1,*, Peng Xu1, and Yanxi Yang2
Author Affiliations
  • 1School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, China
  • 2School of Automation and Information Engineering, Xi’an University of Technology, Xi’an 710048, China

    In three-dimensional imaging employing phase-shifting profilometry (PSP), the nonlinear response of the projector and camera makes the fringe gray distribution non-sinusoidal, which in turn leads to phase error. Although the double 3-step phase-shifting method is simple and effective, it requires an additional set of fringe sequences, which reduces the measurement efficiency. To this end, this paper introduces a generic and flexible self-correction method for nonlinearity-induced phase error. First, according to the nonlinearity-induced phase error model, we introduce an additional wrapped phase with a phase difference of π/3. The error waveforms of the two wrapped phases are opposite but not coincident. Then, we introduce an estimation algorithm for the additional wrapped phase offset. Finally, we fuse the two wrapped phases to correct the phase error. Experiments confirm that the root mean squared error of the proposed method is 64.1% lower than that of the traditional method and 13.3% lower than that of the Hilbert transform method. The proposed method does not require any additional fringes or hardware assistance and can easily be extended to 4-step or 5-step PSP.


    1. Introduction

    In phase-shifting profilometry (PSP)[1–4], a projector with a highly linear intensity response, such as a digital light processing (DLP) projector, increases the hardware cost. An ordinary projector, however, applies a gamma transformation to improve the visual appearance of the projected pattern to the human eye; for example, the gamma value recommended by the National Television System Committee (NTSC) of the United States is 2.2. This gamma transformation makes the fringe gray distribution non-sinusoidal. Furthermore, the second-order and third-order nonlinear responses of the charge-coupled device (CCD) camera also make the fringe gray distribution non-sinusoidal. In short, the nonlinear response of the measurement system leads to phase error[5,6].

    Recently, many state-of-the-art methods have been proposed. They can be divided into seven categories. (1) Perform photometric calibration of the projector in advance to obtain its input-output response curve, thereby generating corrected projection fringes[7]. (2) Establish a mathematical model of the projector gamma and use this model to correct the projection fringes[8,9]. (3) Pre-establish a phase error look-up table and compensate the phase error according to the look-up table[10,11]. (4) Use the Hilbert transform to correct the phase error caused by the nonlinear effect[12]. (5) Use an iterative algorithm to correct the phase error caused by the nonlinear effect[13]. (6) Use a nonlinearity-induced phase error self-correction algorithm[14]. (7) Express the non-sinusoidal fringes as a high-order Fourier series and establish the phase error model they introduce; then project an additional set of fringe sequences to generate the opposite phase error and fuse the two phases to correct the phase error. This approach is referred to as the double N-step phase-shifting method[15,16].

    Huang et al.[17] proposed a double 3-step phase-shifting (PS) method. Although the double 3-step PS method is simple and effective, it requires an additional set of fringe sequences, which reduces the measurement efficiency. In this paper, we propose a generic self-correction method for nonlinearity-induced phase error (GSCN) in 3-step PSP. First, according to the nonlinearity-induced phase error model, we introduce an additional phase from the perspective of compensating the phase error. Then, we introduce an algorithm to estimate the pixel offset S and shift the additional phase by S pixels. Finally, we fuse the original and additional phases to compensate for the phase error. This paper is organized as follows. The principle is presented in Section 2. In Section 3, experiments are carried out to confirm the theoretical point of view. Section 4 concludes this paper.

    2. Principle

    If the intensity response of the measurement system is ideally linear, then the intensity of the ideal captured fringe pattern based on N-step PS can be expressed as

    $$I_n^i(x,y) = a(x,y) + b(x,y)\cos\!\left[\phi(x,y) + \frac{2\pi(n-1)}{N}\right],$$

    where a and b represent the background and modulation intensity, respectively, n ∈ [1, N], and N is the total number of phase-shift steps (N = 3 for 3-step PS). ϕ is the wrapped phase to be solved for, which can be extracted by Eq. (2):

    $$\phi(x,y) = \arctan\frac{-\sum_{n=1}^{N} I_n(x,y)\sin[2\pi(n-1)/N]}{\sum_{n=1}^{N} I_n(x,y)\cos[2\pi(n-1)/N]}.$$
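    As an illustration (not part of the original paper), a minimal NumPy sketch of Eq. (2) might look as follows; the function name and array layout are our own assumptions, and the sign convention follows the fringe model of Eq. (1).

```python
import numpy as np

def wrapped_phase(fringes):
    """Wrapped phase from N phase-shifted fringes via Eq. (2).

    fringes: array of shape (N, H, W) holding I_1 ... I_N captured with
    phase shifts 2*pi*(n-1)/N. The four-quadrant arctangent places the
    result in (-pi, pi]; sign conventions vary between papers.
    """
    N = fringes.shape[0]
    deltas = 2 * np.pi * np.arange(N) / N
    num = -np.tensordot(np.sin(deltas), fringes, axes=1)  # -sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(deltas), fringes, axes=1)   #  sum_n I_n cos(delta_n)
    return np.arctan2(num, den)
```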

    However, the nonlinear response of an ordinary projector and camera leads to a non-sinusoidal gray distribution of the captured fringe, and the intensity of the actual captured fringe pattern can be expressed by a high-order Fourier series[12],

    $$I_n^r(x,y) = b_0(x,y) + \sum_{k=1}^{K} b_k(x,y)\cos\!\left\{k\left[\phi(x,y) + \frac{2\pi(n-1)}{N}\right]\right\},$$

    where b0 represents the background intensity (the zero-frequency component), K represents the maximum order of the harmonic components, and bk is the coefficient of the kth harmonic component.

    Generally, harmonics above the fifth order have very little influence on the phase calculation[7,8]. For 3-step PS, after omitting the pixel coordinates, Eq. (3) can be rewritten as

    $$I_n^r = b_0 + \sum_{k=1}^{5} b_k\cos\!\left\{k\left[\phi + \frac{2\pi(n-1)}{3}\right]\right\}.$$

    According to Eqs. (2) and (4), the real wrapped phase (ϕr) based on 3-step PS is as follows [see Supplement 1 for the detailed calculation of Eq. (5)]:

    $$\phi^r = \arctan\frac{-\sum_{n=1}^{3}\left\{b_0 + \sum_{k=1}^{5} b_k\cos\{k[\phi + 2\pi(n-1)/3]\}\right\}\sin[2\pi(n-1)/3]}{\sum_{n=1}^{3}\left\{b_0 + \sum_{k=1}^{5} b_k\cos\{k[\phi + 2\pi(n-1)/3]\}\right\}\cos[2\pi(n-1)/3]} = \arctan\frac{b_1\sin\phi - b_2\sin(2\phi) + b_4\sin(4\phi) - b_5\sin(5\phi)}{b_1\cos\phi + b_2\cos(2\phi) + b_4\cos(4\phi) + b_5\cos(5\phi)}.$$

    The ideal wrapped phase (ϕi) can be written directly as

    $$\phi^i = \arctan\frac{\sin\phi}{\cos\phi}.$$

    The phase error introduced by the nonlinear intensity response of the measurement system is [see Supplement 2 for the detailed calculation of Eq. (7)]

    $$\Delta\phi = \phi^r - \phi^i = \arctan[\tan(\phi^r - \phi^i)] = \arctan\frac{-(b_2 - b_4)\sin(3\phi) - b_5\sin(6\phi)}{b_1 + (b_2 + b_4)\cos(3\phi) + b_5\cos(6\phi)}.$$

    Since the coefficient b1 of the first harmonic component is much larger than the coefficients of the other harmonic components, Eq. (7) can be further simplified as

    $$\Delta\phi \approx \arctan\frac{-(b_2 - b_4)\sin(3\phi) - b_5\sin(6\phi)}{b_1} \approx \frac{-(b_2 - b_4)\sin(3\phi)}{b_1} - \frac{b_5\sin(6\phi)}{b_1} = -\lambda_1\sin(3\phi) - \lambda_2\sin(6\phi) \approx -\lambda_1\sin(3\phi),$$

    where λ1 = (b2 − b4)/b1, λ2 = b5/b1, and λ1 ≫ λ2.
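    The following self-contained simulation (our own illustration; the gamma value, background, and modulation values are assumed, not taken from the paper) distorts ideal 3-step fringes with a gamma nonlinearity and checks that the resulting phase error is dominated by a sin(3ϕ) term, as Eq. (8) predicts.

```python
import numpy as np

# Assumed simulation parameters (illustrative only)
gamma, a, b = 2.2, 0.5, 0.4
phi = np.linspace(-np.pi, np.pi, 6000, endpoint=False)       # true phase over full periods
deltas = 2 * np.pi * np.arange(3) / 3

ideal = np.stack([a + b * np.cos(phi + d) for d in deltas])   # ideal sinusoidal fringes
captured = ideal ** gamma                                     # nonlinear (gamma) response

num = -(np.sin(deltas)[:, None] * captured).sum(axis=0)       # Eq. (2) with N = 3
den = (np.cos(deltas)[:, None] * captured).sum(axis=0)
phi_r = np.arctan2(num, den)

dphi = np.angle(np.exp(1j * (phi_r - phi)))                   # phase error, wrapped to (-pi, pi]

# Least-squares amplitude of the sin(3*phi) component (an estimate of -lambda_1)
c3 = 2 * np.mean(dphi * np.sin(3 * phi))
residual = np.std(dphi - c3 * np.sin(3 * phi))
print(f"sin(3*phi) error amplitude ~ {c3:.4f} rad, residual std ~ {residual:.5f} rad")
```

    The small residual is left mainly by the weaker sin(6ϕ) term, consistent with λ1 ≫ λ2.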

    The phase error opposite to that of Eq. (8) is

    $$\Delta\phi_O = -\lambda_1\sin(3\phi + \pi) = -\lambda_1\sin[3(\phi + \pi/3)].$$

    The captured fringe patterns based on 3-step PS are

    $$I_1 = a + b\cos\phi,\qquad I_2 = a + b\cos(\phi + 2\pi/3),\qquad I_3 = a + b\cos(\phi + 4\pi/3).$$

    According to Eqs. (2) and (10), the wrapped phase ϕ based on 3-step PS is

    $$\phi = \arctan\frac{\sqrt{3}\,(I_2 - I_3)}{2I_1 - I_2 - I_3}.$$

    According to Eq. (9), we further need ϕA = ϕ + π/3. The following results can be obtained from Eq. (10):

    $$a = \frac{I_1 + I_2 + I_3}{3},\qquad b\sin\phi = \frac{I_2 - I_3}{\sqrt{3}},\qquad b\cos\phi = \frac{2I_1 - I_2 - I_3}{3}.$$

    Further, we can obtain

    $$b\sin\phi_A = b\sin(\phi + \pi/3) = \frac{b}{2}\sin\phi + \frac{\sqrt{3}\,b}{2}\cos\phi = \frac{I_2 - I_3}{2\sqrt{3}} + \frac{2I_1 - I_2 - I_3}{2\sqrt{3}} = \frac{I_1 - I_3}{\sqrt{3}},$$

    $$b\cos\phi_A = b\cos(\phi + \pi/3) = \frac{b}{2}\cos\phi - \frac{\sqrt{3}\,b}{2}\sin\phi = \frac{2I_1 - I_2 - I_3}{6} - \frac{I_2 - I_3}{2} = \frac{I_1 - 2I_2 + I_3}{3}.$$

    The additional wrapped phase ϕA based on 3-step PS is

    $$\phi_A = \phi + \pi/3 = \arctan\frac{\sqrt{3}\,(I_1 - I_3)}{I_1 - 2I_2 + I_3}.$$
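    A direct NumPy implementation of Eqs. (11) and (15) could look like the sketch below (our own function; I1, I2, and I3 are assumed to be the three captured fringe images as floating-point arrays).

```python
import numpy as np

def three_step_phases(I1, I2, I3):
    """Original and additional wrapped phases from the same 3-step fringes.

    phi follows Eq. (11) and phi_A = phi + pi/3 follows Eq. (15); both use
    the four-quadrant arctangent, so the results lie in (-pi, pi].
    """
    phi = np.arctan2(np.sqrt(3) * (I2 - I3), 2 * I1 - I2 - I3)    # Eq. (11)
    phi_A = np.arctan2(np.sqrt(3) * (I1 - I3), I1 - 2 * I2 + I3)  # Eq. (15)
    return phi, phi_A
```

    Because ϕA is computed from the same three images as ϕ, no extra fringe needs to be projected, which is the basis of the efficiency advantage discussed above.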

    A fringe pattern of the measured object is shown in Fig. 1(a). According to Eqs. (11) and (15), the original wrapped phase ϕ and the additional wrapped phase ϕA can be obtained, as shown in Figs. 1(b) and 1(c). We extract the cross sections at the red lines in Figs. 1(b) and 1(c) to obtain the local wrapped phase information shown in Fig. 1(d). Careful observation of ϕ and ϕA in Fig. 1(d) shows that their fluctuations are opposite. If we can estimate the pixel offset S, shift ϕA left by S pixels, and delete the S pixels on the left and right of the original and additional wrapped phases, then we obtain ϕS and ϕAS in Fig. 1(e). Further, we fuse ϕS and ϕAS to get ϕF, as shown in Fig. 1(e). In ϕF, the phase error introduced by the nonlinear intensity response of the measurement system is significantly reduced. From the above analysis, the key is estimating the pixel offset S between ϕ and ϕA in Fig. 1(d). Therefore, we propose an S estimation algorithm (see Supplement 3 for the pseudocode), whose steps are described below.


    Figure 1. Illustration of the additional wrapped phase translation and phase error correction. (a) Captured fringe. (b) Original wrapped phase ϕ. (c) Additional wrapped phase ϕA. (d) Cross sections in panels (b) and (c). (e) Generation of ϕS, ϕA-S, and ϕF.

    Step 1: For the wrapped phase of the ith row, wherever the wrapped phase jumps, the difference between adjacent pixels must be greater than π. We therefore search for the wrapped phase jumps and record their column coordinates in the matrices OWP_co_index and AWP_co_index.

    Step 2: When calculating the pixel offset Si of the ith row, if either of the two situations in Fig. 2 occurs, Si cannot be calculated. The first case is that ϕA has one more phase jump than ϕ, as shown in Fig. 2(a). The second case is that ϕ has one more phase jump than ϕA, as shown in Fig. 2(b). When either situation occurs, the estimate of Si is inaccurate, and the row is discarded.


    Figure 2. Two situations that cause a row of the wrapped phase to fail to yield S. (a) ϕA has one more phase jump than ϕ. (b) ϕ has one more phase jump than ϕA.

    Step 3: For each ith row that meets the requirement of Step 2, we subtract the corresponding column coordinates recorded in OWP_co_index and AWP_co_index and then take the average value, obtaining the pixel offset Si of the ith row.

    Step 4: We repeat Steps 1 to 3 and take the mean value of all Si, so as to obtain S of the entire wrapped phase.
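    Supplement 3 gives the authors' pseudocode; the sketch below is our own NumPy interpretation of Steps 1–4, followed by the shift-crop-fuse operation of Fig. 1(e). The orientation of the shift and the use of a per-pixel average as the fusion are assumptions on our part.

```python
import numpy as np

def estimate_offset(phi, phi_A, jump_thresh=np.pi):
    """Estimate the pixel offset S between phi and phi_A (Steps 1-4)."""
    S_rows = []
    for row_o, row_a in zip(phi, phi_A):
        # Step 1: columns where the wrapped phase jumps (difference > pi).
        owp_co_index = np.where(np.abs(np.diff(row_o)) > jump_thresh)[0]
        awp_co_index = np.where(np.abs(np.diff(row_a)) > jump_thresh)[0]
        # Step 2: discard rows whose two phases have different numbers of jumps.
        if len(owp_co_index) == 0 or len(owp_co_index) != len(awp_co_index):
            continue
        # Step 3: average difference of corresponding jump positions -> S_i
        # (interpreted here as how far phi_A must move left to align with phi).
        S_rows.append(np.mean(awp_co_index - owp_co_index))
    # Step 4: average over all valid rows.
    return int(round(np.mean(S_rows)))

def fuse_phases(phi, phi_A, S):
    """Shift phi_A left by S pixels, crop S pixels on both sides, and fuse."""
    phi_AS = np.roll(phi_A, -S, axis=1)              # align phi_A with phi
    phi_S, phi_AS = phi[:, S:-S], phi_AS[:, S:-S]    # drop the S-pixel borders (incl. wrapped-around columns)
    return 0.5 * (phi_S + phi_AS)                    # per-pixel average (assumed fusion)
```

    In this sketch S is assumed positive, i.e., the jumps of ϕA lie to the right of those of ϕ as in Fig. 1(d); a negative value would simply reverse the shift direction.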

    3. Experiments

    To validate the effectiveness of the proposed method, we constructed a three-dimensional (3D) measurement system and conducted comparative experiments. The 3D measurement system included a projector with a resolution of 1280 × 800 pixels (NP-M311W+), a camera with a resolution of 1024 × 1280 pixels (MER-131-210U3M), and a computer with an Intel Core i5-4258U CPU. We adopted MATLAB for the 3D reconstruction.

    In the first experiment, we used the fox mask in Fig. 1(a) as the measured object. The reconstructed results using the traditional 3-step PS method and the proposed GSCN method are shown in Figs. 3(a) and 3(b), respectively. It is evident that Fig. 3(a) exhibits a significant amount of dense ripples on the 3D surface shape, whereas Fig. 3(b) shows a smoother result. To further evaluate the error correction performance of the proposed method, Fig. 3(c) compares the cross sections along the red line segments in Figs. 3(a) and 3(b), while Fig. 3(d) provides a close-up view of Fig. 3(c). In Fig. 3(d), the phase obtained using the traditional 3-step PS method shows significant periodic errors, while the periodic error of the proposed GSCN method is significantly suppressed. Further quantitative comparisons are provided in the two experiments of Figs. 6 and 7.


    Figure 3. 3D reconstruction results of the fox mask. (a) Traditional 3-step PS method. (b) Proposed GSCN method. (c) Comparison of the cross sections of the reconstruction results. (d) Close-up of the cross sections.

    The second experiment used a Venus statue as the measured object. Figures 4(a)–4(c) show the captured deformed fringe patterns, the wrapped phase obtained by the 3-step PS method, and the wrapped phase obtained by the proposed GSCN method, respectively. Figure 4(d) presents a comparison between segments of the wrapped phase taken at the red line positions in Figs. 4(b) and 4(c). Figure 4(e) presents the result of the pixel shifting and the fusion of the two shifted wrapped phases, where the fused wrapped phase no longer exhibits periodic fluctuations. Figures 4(f) and 4(g) show the 3D reconstructed surfaces using the 3-step PS method and its self-corrected result, respectively. The self-corrected 3D surface no longer exhibits ripples and is closer to the ground truth. Figure 4(h) compares a cross section at the red line position in Fig. 4(f) with the corresponding section in Fig. 4(g), and Fig. 4(i) provides a close-up view. The proposed GSCN method effectively reduces the influence of the system nonlinearity and significantly improves the reconstruction accuracy.


    Figure 4. 3D reconstruction results of the Venus statue. (a) Deformed fringe patterns. (b) Original wrapped phase ϕ. (c) Additional wrapped phase ϕA. (d) Comparison of a section of the wrapped phase ϕ and ϕA. (e) Generation of ϕS, ϕA-S, and ϕF. (f) 3-step PS method. (g) Proposed GSCN method. (h) Cross-section comparison of the reconstruction results. (i) Close-up of the cross section.

    To confirm the effectiveness of the proposed GSCN method on a specimen with a stepped surface, we used the dental cast shown in Fig. 5(a) as the measured object. Figure 5(b) shows a captured fringe pattern. Figures 5(c) and 5(d) show the 3D reconstructed surfaces using the 3-step PS method and the proposed GSCN method, respectively. We extract the cross sections along the red dashed lines in Figs. 5(c) and 5(d) to obtain Fig. 5(e). It can be seen from Fig. 5(e) that the proposed GSCN method is also effective for step surfaces. For a more detailed comparison, we further enlarged a local area of Fig. 5(e) to obtain Fig. 5(f), which shows that the proposed GSCN method greatly reduces the system nonlinearity-induced phase errors.


    Figure 5. 3D reconstruction results of the dental cast. (a) Dental cast with stepped surfaces. (b) Captured fringe pattern. (c) Traditional 3-step PS method. (d) Proposed GSCN method. (e) Comparison of the cross sections of the reconstruction results. (f) Close-up of the cross sections.

    To further compare the reconstruction accuracy of the proposed GSCN method with that of other existing methods, we used two isolated complex objects (a David statue and a dental cast) as the measured objects. Figure 6(a) shows the captured deformed fringe pattern, and Fig. 6(b) shows the fused wrapped phase (ϕF). We extract a row of locally wrapped phases of ϕS, ϕAS, and ϕF to obtain Fig. 6(c). It can be seen that the fluctuation of the fused wrapped phase (ϕF) has been significantly suppressed. Figures 6(d)–6(h) show the reconstruction results using the traditional 3-step phase-shifting method, the proposed method, the Hilbert transform method, the double 3-step phase-shifting method, and the 12-step phase-shifting method, respectively.


    Figure 6. 3D reconstruction results of the David statue and dental cast. (a) Deformed fringe patterns. (b) Fused wrapped phase. (c) Wrapped phase comparison. (d) 3-step PS method. (e) Proposed GSCN method. (f) Hilbert transform method. (g) Double 3-step PS method. (h) 12-step PS method. (i) Phase errors of the four methods.

    By comparing Figs. 6(d) and 6(e), it is evident that the proposed method effectively eliminates the wrinkles on the surfaces of the David statue and dental cast, indicating its good error correction performance for complex objects. The reconstructed surfaces in Figs. 6(f)–6(h) no longer exhibit pronounced water ripples. However, the reconstructed surface obtained using the Hilbert transform method shows some holes. This is attributed to the Hilbert transform's sensitivity to shadows and height changes, which produces significant errors in the additional wrapped phase and subsequently causes holes in the reconstructed surface. We take the phase cross section obtained using the 12-step PS method as the ground truth and obtain the phase errors of each method shown in Fig. 6(i). The cross section from the traditional 3-step PS method not only exhibits the largest phase error but also has strong fluctuations. The differences among the proposed method, the Hilbert transform method, and the double 3-step PS method are less pronounced.

    To further quantitatively compare the reconstruction accuracy of different methods for this experiment, we used the reconstruction result obtained from the 12-step phase-shifting method as the ground truth. We then calculated the root mean squared error (RMSE), mean absolute error (MAE), and largest error (LE) for each method. The results are shown in Table 1.


      Table 1. Phase Error Comparison of the David Statue and Dental Cast (in rad)

      Approach    3-step    Proposed    Double 3-step    Hilbert
      RMSE        0.1557    0.0559      0.0416           0.0645
      MAE         0.4176    0.1413      0.0786           0.1216
      LE          0.4833    0.2909      3.3836           3.3382
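    For completeness, a minimal sketch of the standard definitions of these three metrics (our own helper; `phase` and `ground_truth` are assumed to be co-registered phase maps with invalid pixels already masked out):

```python
import numpy as np

def phase_error_metrics(phase, ground_truth):
    """RMSE, MAE, and largest error (LE) of a phase map against a reference."""
    err = np.asarray(phase) - np.asarray(ground_truth)
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    le = np.max(np.abs(err))
    return rmse, mae, le
```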

    The proposed GSCN method has lower RMSE, MAE, and LE compared to the traditional 3-step phase-shifting method. Additionally, the RMSE and LE of the proposed GSCN method are lower than those of the Hilbert transform method. Taking the RMSE as an example, the RMSE of the proposed method is 64.1% lower than that of the traditional method and 13.3% lower than that of the Hilbert transform method. It is worth noting that both the Hilbert transform method and the double 3-step phase-shifting method have significantly larger LE values than the proposed self-correction method. This discrepancy can be attributed to the errors introduced by the additional wrapped phase in the Hilbert transform and double 3-step phase-shifting methods. In contrast, the additional wrapped phase in the proposed method is calculated directly from the original fringe patterns, enabling it to resist noise and other disturbances as well as the original wrapped phase does. It should also be noted that the running cost of the proposed method is slightly higher. The running times of the traditional 3-step phase-shifting method, the double 3-step phase-shifting method, and the proposed algorithm are approximately 1.023 s, 2.058 s, and 2.419 s, respectively; the running cost of the proposed algorithm is 17.5% higher than that of the double 3-step phase-shifting method. In the future, a graphics processing unit (GPU) can be adopted to accelerate image processing and reduce the running cost of the proposed algorithm.

    Last, we conducted a 3D surface reconstruction of a Rui-Shou mask, as shown in Fig. 7(a). Figure 7(b) shows the captured fringe pattern, and Fig. 7(c) shows the result of the Hilbert transform applied to Fig. 7(b). A comparison is made by selecting a segment of intensity at the positions indicated by the blue and red lines in Figs. 7(b) and 7(c), as shown in Fig. 7(d). The blue line represents the intensity of Fig. 7(b), while the red line represents the intensity of Fig. 7(c). Comparing them, the red line is shifted by π/2 relative to the blue line, and its background intensity has been filtered out. Figures 7(e)–7(h) show the reconstruction results using the 12-step phase-shifting method, the 3-step phase-shifting method, the proposed method, and the Hilbert transform method, respectively. Both the proposed method and the Hilbert transform method effectively eliminate the nonlinear errors on the reconstructed surface. However, the reconstruction result of the Hilbert transform method exhibits some holes: the Hilbert transform cannot effectively eliminate the background intensity and is sensitive to height jumps, so some correct intensity values are lost in the Hilbert-transformed deformed fringe.


    Figure 7. Reconstruction results of the Rui-Shou mask. (a) Real image. (b) Deformed fringe pattern. (c) Hilbert transform result of (b). (d) Intensity comparison. (e) Reconstruction result using the 12-step phase-shifting method. (f) Reconstruction result using the 3-step phase-shifting method. (g) Reconstruction result using the proposed method. (h) Reconstruction result using the Hilbert transform method.

    We also conducted quantitative comparisons for this experiment. By using the reconstruction results obtained with the 12-step phase-shifting method as the ground truth, we calculated the MAE, LE, and RMSE, as shown in Table 2. The proposed method achieved a lower RMSE compared to the Hilbert transform method, indicating higher accuracy of the proposed method. Furthermore, the maximum error of the Hilbert transform method was significantly larger than that of the 3-step phase-shifting method and the proposed method, consistent with the analysis mentioned above.


      Table 2. Phase Error Comparison of the Rui-Shou Mask (in rad)

      Method      MAE       LE        RMSE
      3-step      0.4131    0.5408    0.1592
      Proposed    0.1192    0.3142    0.0480
      Hilbert     0.1161    3.2662    0.0642

    The proposed GSCN method is effective for objects with both continuous and stepped surfaces, but the following issues are worth noting. (1) The proposed GSCN method slightly increases the running cost; for scenarios with very high real-time requirements, a GPU implementation is recommended. (2) Because of the pixel offset S of the additional wrapped phase, a strip of S pixels on the left or right of the surface is not error-corrected. Since a low fringe frequency decreases the 3D measurement precision, a high fringe frequency is generally used, so the uncorrected portion of the 3D surface is very small. Therefore, the proposed GSCN method retains its universality in correcting the error introduced by the nonlinear response of the system.

    4. Conclusion

    In this paper, we propose a generic and flexible self-correction method for nonlinearity-induced phase error in phase-shifting profilometry. Applied to 3-step PSP, the proposed method can accurately compensate for the nonlinearity-induced phase error in any 3D measurement system. The contributions and novelty of the proposed method are as follows. (1) The proposed method requires only a small amount of additional code to perform the phase error compensation automatically and flexibly, without additional fringe patterns or pre-calibration. (2) The proposed method is not affected by the 3D measurement system or its parameters and has good universality for different 3D measurement systems. (3) The proposed method is simple and easy to implement. (4) The proposed method does not require any additional fringes or hardware assistance and can easily be extended to 4-step or 5-step PSP. When an ordinary projector is used in 3D measurement to reduce hardware cost, the proposed method has potential for practical application.

    Paper Information

    Category: Instrumentation, Measurement, and Optical Sensing

    Received: Dec. 19, 2023

    Accepted: Feb. 19, 2024

    Posted: Feb. 19, 2024

    Published Online: Jun. 20, 2024

    The Author Email: Jianhua Wang (wangjianhua@qut.edu.cn)

    DOI:10.3788/COL202422.061201

    CSTR:32184.14.COL202422.061201
