Chinese Optics Letters, Volume 21, Issue 10, 101202 (2023)

π-phase-shifted two-plus-one method for non-diffuse surface

Jianhua Wang1,*, Yanxi Yang2, and Peng Xu1
Author Affiliations
  • 1School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, China
  • 2School of Automation and Information Engineering, Xi’an University of Technology, Xi’an 710048, China

    We propose a method for reconstructing non-diffuse surfaces based on the π-phase-shifted two-plus-one phase-shifting method. First, we introduce a 2fH + a + 2fM + 2fL method for unwrapped phase extraction. Subsequently, we introduce a new set of π-phase-shifted 2fH + a/2 + 2fM + 2fL fringe patterns with halved background intensity. Saturated pixels are then replaced with the corresponding unsaturated pixels from the π-phase-shifted fringe patterns. Finally, we analyze eight fringe replacement cases, give the corresponding phase calculations, and derive general formulas. Experiments confirm that the sum of the phase error of the proposed method is 81.4% lower than that of the traditional method, and 61.5% lower than that of the adaptive fringe projection method.


    1. Introduction

    Fringe projection profilometry (FPP) is one of the most widely used structured light three-dimensional (3D) reconstruction methods[1-4]. For many common highly reflective objects, local intensity saturation of the captured fringe pattern leads to coding distortion, which makes it difficult for FPP to correctly extract the phase information[5]. A large number of state-of-the-art methods have been proposed to address this issue; they are collectively called high-dynamic-range (HDR) techniques. Commonly used HDR techniques fall into three categories. (a) Methods based on multiple exposure adjustment[6-8]. Zhang and Yau[6] first proposed the exposure time adjustment method. These methods usually have a good image signal-to-noise ratio (SNR); however, the exposure time range is difficult to quantify, and the large number of fringes sacrifices measurement efficiency. (b) Methods based on adaptive fringe projection[9-11]. These methods adjust the pixel intensity of the projected fringe pattern according to the surface reflectivity. However, when the projected intensity decreases, the SNR of the captured fringe pattern decreases as well; moreover, for strongly reflective surfaces, adjusting the pixel intensity alone is often not enough. (c) Methods based on polarizing filters[12]. Specular reflected light is polarized, so this principle can be used to eliminate fringe saturation. However, it requires additional hardware and complex on-site adjustment, and it reduces the SNR in dark areas.

    In this work, we introduce a 2fH + a + 2fM + 2fL method for unwrapped phase extraction. For 3D reconstruction of non-diffuse surfaces, we introduce a new set of π-phase-shifted 2fH + a/2 + 2fM + 2fL fringe patterns with halved background intensity. If a pixel in the original fringe pattern is saturated, the corresponding pixel from the π-phase-shifted fringe pattern is selected for replacement. Eight replacement methods and their general formulas are given in detail.

    2. Principle

    Zhang et al.[13] proposed a two-plus-one phase-shifting method, which can reduce the motion-induced phase error. Based on Zhang's algorithm, we introduce a 2fH + a + 2fM + 2fL algorithm (two high-frequency fringes, a background image, two mid-frequency fringes, and two low-frequency fringes) to calculate the unwrapped phase. For temporal phase unwrapping using the multifrequency method in this work, fL = 1/W, where W is the width of the projected fringe pattern. The captured fringe patterns can be expressed as

    g1(x,y) = a(x,y) + b(x,y)sin φH(x,y)
    g2(x,y) = a(x,y)
    g3(x,y) = a(x,y) + b(x,y)cos φH(x,y)
    g4(x,y) = a(x,y) + b(x,y)sin φM(x,y)
    g5(x,y) = a(x,y) + b(x,y)cos φM(x,y)
    g6(x,y) = a(x,y) + b(x,y)sin φL(x,y)
    g7(x,y) = a(x,y) + b(x,y)cos φL(x,y),     (1)

    where a and b represent the background and modulation intensities of the captured fringe patterns.

    The wrapped phases can be expressed as

    ψH(x,y) = arctan{[g1(x,y) − g2(x,y)] / [g3(x,y) − g2(x,y)]}
    ψM(x,y) = arctan{[g4(x,y) − g2(x,y)] / [g5(x,y) − g2(x,y)]}
    ψL(x,y) = arctan{[g6(x,y) − g2(x,y)] / [g7(x,y) − g2(x,y)]}.     (2)

    Since fL = 1/W, the unwrapped phase φL is equal to the wrapped phase ψL. The final unwrapped phase φH can be expressed as[14]

    φL(x,y) = ψL(x,y)
    φM(x,y) = ψM(x,y) + 2π × round{[(fM/fL)φL(x,y) − ψM(x,y)] / (2π)}
    φH(x,y) = ψH(x,y) + 2π × round{[(fH/fM)φM(x,y) − ψH(x,y)] / (2π)},     (3)

    where "round" is the rounding function.
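    To make Eqs. (2) and (3) concrete, the following Python/NumPy sketch (our illustration, not the authors' code) computes the three wrapped phases and unwraps them with the multifrequency method; the function and variable names are ours, and np.arctan2 is used in place of arctan so the wrapped phase covers the full 2π range.

```python
import numpy as np

def wrapped_phases(g1, g2, g3, g4, g5, g6, g7):
    """Eq. (2): wrapped phases of the 2fH + a + 2fM + 2fL pattern set (float images)."""
    psi_H = np.arctan2(g1 - g2, g3 - g2)  # b*sin(phi_H) over b*cos(phi_H)
    psi_M = np.arctan2(g4 - g2, g5 - g2)
    psi_L = np.arctan2(g6 - g2, g7 - g2)
    return psi_H, psi_M, psi_L

def unwrap_three_frequency(psi_H, psi_M, psi_L, f_H, f_M, f_L):
    """Eq. (3): temporal unwrapping; phi_L = psi_L because f_L = 1/W."""
    phi_L = psi_L
    phi_M = psi_M + 2 * np.pi * np.round(((f_M / f_L) * phi_L - psi_M) / (2 * np.pi))
    phi_H = psi_H + 2 * np.pi * np.round(((f_H / f_M) * phi_M - psi_H) / (2 * np.pi))
    return phi_H
```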

    To accurately calculate the three wrapped and unwrapped phases, none of the seven fringes in Eq. (1) may saturate; if, at a given pixel, any of the seven patterns saturates, a phase error results. To this end, a π-phase-shifted 2fH + a/2 + 2fM + 2fL method is introduced, whose fringe patterns are described as[15]

    g1'(x,y) = a(x,y) − b(x,y)sin φH(x,y)
    g2'(x,y) = a(x,y)/2
    g3'(x,y) = a(x,y) − b(x,y)cos φH(x,y)
    g4'(x,y) = a(x,y) − b(x,y)sin φM(x,y)
    g5'(x,y) = a(x,y) − b(x,y)cos φM(x,y)
    g6'(x,y) = a(x,y) − b(x,y)sin φL(x,y)
    g7'(x,y) = a(x,y) − b(x,y)cos φL(x,y).     (4)

    Refer to Appendix A for the calculation details of Eq. (4). Three saturation cases can occur: (a) the pixels of any one, two, or three of the high-frequency patterns in Eq. (1) are saturated; (b) the pixels of one or both of the mid-frequency patterns in Eq. (1) are saturated; and (c) the pixels of one or both of the low-frequency patterns in Eq. (1) are saturated. In each case, we substitute the pixels of the corresponding pattern in Eq. (4). The background image a in Eq. (1) may also saturate because of the high reflection of the measured surface. To this end, we reduce the projected background intensity by half; see Appendix A for details. The captured π-phase-shifted background intensity is then also approximately halved, i.e., a(x,y)/2. When a pixel in the original background image is saturated, it is replaced by the corresponding pixel in the π-phase-shifted background image for the wrapped phase calculation. A sketch of how the two projected sequences could be generated is given below.
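    As a rough illustration of the projected patterns (a sketch under our own assumptions, not the authors' projection code), the snippet below generates the 2fH + a + 2fM + 2fL sequence of Eq. (1) and the π-phase-shifted 2fH + a/2 + 2fM + 2fL sequence of Eq. (4) for a projector of width W; the amplitudes A and B, and the frequency values expressed in cycles per pixel (e.g., 15/912), are example choices.

```python
import numpy as np

def fringe_sequence(W, H, f_H, f_M, f_L, A=127.5, B=127.5, pi_shifted=False):
    """Return the seven patterns [g1, g2, g3, g4, g5, g6, g7] of Eq. (1) or Eq. (4)."""
    x = np.arange(W)
    sign = -1.0 if pi_shifted else 1.0          # the pi shift flips the sign of the sinusoid
    background = A / 2 if pi_shifted else A     # background image: a or a/2
    patterns = []
    for f in (f_H, f_M, f_L):
        phase = 2 * np.pi * f * x               # f in cycles per pixel, e.g., 15/912
        patterns.append(A + sign * B * np.sin(phase))
        patterns.append(A + sign * B * np.cos(phase))
    sin_H, cos_H, sin_M, cos_M, sin_L, cos_L = [np.tile(p, (H, 1)) for p in patterns]
    g2 = np.full((H, W), background)
    return [sin_H, g2, cos_H, sin_M, cos_M, sin_L, cos_L]

# Example usage for a 912 x 1140 projector (frequency values are illustrative)
original = fringe_sequence(912, 1140, 180/912, 15/912, 1/912)
shifted = fringe_sequence(912, 1140, 180/912, 15/912, 1/912, pi_shifted=True)
```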

    There are eight cases for the high-frequency wrapped phase calculation, as shown below.

    Case 1: The three high-frequency fringe patterns are not saturated:
    ψH(x,y) = arctan{[g1(x,y) − g2(x,y)] / [g3(x,y) − g2(x,y)]}.     (5)

    Case 2: g1(x,y) is saturated and replaced by g1'(x,y):
    ψH(x,y) = arctan{[g2(x,y) − g1'(x,y)] / [g3(x,y) − g2(x,y)]}.     (6)

    Case 3: g2(x,y) is saturated and replaced by g2'(x,y):
    ψH(x,y) = arctan{[g1(x,y) − 2g2'(x,y)] / [g3(x,y) − 2g2'(x,y)]}.     (7)

    Case 4: g3(x,y) is saturated and replaced by g3'(x,y):
    ψH(x,y) = arctan{[g1(x,y) − g2(x,y)] / [g2(x,y) − g3'(x,y)]}.     (8)

    Case 5: g1(x,y) and g2(x,y) are saturated and replaced by g1'(x,y) and g2'(x,y):
    ψH(x,y) = arctan{[2g2'(x,y) − g1'(x,y)] / [g3(x,y) − 2g2'(x,y)]}.     (9)

    Case 6: g2(x,y) and g3(x,y) are saturated and replaced by g2'(x,y) and g3'(x,y):
    ψH(x,y) = arctan{[g1(x,y) − 2g2'(x,y)] / [2g2'(x,y) − g3'(x,y)]}.     (10)

    Case 7: g1(x,y) and g3(x,y) are saturated and replaced by g1'(x,y) and g3'(x,y):
    ψH(x,y) = arctan{[g2(x,y) − g1'(x,y)] / [g2(x,y) − g3'(x,y)]}.     (11)

    Case 8: All three high-frequency fringe patterns are saturated and replaced:
    ψH(x,y) = arctan{[2g2'(x,y) − g1'(x,y)] / [2g2'(x,y) − g3'(x,y)]}.     (12)

    Equations (5)–(12) can be expressed by the following general formula:

    ψH(x,y) = arctan{(−1)^C1 [G1(x,y) − G2(x,y)] / ((−1)^C2 [G3(x,y) − G2(x,y)])}.     (13)

    In Eq. (13), if g1(x,y) is not replaced, G1(x,y) = g1(x,y) and C1 = 0; otherwise, G1(x,y) = g1'(x,y) and C1 = 1. If g2(x,y) is not replaced, G2(x,y) = g2(x,y); otherwise, G2(x,y) = 2g2'(x,y). If g3(x,y) is not replaced, G3(x,y) = g3(x,y) and C2 = 0; otherwise, G3(x,y) = g3'(x,y) and C2 = 1.

    Similar to Eq. (13), the mid-frequency and low-frequency wrapped phases can be expressed as

    ψM(x,y) = arctan{(−1)^C3 [G4(x,y) − G2(x,y)] / ((−1)^C4 [G5(x,y) − G2(x,y)])},     (14)
    ψL(x,y) = arctan{(−1)^C5 [G6(x,y) − G2(x,y)] / ((−1)^C6 [G7(x,y) − G2(x,y)])}.     (15)

    In Eqs. (14) and (15), if g4(x,y) and g6(x,y) are not replaced, G4(x,y) = g4(x,y), G6(x,y) = g6(x,y), C3 = 0, and C5 = 0; otherwise, G4(x,y) = g4'(x,y), G6(x,y) = g6'(x,y), C3 = 1, and C5 = 1. If g2(x,y) is not replaced, G2(x,y) = g2(x,y); otherwise, G2(x,y) = 2g2'(x,y). If g5(x,y) and g7(x,y) are not replaced, G5(x,y) = g5(x,y), G7(x,y) = g7(x,y), C4 = 0, and C6 = 0; otherwise, G5(x,y) = g5'(x,y), G7(x,y) = g7'(x,y), C4 = 1, and C6 = 1.
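    The per-pixel selection in Eqs. (13)–(15) can be vectorized as in the following sketch (our interpretation, with an assumed saturation level of 255 for 8-bit images); it returns the wrapped phase for one frequency, with the replacements and sign factors chosen from saturation masks of the original patterns.

```python
import numpy as np

SAT = 255  # assumed saturation level for 8-bit captured images

def wrapped_phase_hdr(g_sin, g_cos, g_bg, gp_sin, gp_cos, gp_bg):
    """General formula for one frequency, e.g., Eq. (13) with (g1, g3, g2) and (g1', g3', g2')."""
    sat_sin = g_sin >= SAT                      # saturation masks of the original patterns
    sat_cos = g_cos >= SAT
    sat_bg = g_bg >= SAT

    G_sin = np.where(sat_sin, gp_sin, g_sin)    # G1 = g1 or g1'
    G_cos = np.where(sat_cos, gp_cos, g_cos)    # G3 = g3 or g3'
    G_bg = np.where(sat_bg, 2.0 * gp_bg, g_bg)  # G2 = g2 or 2*g2'
    sign_sin = np.where(sat_sin, -1.0, 1.0)     # (-1)^C1
    sign_cos = np.where(sat_cos, -1.0, 1.0)     # (-1)^C2

    return np.arctan2(sign_sin * (G_sin - G_bg), sign_cos * (G_cos - G_bg))

# Usage with the two captured sequences of Eqs. (1) and (4):
# psi_H = wrapped_phase_hdr(g1, g3, g2, g1p, g3p, g2p)
# psi_M = wrapped_phase_hdr(g4, g5, g2, g4p, g5p, g2p)
# psi_L = wrapped_phase_hdr(g6, g7, g2, g6p, g7p, g2p)
```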

    A mouse surface has large local highlights from the camera's perspective. Figures 1(a) and 1(b) show the original and π-phase-shifted fringe patterns, g1 and g1'. Figure 1(c) shows the cross sections of Figs. 1(a) and 1(b); some pixels are saturated. Figure 1(d) is a partial enlargement of Fig. 1(c). Where the pixels in fringe g1 are saturated, the corresponding pixels in fringe g1' are not, so the saturated pixels of g1 can be replaced by the pixels of g1' for the phase calculation. However, due to locally strong reflections, some pixels may be saturated in both the original and the π-phase-shifted fringe patterns. The proposed method (PM) obviously cannot correctly calculate the phase for such pixels, but their number is relatively small; they can be flagged as shown in the sketch below. Therefore, the PM can still greatly reduce the phase error introduced by intensity saturation.
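    Pixels that saturate in both sequences cannot be recovered by the replacement; a small sketch for flagging them (the 255 threshold is our assumption) is:

```python
import numpy as np

SAT = 255  # assumed saturation level for 8-bit captured images

def unrecoverable_mask(originals, shifted):
    """True where some pattern pair of Eqs. (1) and (4) is saturated in both sequences."""
    both = [(g >= SAT) & (gp >= SAT) for g, gp in zip(originals, shifted)]
    return np.any(both, axis=0)
```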


    Figure 1. Original and π-phase-shifted fringe patterns. (a) Original fringe; (b) π-phase-shifted fringe; (c) cross sections of (a) and (b); (d) partial enlargement of Fig. 1(c).

    3. Experiments

    Our FPP system includes a digital light processing (DLP) projector from Texas Instruments (TI) with a resolution of 912 pixels × 1140 pixels, a camera from Daheng Imaging with a resolution of 1024 pixels × 1280 pixels, and a computer.

    In the first experiment, the measured object is a Peking Opera mask, and a captured fringe is shown in Fig. 2(a). Figure 2(b) shows the 3D reconstruction by the traditional method (TM), and Fig. 2(c) shows the 3D reconstruction by the PM. The surface reconstructed by the TM has missing regions due to intensity saturation, while the surface reconstructed by the PM is more complete. To further demonstrate the robustness of the PM, the large-area, strongly reflective object shown in Fig. 3(a) is measured; 3D measurement of such an object is among the most challenging. Figure 3(b) shows the 3D reconstruction by the TM, and Fig. 3(c) shows the 3D reconstruction by the PM. Comparing Figs. 3(b) and 3(c), the error introduced by saturated pixels is greatly reduced by the PM, which further confirms its effectiveness.


    Figure 2. 3D shape reconstruction of a mask under image saturation. (a) Original fringes with frequency 15/912; (b) 3D reconstruction by the TM; (c) 3D reconstruction by the PM.


    Figure 3. 3D shape reconstruction of a Chinese porcelain. (a) Original fringes with frequency 15/912; (b) 3D reconstruction by the TM; (c) 3D reconstruction by the PM.

    Experimental comparisons using the TM, the adaptive fringe projection method (AFPM), the multiple exposure adjustment method (MEAM), and the PM are carried out. The measured object is a metal product manufactured by a high-speed computerized numerical control (CNC) machining center. Figure 4(a) shows a fringe g4 with a frequency of 15/912 from a fringe sequence of the TM. The PM requires an additional set of π-phase-shifted fringe sequences; Fig. 4(b) shows the fringe g4' from this additional π-phase-shifted sequence. The pixel gray values in fringes g4 and g4' follow approximately inverted cosine distributions. The key to the AFPM is the generation of an adaptive projection fringe sequence; a fringe with a frequency of 15/912 from the adaptive projected fringe sequence generated in this work is shown in Fig. 4(c). The MEAM (24 exposures) is used, and 24 sets of fringe sequences are captured. We extract the optimal fringes, with the largest unsaturated gray levels, from the 24 sets and perform the phase calculation based on this optimal fringe sequence. Figure 4(d) shows a fringe g1 with a frequency of 180/912 at an exposure time of 15 ms. The captured fringe contains intensity-saturated regions, such as regions ①, ②, and ③ in Fig. 4(d). Figure 4(e) shows the fringe g1 from the optimal set of fringe sequences; the pixel intensity in regions ①, ②, and ③ in Fig. 4(e) is clearly suppressed. When the exposure time range is wide enough and the number of exposures is large enough, a good final fringe sequence can be extracted by the MEAM for phase calculation. Figure 4(f) shows the 3D surface reconstructed using the TM; there are a large number of missing 3D surface regions on the highly reflective surfaces, such as surfaces ① and ② in Fig. 4(f). Figure 4(g) shows the 3D surface reconstructed using the AFPM; the missing regions on surfaces ① and ② are reduced. Figure 4(h) shows the 3D surface reconstructed using the PM. Compared with surface ② in Fig. 4(g), the 3D reconstruction of surface ② in Fig. 4(h) is better; however, the reconstruction of surface ① in Fig. 4(h) is slightly worse than that in Fig. 4(g). Figure 4(i) shows the 3D surface reconstructed using the MEAM (24 exposures); the reconstructed shapes of surfaces ① and ② in Fig. 4(i) are the best of the four methods.


    Figure 4. 3D shape reconstruction comparisons of a metal product. (a) An original fringe with frequency 15/912; (b) π-phase-shifted fringe with frequency 15/912; (c) adaptive projection fringe with frequency 15/912; (d) fringe with exposure time 15 ms and frequency 180/912; (e) optimal fringe with frequency 180/912; (f) 3D measurement using the TM; (g) 3D measurement using the AFPM; (h) 3D measurement using the PM; (i) 3D measurement using the MEAM (24 exposures).

    The comparisons in Fig. 4 confirm that the MEAM (24 exposures) has the best 3D measurement result. However, the large number of fringes, the uncertain exposure time range, the difficulty of quantifying the number of exposures, and the complex operation limit the application of the MEAM. To quantify the measurement accuracy of the other three methods, we take the phase extracted by the MEAM (24 exposures) as the standard value and employ the sum of the phase error (SPE) to evaluate measurement accuracy. The SPE is defined as

    SPE = Σ(y=1..w) Σ(x=1..h) |φ24-exposure(x,y) − φ(x,y)|,     (16)

    where w and h are the width and height of the captured fringe, respectively, and φ24-exposure is the unwrapped phase extracted by the MEAM.
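    A one-function sketch of the SPE metric (the absolute value reflects our reading of Eq. (16)) is:

```python
import numpy as np

def spe(phi_reference, phi):
    """Sum of the phase error against the 24-exposure reference phase, Eq. (16)."""
    return float(np.sum(np.abs(phi_reference - phi)))
```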

    SPE comparisons of the metal product are shown in Table 1. The SPE of the PM is 81.4% lower than that of the TM, and 61.5% lower than that of the AFPM. To compare the measurement accuracy more comprehensively, we also introduced the number of phase errors (NPEs) at different phase error thresholds to evaluate the measurement accuracy, as shown in Fig. 5. It can be seen from the comparisons in Table 1 and Fig. 5 that the 3D surface reconstructed by the TM is the worst. The 3D surface reconstructed by the PM is better than that using the AFPM.


    Figure 5. NPE comparisons.

    • Table 1. Precision Comparison of a Metal Product

      Algorithm         TM        AFPM      PM
      SPE (10^5 rad)    4.7452    2.2906    0.8820

    To further confirm the above results, we performed 3D reconstruction on a wireless mouse with high reflectivity; the results are shown in Fig. 6. Figure 6(a) shows the 3D surface reconstructed using the TM; there are a large number of missing 3D surface regions on surface ①. Figure 6(b) shows the 3D surface reconstructed using the AFPM; the missing regions on surface ① are greatly reduced. Figure 6(c) shows the 3D surface reconstructed using the PM; the missing regions on surface ① are fewer than those in Fig. 6(b). Figure 6(d) shows the 3D surface reconstructed using the MEAM (24 exposures). The 3D reconstruction of surface ① based on the MEAM (24 exposures) is the best; however, its measurement efficiency is low.


    Figure 6. 3D shape reconstruction comparisons of a wireless mouse. (a) 3D measurement using the TM; (b) 3D measurement using the AFPM; (c) 3D measurement using the PM; (d) 3D measurement using the MEAM (24 exposures).

    SPE comparisons of a wireless mouse are shown in Table 2. The SPE of the PM is 76.7% lower than that of the TM, and 29.7% lower than that of the AFPM.

    • Table 2. Precision Comparison of a Wireless Mouse

      Algorithm         TM        AFPM      PM
      SPE (10^5 rad)    4.8704    1.6152    1.1347

    Table 3 shows a comparison of the calculation time of each algorithm. Generating the adaptive fringes for the AFPM is time-consuming, but its phase-calculation algorithm is the same as that of the TM, so the calculation time of the TM and the AFPM is the same, 0.93 s. The time cost of the MEAM is 33.82 s, and the time cost of the PM is 1.95 s. However, the overall measurement efficiency also depends on the number of fringes and the manual adjustment of the equipment, and the manual adjustments of the AFPM and MEAM are complicated.

    • Table 3. Comparison of the Calculation Time of Each Algorithm

      Algorithm        TM      AFPM    MEAM     PM
      Time cost (s)    0.93    0.93    32.82    1.95

    The comparison of the number of fringes is shown in Table 4. For the TM based on the 2fH + a + 2fM + 2fL algorithm, the number of fringes is seven. The number of fringes for the PM is 2 × 7 = 14; the number for the AFPM is at least 3 × 7 + m > 21 (pixel mapping based on the horizontal and vertical unwrapped phases requires 2 × 7 = 14 fringes, adaptive projection requires 7 fringes, and projection intensity estimation requires m fringes, with m = 18 in this work); and the number for the MEAM (24 exposures) is 24 × 7 = 168. Compared with the AFPM and MEAM, the PM reduces the number of fringes by 64.1% and 91.7%, respectively.

    • Table 4. Comparison of the Number of Fringes

      Algorithm            TM    AFPM    MEAM    PM
      Number of fringes    7     39      168     14

    Phase errors caused by intensity saturation can also be corrected effectively with the standard N-step phase-shifting algorithm as long as the number of phase-shift steps is large enough. To compare the efficiency and accuracy of 3D reconstruction, we take a high-precision standard part machined by the CNC machine as the measured object and compare the 3D reconstruction efficiency and accuracy of the TM, the PM, and the combination of the eight-step phase-shifting method with the three-frequency temporal phase unwrapping algorithm (3F-8S). Table 5 compares the number of fringes of the TM, PM, and 3F-8S, and Table 6 compares their total time costs. The calculation time of the 3F-8S is 1.82 s. Since the frame rate of the measurement system is 10 Hz, the 3F-8S takes 0.1 × 10 = 1 s longer than the PM to capture the fringe patterns.

    • Table 5. Comparison of the Number of Fringes of TM, PM, and 3F-8S

      Algorithm            TM    PM    3F-8S
      Number of fringes    7     14    24
    • Table 6. Comparison of the Time Cost of TM, PM, and 3F-8S

      Algorithm              TM                   PM                   3F-8S
      Time cost (s)          0.93                 1.95                 1.82
      Total time cost (s)    0.93 + 0.7 = 1.63    1.95 + 1.4 = 3.35    1.82 + 2.4 = 4.22

    Figures 7(a), 7(b), and 7(c) show a high-frequency, a mid-frequency, and a low-frequency captured fringe pattern, respectively. Surface ① of the fringe pattern contains intensity saturation. Figures 7(d), 7(e), and 7(f) show the extracted unwrapped phases based on the TM, PM, and 3F-8S. For local comparison, we extract a row of cross sections of the unwrapped phases; the results are shown in Fig. 7(g). It can be seen from the unwrapped phase comparison that the unwrapped phases extracted based on the PM and 3F-8S are similar, and much better than that of the TM.


    Figure 7. Unwrapped phase comparison of the metal part. (a) High-frequency captured fringe pattern; (b) mid-frequency captured fringe pattern; (c) low-frequency captured fringe pattern; (d) unwrapped phase using the TM; (e) unwrapped phase using the PM; (f) unwrapped phase using the 3F-8S; (g) cross-sectional comparison of the unwrapped phase (line 80).

    Figure 8(a) shows a high-precision standard part processed by the CNC, whose computer-aided design (CAD) model has an ideal thickness of 8 mm. After CNC machining, we actually measured the thickness to be 8.01 mm, as shown in Figs. 8(b) and 8(c).


    Figure 8. Actual measurement of the metal part. (a) High-precision standard part; (b) actual measurement of the metal part thickness; (c) actual measured value.

    We take the actual measured height (thickness) of 8.01 mm as the standard height and use the TM, PM, and 3F-8S to obtain the 3D point clouds, as shown in Fig. 9. The height errors are shown in Table 7. On the non-intensity-saturated surface, the height errors at three positions using the TM are 0.036, 0.058, and 0.069 mm, with a maximum height error of 0.069 mm and an average height error of 0.054 mm. The height errors at the same three positions using the PM are 0.035, 0.006, and 0.06 mm, with a maximum of 0.06 mm and an average of 0.034 mm. The height errors at the same three positions using the 3F-8S are 0.03, 0.021, and 0.055 mm, with a maximum of 0.055 mm and an average of 0.035 mm.

    • Table 7. Precision Comparison of TM, PM, and 3F-8S

      Algorithm                                                            TM        PM        3F-8S
      Maximum height error on the non-intensity-saturated surface (mm)    0.069     0.06      0.055
      Average height error on the non-intensity-saturated surface (mm)    0.054     0.034     0.035
      RMSE (mm)                                                            2.2971    0.3056    0.2814


    Figure 9. 3D point clouds of the metal part using the TM, PM, and 3F-8S. (a) 3D point cloud using the TM; (b) 3D point cloud using the PM; (c) 3D point cloud using the 3F-8S.

    However, comparing only the height error on the unsaturated surface is incomplete, because all three methods reconstruct the 3D shape of the unsaturated surface well. The 3D point cloud obtained using the TM has holes in the intensity-saturated surface, while the 3D point clouds obtained using the PM and 3F-8S are complete there. For a comprehensive comparison, we take the 3D point cloud obtained using the 24-exposure method as the standard value and use the root mean squared error (RMSE) for comparison; the results are shown in Table 7, and a sketch of the computation is given below. The RMSE using the PM is 86.7% lower than that using the TM, and the RMSE using the 3F-8S is 87.7% lower than that using the TM. The RMSE using the PM is thus similar to that using the 3F-8S, while the number of fringe patterns is reduced by 41.7%.
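    The RMSE against the 24-exposure point cloud can be computed as in the sketch below (our illustration; it assumes the clouds are represented as per-pixel depth maps that are already aligned):

```python
import numpy as np

def rmse(z_reference, z, valid=None):
    """Root mean squared error of a depth map against the 24-exposure reference."""
    diff = z_reference - z
    if valid is not None:           # optional mask excluding holes / invalid pixels
        diff = diff[valid]
    return float(np.sqrt(np.mean(diff ** 2)))
```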

    4. Discussion

    For the MEAM, satisfactory 3D measurement results can be obtained as long as the exposure range and the number of exposures are large enough. However, for different objects with highly reflective properties, the exposure range is difficult to predict, the exposure times cannot be quantified, and the large number of fringes greatly decreases measurement efficiency. Therefore, the application of the MEAM is limited.

    For the AFPM, images must be pre-captured with the camera to estimate the projection intensity, but this estimation is limited in two aspects. One is the pixel mapping error between the camera and the projector: since the reflection intensity of the measured surface is estimated in camera coordinates and transformed into projector coordinates, a pixel mapping between the camera and projector must be established. The other is the dynamic range of the camera and the projector: for example, the gray-scale range of an 8-bit image is 0-255, but due to the high reflection of the surface, the pixel intensity exceeds this range, resulting in an inaccurate projection intensity estimate. Therefore, the AFPM inevitably has certain limitations, and its number of fringes is also much higher than that of the PM.

    The PM only needs to project one additional set of π-phase-shifted fringe sequences, does not require complex hardware adjustment, and its algorithm is simple. The 3D reconstruction error of the PM is also much smaller than that of the AFPM.

    5. Conclusions

    For 3D measurement of a non-diffuse surface based on the 2fH + a + 2fM + 2fL algorithm, we propose a method (PM) that combines the original fringes and the π-phase-shifted fringes. Saturated pixels are replaced by unsaturated pixels from the π-phase-shifted 2fH + a/2 + 2fM + 2fL fringes. We give eight replacement cases and the corresponding phase calculation methods in detail. Experiments verify the effectiveness and robustness of the PM. Since the PM requires neither numerous and difficult-to-quantify exposure adjustments nor pre-estimation of the adaptive projection intensity and camera-projector pixel mapping, it is simple, efficient, and easy to implement.

    Paper Information

    Category: Instrumentation, Measurement, and Optical Sensing

    Received: Mar. 16, 2023

    Accepted: May 18, 2023

    Published Online: Sep. 20, 2023

    The Author Email: Jianhua Wang (wangjianhua@qut.edu.cn)

    DOI:10.3788/COL202321.101202
