Photonics Research, Volume 13, Issue 8, 2224 (2025)

Fringe projection profilometry via LED array with pre-calibration

Jin Tan1,3, Bo Zhang1,4, Hong-Xu Huang1,5, Wei-Jie Deng2,6, and Ming-Jie Sun1,*
Author Affiliations
  • 1School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
  • 2Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
  • 3e-mail: 2317108@buaa.edu.cn
  • 4e-mail: zhangbo@buaa.edu.cn
  • 5e-mail: huanghongxu@buaa.edu.cn
  • 6e-mail: dengweijie@ciomp.ac.cn

    Fringe projection profilometry (FPP) determines height by analyzing distorted fringes and is widely used in high-accuracy 3D imaging. Currently, a major factor limiting imaging speed in FPP is the projection device: the capture speed of high-speed cameras far exceeds the projection frequency. Among various devices, an LED array can exceed the speed of a high-speed camera. However, non-sinusoidal fringe patterns in LED array systems can arise from several factors that reduce accuracy, such as the spacing between adjacent LEDs, the inconsistency in brightness across different LEDs, and the residual high-order harmonics in binary defocusing projection; these mixed errors are challenging to resolve with existing methods. In this paper, we propose a method that creates a look-up table from the system calibration data of the phase-height model and then uses the look-up table to compensate for the phase error during the reconstruction process. The proposed method rests on the time-invariance of the systematic errors: any factor that degrades the sinusoidal profile presents as an anomaly in the unwrapped phase. Experiments demonstrate that the root mean square errors (RMSEs) of the results yielded by the proposed method were reduced by over 90% compared with those of the traditional method, reaching 20 μm accuracy. This paper offers an alternative approach for high-speed and high-accuracy 3D imaging with an LED array and presents a workable solution for addressing the complex errors caused by non-sinusoidal fringes.

    1. INTRODUCTION

    Fringe projection profilometry (FPP) is a non-contact, high-precision, and cost-effective 3D imaging technique. It is utilized in various fields such as industrial inspection, medical imaging, cultural heritage preservation, and robot vision [1–5], making it one of the popular methods for high-accuracy 3D imaging [6–12]. FPP projects multiple sinusoidal patterns onto an object and captures images of these distorted fringes to retrieve the height of the object [13]. Usually, at least three sinusoidal patterns must be projected to retrieve the height; therefore, the imaging speed is limited by both the switch rate of the projector and the frame rate of the camera. The rate at which a projector can switch between sinusoidal patterns is limited to around 100 Hz [14–17], which prohibits the application of FPP to objects in high-speed motion. Furthermore, the chromatic light intensities of commercial projectors are calibrated with specific gamma distortions to meet the requirements of human vision, which compromises the accuracy of FPP [18–24].

    Zhang et al. proposed a method that turns binary patterns into pseudo-sinusoidal patterns through defocusing [25–28], which improves the switch rate of the projector by reducing the amount of information in the projected images and circumventing the non-linear gamma distortion. However, it still relies on a projector (e.g., 20 kHz for the DLP Discovery 4100), whose rate is far lower than the 10⁷ frames per second that high-speed cameras can capture [29], limiting the speed of FPP. Furthermore, binary-defocusing fringes carry higher-order harmonic errors that impact accuracy, and specific methods are required to address this issue [30–35]. To further enhance speed, high-speed switching light sources have been developed [36–40].

    With the advancement of the LED array, its switch rate now exceeds 10⁷ Hz [41–44], beyond that of high-speed cameras, and its resolution keeps rising. Binary defocusing only requires switching each light on or off to implement binary patterns, which provides an opportunity to use an LED array in 3D imaging. The LED array could become significant equipment in the future of high-speed 3D imaging. However, it faces numerous challenges, one of the most important being accuracy. The inconsistency in brightness across different LEDs, the spacing between adjacent LEDs, and the high-order harmonics in defocusing all degrade the sinusoidal fringes, which impacts accuracy. These combined effects lead to complicated errors that are seldom discussed in current research.

    This paper introduces a method that uses a look-up table (LUT) to compensate for these complicated errors. Unlike existing LUT-based methods that address only one particular error, the proposed method handles the mixed systematic errors, provided they are time-invariant. The method involves constructing the LUT from the system calibration data of the phase-height model and applying a specific procedure to use it. The proposed method does not distinguish the effects of the different error sources but converts them all into phase anomalies, as long as these errors are time-invariant. The phase anomalies are recorded in the LUT, which is then applied in the reconstruction process to compensate for the time-invariant errors effectively. Experimental results demonstrate the feasibility of the proposed method. This work is foundational towards realizing high-speed and high-accuracy 3D imaging with an LED array in the future.

    2. PRINCIPLE

    A. Phase-Shifting Algorithm

    Obtaining the phase from fringes is an essential step in FPP. The phase-shifting algorithm is often applied in FPP because of its high speed, accuracy, resolution, and resistance to environmental influences [11]. For an N-step phase-shifting algorithm with equal phase shifts, the nth projected fringe pattern can be described as

    $$I_n^P(x,y) = A(x,y) + B(x,y)\cos\!\left(\varphi(x,y) - \frac{2n\pi}{N}\right),$$

    where $\varphi(x,y)$ is the phase, $A(x,y)$ is the background intensity, $B(x,y)$ is the modulation, $n = 0, 1, 2, \ldots, N-1$, and N is the step number of the phase-shifting algorithm. When there are at least three images ($N \ge 3$), the phase can be obtained by

    $$\varphi(x,y) = \arctan\frac{\sum_{n=0}^{N-1} I_n^k(x,y)\sin(2n\pi/N)}{\sum_{n=0}^{N-1} I_n^k(x,y)\cos(2n\pi/N)},$$

    where $I_n^k$ is the intensity captured by the camera.
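
    For concreteness, the following is a minimal Python sketch of Eq. (2); the function name and the (N, H, W) image-stack layout are assumptions for illustration, not from the paper.

```python
import numpy as np

def wrapped_phase(images: np.ndarray) -> np.ndarray:
    """Wrapped phase from N equally phase-shifted fringe images, per Eq. (2).

    `images` is a hypothetical (N, H, W) stack of captured intensities I_n^k.
    """
    N = images.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)              # shift index per image
    num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    # arctan2 resolves the quadrant, wrapping the phase into (-pi, pi]
    return np.arctan2(num, den)
```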

    After obtaining the wrapped phase, the next step is to unwrap it. We expect the projected fringes to be close to sinusoidal, which implies that the unwrapped phase changes regularly and smoothly. However, several factors can distort the sinusoidal pattern during measurements in the LED array system, including higher-order harmonics due to defocusing, uneven brightness among the LEDs, and gaps between adjacent LEDs. These issues lead to phase anomalies that require compensation.

    B. Look-Up Table Generation from the Calibration Data

    System calibration is the process of converting the unwrapped phase into height. Inaccuracies in the unwrapped phase can often lead to errors in height calculations. While research on system calibration typically focuses on accurately translating phase data into height, it has not fully explored the potential for phase compensation through data obtained during the calibration process. This paper presents a method for phase compensation based on these observations.

    In an LED array defocusing system, phase errors primarily arise from three factors: residual high-order harmonics in defocusing, the spacing between adjacent LEDs, and inconsistencies in brightness across different LEDs. In a fixed system, these systematic factors create phase anomalies that remain time-invariant. The resulting error condition is complex, making it difficult to distinguish the various contributions. An LUT enables phase compensation without requiring a detailed analysis of this mixture of effects; the phase anomalies are recorded during the system calibration process.

    The phase-height model for system calibration is utilized in this process. It involves placing a flat plate at various known heights, projecting fringes onto the plate, capturing images with a camera, and calculating the unwrapped phase. This establishes the relationship between height and unwrapped phase. The plane of the plates is aligned with the plane of the LED array. A 3D table of size m×n×h is generated during this process, where m and n correspond to the number of pixels in the camera's rows and columns, and h represents the different plate positions used for the measurements. The values within this table are the unwrapped phases.
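
    The construction of this table can be sketched as follows; `capture_plane` and `unwrap` are hypothetical stand-ins for the acquisition and phase-unwrapping routines, which the paper does not spell out.

```python
import numpy as np

def build_real_phase_table(capture_plane, unwrap, heights):
    """Stack unwrapped phase maps of the flat plate into an m x n x h table.

    `capture_plane(z)` returns the fringe images of the plate at height z;
    `unwrap(imgs)` turns them into an (m, n) unwrapped phase map.
    """
    # table[u, v, i] = real unwrapped phase of camera pixel (u, v) when the
    # plate sits at the i-th calibrated height
    return np.stack([unwrap(capture_plane(z)) for z in heights], axis=2)
```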

    Image projection can be viewed as the inverse of camera capture. In this framework, the projector is conceptualized as a reverse camera, generally modeled through the pinhole perspective, which assumes the absence of lens distortions and follows perspective transformations. The projector acts as a phase generator. When standard sine fringes are projected, straight line segments within the image coordinate system experience linear changes in phase, including segments that are not perpendicular to the fringes:

    $$\Phi_{\text{ideal}}(x_{\text{image}}) = k \times x_{\text{image}} + b,$$

    where $\Phi_{\text{ideal}}$ is the ideal unwrapped phase of the standard sine fringes, k is related to the width of the fringes and the selection of the line segment, characterizing the rate of phase change, b is the phase value at the starting point of the line segment, and $x_{\text{image}}$ denotes the distance in the image coordinate system from a location on the line to the starting point.

    Through perspective transformation, a line segment in the image coordinate system remains a line segment when projected onto a plane in the world coordinate system. Both lines are contained within the same plane, and their corresponding points are denoted as P and P′, as shown in Fig. 1. The variable $x_{\text{world}}$ represents the length in the world plane corresponding to $x_{\text{image}}$.


    Figure 1.The relationship between corresponding points on two lines under perspective transformation.

    The point P is represented in the O coordinate system as

    $$P^T = R \cdot (x_{\text{image}}, 0)^T + T = (x_1, y_1)^T.$$

    The rotation matrix

    $$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

    represents the counterclockwise angle by which the coordinate system O is rotated to become O′. The translation matrix $T = (p_1, p_2)^T$ denotes the horizontal and vertical translations of the origin of O′ expressed in the original coordinate system. Given that the points P, P′$(x_{\text{world}}, 0)$, and the fixed point $S(x_0, y_0)$ all lie on the same straight line, we can establish the following relationship:

    $$\begin{vmatrix} x_{\text{world}} & 0 & 1 \\ x_1 & y_1 & 1 \\ x_0 & y_0 & 1 \end{vmatrix} = 0.$$

    Combining Eqs. (4) and (5), we obtain the relationship between $x_{\text{image}}$ and $x_{\text{world}}$:

    $$x_{\text{world}} = \frac{A \times x_{\text{image}} + 1}{B \times x_{\text{image}} + C},$$

    where A, B, and C are related to R, T, and the coordinates of S; once the system is determined, they are fixed. The ideal unwrapped phase for the corresponding line segment in the plane of the world coordinate system thus satisfies

    $$\Phi_{\text{ideal}}(x_{\text{world}}) = \frac{A_I \times x_{\text{world}} + 1}{B_I \times x_{\text{world}} + C_I}.$$
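
    Expanding the determinant in Eq. (5) makes the fractional-linear form of Eq. (6) explicit. The sketch below works through Eqs. (4) and (5) for hypothetical system parameters θ, (p1, p2), and S = (x0, y0).

```python
import numpy as np

def x_world_from_x_image(x_image, theta, p1, p2, x0, y0):
    """Worked instance of Eqs. (4)-(6) with hypothetical parameters."""
    # Eq. (4): express the image-line point in the O coordinate system.
    x1 = np.cos(theta) * x_image + p1
    y1 = np.sin(theta) * x_image + p2
    # Eq. (5) expands to x_world*(y1 - y0) + (x1*y0 - x0*y1) = 0, so:
    x_world = (x0 * y1 - x1 * y0) / (y1 - y0)
    # x1 and y1 are linear in x_image, so x_world is a ratio of two linear
    # functions of x_image -- exactly the form of Eq. (6).
    return x_world
```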

    The method uses least squares fitting to acquire the parameters. Here, $A_I$, $B_I$, and $C_I$ are linked to the system parameters; they are fixed when the system is fixed. A line in the plane of the world coordinate system transforms into a line in the camera's image coordinate system and then into a line in the pixel coordinate system. It follows that the ideal unwrapped phase as a function of the pixel column coordinate still satisfies

    $$\Phi_{\text{ideal}}(v) = \frac{A_\Pi \times v + 1}{B_\Pi \times v + C_\Pi},$$

    where $A_\Pi$, $B_\Pi$, and $C_\Pi$ are parameters related to the system, and they are fixed; v represents the column coordinate of the camera pixel. As shown in Fig. 2, in the system calibration we obtain the real unwrapped phase of the plane. Using Eq. (8) and least squares fitting to determine $A_\Pi$, $B_\Pi$, and $C_\Pi$, we get the ideal unwrapped phase for one row. Applying the same process to all pixel rows yields the corresponding parameters $A_\Pi^u$, $B_\Pi^u$, and $C_\Pi^u$ for each row u. From these we calculate the averages $A_\Pi^{\text{mean}}$, $B_\Pi^{\text{mean}}$, and $C_\Pi^{\text{mean}}$, which are then used to determine the ideal unwrapped phase for this position plane. This procedure is repeated for the other position planes, leading to an ideal unwrapped phase table. The ideal unwrapped phase table and the real unwrapped phase table have a one-to-one correspondence; subtracting them yields the corresponding phase error table. The real unwrapped phase table and the phase error table together form the look-up table:

    $$\text{LUT}: \{\, err \mid u_\Phi, v_\Phi, \Phi \,\}.$$
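
    One way to realize this fit, sketched here under the assumption of ordinary least squares (the paper does not specify the solver): rearranging Eq. (8) as A·v − B·(Φv) − C·Φ = −1 gives a system that is linear in (A, B, C).

```python
import numpy as np

def fit_row(v, phi):
    """Least squares fit of Eq. (8) for one pixel row.

    Eq. (8) rearranges to A*v - B*(phi*v) - C*phi = -1, linear in (A, B, C).
    """
    M = np.column_stack([v, -phi * v, -phi])
    (A, B, C), *_ = np.linalg.lstsq(M, -np.ones_like(v), rcond=None)
    return A, B, C

def ideal_phase_plane(real_phase):
    """Ideal unwrapped phase map of one position plane (the step in Fig. 2)."""
    m, n = real_phase.shape
    v = np.arange(n, dtype=float)
    # fit every pixel row, then average the parameters across the rows
    A, B, C = np.mean([fit_row(v, real_phase[u]) for u in range(m)], axis=0)
    return np.tile((A * v + 1.0) / (B * v + C), (m, 1))
```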


    Figure 2.Get the ideal unwrapped phase map from the real unwrapped phase map in the calibration data.

    To further enrich the tables, we employ linear connections between adjacent discrete points, which allows every phase value to have a corresponding position within the LUT. The more information the LUT contains, the more detailed the recorded phase error and the better the correction effect. The optimal spacing between calibration positions depends on the required accuracy, since the phase errors vary with height. As long as the system is unchanged, the LUT maintains its correction performance.
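
    A sketch of such a linear connection along the column direction, where step 2 of the compensation below produces non-integer pixel locations; the function name and table layout are illustrative.

```python
import numpy as np

def err_lookup(err_table, u, v_frac, i):
    """Phase error at a non-integer column v_frac on plate i, by linearly
    connecting the two adjacent discrete entries of the error table."""
    v0 = int(np.clip(np.floor(v_frac), 0, err_table.shape[1] - 2))
    t = v_frac - v0                        # fractional part within the cell
    return (1 - t) * err_table[u, v0, i] + t * err_table[u, v0 + 1, i]
```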

    C. Phase Compensation Based on Look-Up Table

    The information contained in the LUT describes how the phase errors vary with height. The LUT is applied to the real unwrapped phase of the object pixel by pixel, compensating the phase error of each pixel according to its $(u_\Phi, v_\Phi, \Phi)$. The compensation involves three steps. As shown in Fig. 3, the first two steps find the approximate location of the object point in the real unwrapped phase table; the third step uses the information obtained in the previous two steps to look up and calculate the phase correction amount in the phase error table.


    Figure 3.The three steps of the proposed method to use the LUT.

    The first step is finding the positions of the two calibration plates that bracket the object point. A camera pixel $(u_\Phi, v_\Phi)$ defines a camera ray that intersects the calibration plates at different heights, corresponding to different locations within the fringe. Consequently, this pixel exhibits distinct phase values at different heights, and the phase value at $(u_\Phi, v_\Phi)$ in the real unwrapped phase table varies monotonically with height, as illustrated in Fig. 3. By pinpointing where the phase value Φ falls between the phase values of two calibration plates, we can ascertain their positions [expressed as the ith and (i+1)th positions].

    The second step is finding the pixel locations corresponding to the phase value Φ on the ith and (i+1)th calibration plates and deriving the compensation weights. The projector is considered a phase generator, where every phase value can be regarded as a ray from the projector. In the LED array system, however, the phase value that a particular ray represents varies with height, so this assumption does not hold for all heights. Within a small height range, such as between the ith and (i+1)th calibration plates, the phase value corresponding to that ray changes very little and can be considered almost constant. The phase value Φ corresponds to the projection ray that strikes different locations on the ith and (i+1)th calibration plates; these locations are captured by the camera at different pixel positions. As shown in Fig. 3, we find the pixel location of the phase value Φ within the $u_\Phi$th row of pixels on these two calibration plates, denoted $v_i$ and $v_{i+1}$, respectively. They satisfy $v_i \le v_\Phi \le v_{i+1}$ or $v_{i+1} \le v_\Phi \le v_i$, depending on whether the projector is on the right or left side of the camera. $(u_\Phi, v_i)$ and $(u_\Phi, v_{i+1})$ are taken as the pixel coordinates of the positions where the projection ray represented by the phase value Φ hits the two calibration plates. Note that $v_i$ and $v_{i+1}$ are not integers but have decimal parts. The differences of $v_i$ and $v_{i+1}$ from $v_\Phi$ roughly indicate the distances between the object point and the two calibration plates. Following the principle that the weight increases linearly as the object point approaches a particular calibration plate and that the total weight is one, we set the weights as follows: the weight for the ith calibration plate is $Q_i = |v_\Phi - v_{i+1}| / |v_{i+1} - v_i|$, and that for the (i+1)th calibration plate is $Q_{i+1} = |v_\Phi - v_i| / |v_{i+1} - v_i|$.

    The third step uses the information obtained in the first two steps to look up and calculate the phase correction amount in the phase error table. As shown in Fig. 3, at positions $(u_\Phi, v_i, i)$ and $(u_\Phi, v_{i+1}, i+1)$ of the phase error table, we obtain the phase correction amounts for the ith and (i+1)th calibration plates, denoted $err_i$ and $err_{i+1}$, respectively. Finally, we calculate the phase correction value for the object point as $err_i \times Q_i + err_{i+1} \times Q_{i+1}$ and apply it to complete the phase correction process.
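
    Putting the three steps together, a per-pixel sketch could read as follows. It assumes the phase at a fixed pixel ascends with plate index and ascends along each row (reverse the relevant axes for the opposite geometry), and it adopts the sign convention err = ideal − real so the correction is added; both are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def compensate(u, v, phi, phase_table, err_table):
    """Three-step LUT compensation for one object pixel (u_phi, v_phi, phi)."""
    # Step 1: plates i and i+1 whose phases at (u, v) bracket phi.
    i = int(np.clip(np.searchsorted(phase_table[u, v, :], phi) - 1,
                    0, phase_table.shape[2] - 2))

    # Step 2: non-integer columns where plates i and i+1 show phase phi,
    # then the linear weights Q_i and Q_{i+1} (they sum to one).
    cols = np.arange(phase_table.shape[1], dtype=float)
    v_i = np.interp(phi, phase_table[u, :, i], cols)
    v_i1 = np.interp(phi, phase_table[u, :, i + 1], cols)
    Q_i = abs(v - v_i1) / abs(v_i1 - v_i)
    Q_i1 = abs(v - v_i) / abs(v_i1 - v_i)

    # Step 3: errors at (u, v_i, i) and (u, v_{i+1}, i+1), blended by weight.
    err_i = np.interp(v_i, cols, err_table[u, :, i])
    err_i1 = np.interp(v_i1, cols, err_table[u, :, i + 1])
    return phi + (Q_i * err_i + Q_i1 * err_i1)   # assumed err = ideal - real
```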

    For the phase-height system calibration, the ideal unwrapped phase table is more suitable than the error-laden real unwrapped phase table; therefore, in this approach, the ideal unwrapped phase table serves as the system calibration data. The traditional method and the proposed method are illustrated in Fig. 4. Here, the traditional method is defined as a height measurement that assumes the fringes projected on the object are strictly sinusoidal, so the height of the object is retrieved without any phase compensation.


    Figure 4.The flowchart of the traditional method and the proposed method.

    3. EXPERIMENT

    To verify the proposed method's feasibility, we constructed a 3D imaging system using an LED array with a refresh rate of 10 MHz as the light source. The projection component consists of a 128×128 LED array and an 85 mm focal length lens with an F/1.4 aperture. The LED array was developed and manufactured by our laboratory [41–44]. Each LED measures 0.6 mm × 0.3 mm, the spatial duty cycle is 23.5%, and the central wavelength of the light source is 630 nm. The camera features a resolution of 1200×1920 pixels and a 16 mm focal length lens. The calibration plate is aligned with the LED array for a larger measurement range and is positioned as shown in Fig. 5. In the experiment, multi-wavelength phase unwrapping is used; the binary fringes with four-step phase shifting are utilized exclusively for the phase unwrapping process, without causing non-2π phase changes to the phases obtained from three-step phase shifting. The LED array sends a synchronous signal to the camera to trigger image capture. The system's measurement range is 100 mm × 100 mm × 25 mm. Within this range, plaster plates are placed and moved at intervals of 0.5 mm, in a sequence either from high to low or vice versa.


    Figure 5.The diagram of the experimental setup.

    In principle, the smaller the interval, the higher the accuracy of the measuring result. But a smaller interval also means more calibration work and a larger LUT, and the accuracy plateaus once the interval reaches a certain level. Therefore, a reasonable interval needs to be chosen to generate the LUT. After an error calibration analysis at different defocusing positions (from 5 mm to 30 mm of defocus), we found that the phase errors change linearly once the interval reaches 0.5–1 mm (the exact value varies slightly with the defocusing position). Therefore, we chose a 0.5 mm interval for the following experiments.

    The experiment was conducted using two approaches to validate the effect of the proposed method. (1) The platform moves the plane to positions of 4.80 mm, 11.20 mm, and 21.80 mm for 3D imaging by the traditional method and the proposed method, as shown in Fig. 6. The RMSE is calculated using the height value at each position as the ground truth for the plane. After applying the proposed method, the corresponding RMSEs are 0.0249 mm, 0.0162 mm, and 0.0101 mm, representing error reductions of 91%, 94%, and 93%, respectively. In Fig. 6(a), the system errors vary with height because the amount of defocus changes; in Fig. 6(b), the imaging accuracy of the proposed method improves as the height increases, which is caused by the rising signal-to-noise ratio. (2) Steps are placed on a special platform to ensure that the step planes are parallel to the calibration plane and their positions are known. Sixteen points are marked on each step plane, and multiple measurements with an electronic caliper provide the height values of these points, which serve as the ground truth. The traditional method and the proposed method are used to image the steps, as shown in Fig. 7. The heights of the marked points are extracted and plotted as curves, and the corresponding RMSEs are calculated. The wavy non-sinusoidal errors are largely resolved.


    Figure 6.Ensure the measured plane is parallel to the calibration plane, moving it to the positions of 4.80 mm, 11.20 mm, and 21.80 mm, employing the traditional method and proposed method for 3D imaging, and calculating the RMSE. (a) The measurements of the traditional method and their RMSEs. (b) The measurements of the proposed method and their RMSEs.


    Figure 7.Place the marked steps on a special platform, and apply the traditional method and proposed method for 3D imaging. Extract the marker points from surfaces A and B, and plot the height curves of ground truth, traditional method, and proposed method. (a) 3D imaging of the steps using the traditional method and proposed method. (b) Plot the height curve for the points extracted on surface A, using traditional method and proposed method; their RMSEs are 0.155 mm and 0.015 mm, respectively. (c) The result of surface B via the same process, and their RMSEs are 0.301 mm and 0.019 mm, respectively.

    3D imaging was also conducted for an inclined plane, a small ball, and a statue. The imaging results of the traditional method and the proposed method are illustrated in Fig. 8 to validate the method's effect on complex objects. The wavy non-sinusoidal errors on the objects are largely resolved.


    Figure 8.3D imaging for objects with traditional method and proposed method. (a) The inclined plane. (b) The small ball. (c) The statue.

    To show the effect of noise on the proposed method, we added different degrees of Gaussian noise to the imaging data of a plane and plot the resulting relationship in Fig. 9. We used $10 \times \lg(\mu_{\text{signal}} / \sigma_{\text{noise}})$ to represent the signal-to-noise ratio (SNR), where σ is the standard deviation of the noise and μ is the mean of the signal. The proposed method retains its corrective ability at different SNRs.
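
    As a small illustration of this SNR definition (all variable names and values below are hypothetical):

```python
import numpy as np

def snr_db(signal_mean, noise_sigma):
    """SNR as defined above: 10 * lg(mu_signal / sigma_noise)."""
    return 10.0 * np.log10(signal_mean / noise_sigma)

# add Gaussian noise of standard deviation sigma to a synthetic plane image
rng = np.random.default_rng(0)
plane = np.full((1200, 1920), 128.0)      # hypothetical plane imaging data
sigma = 2.0                               # hypothetical noise level
noisy = plane + rng.normal(0.0, sigma, plane.shape)
print(f"SNR = {snr_db(plane.mean(), sigma):.1f} dB")
```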


    Figure 9.The relationship between the SNR and the accuracy (RMSE).

    4. DISCUSSION AND CONCLUSION

    To show the scope of application of the proposed method, we discuss two complex measurement situations: objects with specular or semi-translucent surfaces, and objects with deep discontinuities.

    Regarding objects with specular or semi-translucent surfaces, FPP is not designed to deal with them, because diffuse or quasi-diffuse reflection from the object is a presumption of FPP's principle. Therefore, the proposed method cannot compensate for objects with specular or semi-translucent surfaces.

    Deep discontinuities in object height are indeed tricky for FPP. We assume that there is no occlusion due to the deep discontinuities and that the measured object has strong diffuse reflection, which FPP can handle. Two cases arise under this assumption. Case 1: when the discontinuities are small enough to be unwrapped normally by multi-frequency phase unwrapping, our proposed method can compensate for the phase errors at these deep discontinuities according to the LUT. Case 2: when the discontinuities are so deep that they cause a phase change exceeding the recovery range of multi-frequency phase unwrapping, the phase unwrapping fails and our proposed method is of no help.

    In this work, the LED array is capable of switching at MHz rates, which matches the frame rate of cutting-edge high-speed cameras, making high-speed 3D imaging possible. To address the impacts of the spacing between adjacent LEDs, the inconsistency in brightness across different LEDs, and the residual high-order harmonics in binary defocusing projection, this paper proposes an LUT compensation method based on system calibration data. The method reduces system errors by more than 90%, reaching 20 μm accuracy. However, the proposed method is effective only for systematic errors that do not change over time; it requires recalibration when components of the system change. It is therefore ideally applied in industrial inspection scenarios where the devices remain static. This work offers an alternative approach for future advancements in high-speed, high-accuracy 3D imaging with binary defocusing projection systems based on LED arrays.

    Paper Information

    Category: Research Articles

    Received: Feb. 27, 2025

    Accepted: May 18, 2025

    Published Online: Jul. 25, 2025

    The Author Email: Ming-Jie Sun (mingjie.sun@buaa.edu.cn)

    DOI:10.1364/PRJ.560762

    CSTR:32188.14.PRJ.560762
