Range-gated imaging has the advantages of long imaging distance, high signal-to-noise ratio, and good environmental adaptability. However, conventional range-gated imaging uses single-pulse laser illumination, which can resolve only a single depth interval in one shot. Three-dimensional (3D) imaging therefore has to be obtained from multiple shots, which limits its real-time performance. Here, a range-gated imaging approach using a specific double-pulse sequence is proposed to overcome this limitation. With the help of a calibrated double-pulse range-intensity profile, the depth of static targets can be calculated from the measurement of a single shot. Moreover, the double-pulse approach is beneficial for real-time depth estimation of dynamic targets. Experimental results indicate that, compared to the conventional approach, the depth of field and depth resolution are increased by 1.36 and 2.20 times, respectively. It is believed that the proposed double-pulse approach provides a potential new paradigm for range-gated 3D imaging.
Range-gated imaging (RGI), as an active imaging technique[1], combines pulsed laser illumination with an intensified complementary metal oxide semiconductor (ICMOS) camera to selectively receive and amplify echo signals from a specific depth of field (DOF). This working principle gives RGI the advantages of long imaging distance and high contrast[2]. In recent years, RGI[3] has developed rapidly and has been widely applied in many fields, such as target detection, space detection, and underwater imaging.
According to its working principle, the RGI system inherently has the capability of three-dimensional (3D) imaging. The early delay slicing method[4] enabled RGI to achieve extremely high resolution but required a large number of images and large storage space. The super-resolution method[5], coded method[6], and multi-wavelength method[7], supported by high-performance equipment or complex system frameworks, can achieve high-resolution imaging over a wide range. Among them, the range-intensity profile (RIP) method, as a type of super-resolution method, has attracted widespread attention from researchers. This method requires capturing at least two images that partially overlap in the time domain and performs super-resolution 3D reconstruction using the convolution property between the pulse shape of the laser and the gate curve of the ICMOS camera. Additionally, the gain-modulated method[8] was proposed to minimize the dependence on the pulse shape. The negative impact of noise[9] can be mitigated using the triangular-RIP spatial-correlation method[10]. However, all the existing methods require multiple captures of images for 3D reconstruction, which limits their application in dynamic scenes.
In this paper, a range-gated 3D imaging method using double-pulse laser illumination and a single frame of an image is proposed. The basic idea is that double pulses of a certain energy ratio and time delay will carry more depth information when they are reflected by the targets and received during the gate-opening period. With the help of the double-pulse range-intensity profile (DP-RIP), the depth of targets can be estimated. Furthermore, the gray normalization process is proposed to deal with the scenes with different surface reflectivities. The proposed single-shot double-pulse method has enlarged imaging DOF and enhanced depth resolution, and it has good real-time performance. Experiments of performance evaluation and real-time 3D imaging of moving targets demonstrate the validity of this method.
2. Principle
A conventional single-pulse range-gated imaging (SP-RGI) system is mainly divided into three parts: the emitting unit, the receiving unit, and the synchronization control unit[11], as shown in Fig. 1. The emitting unit consists of a laser source and a beam expansion device, which provides a laser pulse for active illumination. The receiving unit is composed of an ICMOS camera with gating capability and a telescopic lens. The intensifier of the ICMOS camera provides a narrow exposure time controlled by the gate width and high signal amplification for the received light. The control unit sets the system delay and gate width for the ICMOS camera so that the echo pulse signal within the DOF will be collected by the camera, as marked by the dark green areas. Outside the DOF, the ambient light and back-scattered light are greatly attenuated because the gate is closed.
Figure 1.Schematic diagram of a conventional range-gated imaging system. BE represents beam expansion. TL represents telescopic lens. The imaging distance is $ct_0/2$ and the imaging DOF is $ct_g/2$, where $c$ is the speed of light.
The RIP is usually applied to explain the principle of range-gated 3D imaging[5]. It is defined as the convolution of the laser pulse and the sensor gate in the time domain, which is formed by receiving the echo energy of different depths within the DOF. The received energy value can be written as $E(\tau)=\rho\,R(\tau)$ with $R(\tau)=[p*g](\tau)$, in which $p(t)$ and $g(t)$ are the pulse function and gate function, respectively. The symbol $*$ denotes the convolution, $\rho$ is the surface reflectivity, and $R(\tau)$ is the RIP, shown as the SP-RIP curve in Fig. 2. For the convenience of analyzing the depth resolution, the horizontal axis of the RIP curve is converted from the delay $\tau$ to the range $z$; their relationship is $z=c\tau/2$.
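To make the RIP definition concrete, the following minimal sketch builds an SP-RIP by convolving a pulse shape with a rectangular gate on a discrete time grid. The Gaussian pulse shape, the 5 ns FWHM, and the time grid are illustrative assumptions rather than the measured system response; only the 15 ns gate width is taken from Sec. 3.

```python
# Minimal sketch: SP-RIP E(tau) = rho * [p * g](tau) on a discrete time grid.
# Gaussian pulse shape, 5 ns FWHM, and the grid are assumptions for illustration.
import numpy as np

c = 3e8                                    # speed of light, m/s
dt = 0.01e-9                               # 10 ps time step
t = np.arange(0.0, 60e-9, dt)              # 60 ns time axis

t_p = 5e-9                                 # assumed pulse FWHM
p = np.exp(-4 * np.log(2) * ((t - 10e-9) / t_p) ** 2)   # pulse function p(t)

t_g = 15e-9                                # gate width used in Sec. 3
g = ((t >= 0) & (t < t_g)).astype(float)   # rectangular gate function g(t)

rho = 1.0                                  # relative surface reflectivity
rip = rho * np.convolve(p, g) * dt         # E(tau): the SP-RIP versus delay

tau = np.arange(rip.size) * dt             # delay axis of the convolution
z = c * tau / 2                            # convert delay to range, z = c*tau/2
print(f"SP-RIP peak at relative range z = {z[np.argmax(rip)]:.2f} m")
```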
For the proposed DP-RGI, the framework[12] is almost the same as that of SP-RGI; the only change is in the emitting unit. Our system needs to emit two pulses with a known energy ratio and time span during one gating period. The DP-RIP function $R_{\rm DP}(\tau)$ can be denoted as
$R_{\rm DP}(\tau)=\{[k_1 p(t)+k_2 p(t-\Delta t)]*g(t)\}(\tau)$.
The factor $k_i$ represents the $i$th pulse energy, whose energy ratio is known ($k_1:k_2=2:1$ in our implementation). The time span between the two pulses is $\Delta t$, and the gate width is set to $t_g$. From the DP-RIP curve in Fig. 2, we can distinguish three different depth values from the measurement of a single shot. The DOF and depth resolution can be described by
$\mathrm{DOF}_{\rm DP}=c(t_g+t_p+\Delta t)/2$ and $\delta z_{\rm DP}=\mathrm{DOF}_{\rm DP}/3=c(t_g+t_p+\Delta t)/6$,
where $t_p$ is the full width at half-maximum (FWHM) of the laser pulse.
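The same kind of numerical sketch extends directly to the double-pulse case: two weighted, delayed copies of the pulse are convolved with the gate, producing the three plateau levels of the DP-RIP, and the DOF/resolution expressions reconstructed above are evaluated. The numerical values of t_p and dt_span below are assumptions chosen only to make the plateaus visible; they are not the calibrated system parameters.

```python
# Sketch of the DP-RIP: two pulses with energy ratio k1:k2 = 2:1 and span
# dt_span, convolved with the same rectangular gate. The three plateau levels
# correspond to the three distinguishable depth sub-intervals. t_p and dt_span
# are assumed values, not the calibrated system parameters.
import numpy as np

c = 3e8
step = 0.01e-9
t = np.arange(0.0, 80e-9, step)

t_p = 5e-9                      # assumed pulse FWHM
t_g = 15e-9                     # gate width used in Sec. 3
dt_span = 7e-9                  # assumed time span between the two pulses
k1, k2 = 2.0, 1.0               # pulse energy ratio 2:1

p1 = np.exp(-4 * np.log(2) * ((t - 10e-9) / t_p) ** 2)
p2 = np.exp(-4 * np.log(2) * ((t - 10e-9 - dt_span) / t_p) ** 2)
gate = ((t >= 0) & (t < t_g)).astype(float)

dp_rip = np.convolve(k1 * p1 + k2 * p2, gate) * step   # DP-RIP versus delay

# DOF and depth-resolution expressions as reconstructed in Sec. 2:
dof = c * (t_g + t_p + dt_span) / 2          # DOF of DP-RGI
resolution = dof / 3                          # three sub-intervals per DOF
print(f"theoretical DOF = {dof:.2f} m, depth resolution = {resolution:.2f} m")
```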
However, the above discussion ignores the influence of surface reflectivity. In practice, the energy profile is determined by the surface reflectivity and the RIP simultaneously, as described by
$E_{\rm DP}(x,y,\tau)=\rho(x,y)\,R_{\rm DP}(\tau)$,
where $(x,y)$ indexes the pixel.
The consequence is that different depths in different areas may produce the same energy value. As shown in Fig. 3(a), different pixels may have different DP-RIP curves. When the depth of pixel A is $z_A$ and that of pixel B is $z_B$ (with $z_A\neq z_B$), the two pixels can still return the same echo intensity. This destroys the one-to-one mapping between intensity and depth, so the gray normalization is proposed to avoid this confusion.
Figure 3.The RIPs of different pixels. (a) The influence of surface reflectivity. (b) After the gray normalization.
The gray normalization uses the echo intensity of the single-pulse mode. In this mode, the pulse energy is $k_1$, the same as that of the first pulse in the double-pulse mode. The system delay time remains unchanged, but the gated width is enlarged so that the single-pulse reference RIP covers the whole DP-RGI DOF (27 ns in our experiment; see Sec. 3.2). The ratio of $E_{\rm DP}$ to the single-pulse reference $E_{\rm SP}$ is independent of $\rho$. The normalized DP-RIP curves are displayed in Fig. 3(b). In this case, the same intensity thresholds can be used to estimate the depth distribution for all the pixels.
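In the notation assumed above (with $g_{\rm ref}$ denoting the enlarged reference gate), the reason one threshold set works for all pixels can be written out explicitly: the reflectivity cancels in the normalized ratio,
$$
R(\tau)=\frac{E_{\mathrm{DP}}(\tau)}{E_{\mathrm{SP}}(\tau)}
=\frac{\rho(x,y)\,\{[k_{1}p(t)+k_{2}p(t-\Delta t)]*g(t)\}(\tau)}
       {\rho(x,y)\,\{[k_{1}p(t)]*g_{\mathrm{ref}}(t)\}(\tau)},
$$
so $R(\tau)$ depends only on the pulse and gate shapes, and a single set of intensity thresholds on $R$ maps every pixel to one of the three depth sub-intervals.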
3. Experiment
3.1. Experimental setup
A schematic diagram of the proposed system is shown in Fig. 4. The pulse laser has a wavelength of 532 nm, a pulse duration of , a pulse energy of 1 μJ, and a repetition rate of 45 kHz. The image sensor is an ICMOS camera with II generation intensifier, which has a minimum gate width , a maximum frame rate of 10 frame/s, an intrinsic trigger delay , an intensity depth of 8 bits, and an image resolution of pixel. The half-wave plate (HWP) and polarizing beam splitter (PBS) are used together to adjust the splitting ratio. By rotating the HWP, the splitting ratio of two illuminating beams is set to 2:1 (the direct path without a mirror has lower energy). The time delay of the longer path with a mirror is . To suppress speckle noise and achieve a uniform lateral energy distribution of the beam, both paths pass through the HE consisting of a micro-lens array[13] and lenses. The electrical signal converted from the trigger beam is used as an external trigger of DG645 to generate the desired signal for the ICMOS. The DG645 has a time jitter of 25 ps and an intrinsic trigger delay . By setting the appropriate system delay and the gate width , the image of the target can be captured. To capture the scene with the suitable field of view, the ICMOS camera is equipped with a telescopic lens (TL) with a focal length of 400 mm. A band-pass filter (BF) of 532 nm is employed to block the ambient light.
Figure 4.Schematic diagram of the DP-RGI system. HWP is a half-wave plate. PBS denotes the polarizing beam splitter. APD is an avalanche photo-diode. HE represents the beam homogenization and expansion device. MR is a mirror. BF is a band-pass filter. TL is a telescopic lens. DG645 is a digital signal generator.
In our experiment, the imaging distance is set by the system delay $t_0$ after deducting the extra time delay introduced by the connecting wires. Moreover, the ICMOS camera works in the accumulation mode[14], which means that one exposure period of the camera accumulates echo pulses from multiple gate-opening periods. The exposure period of the camera is 80 ms, so approximately 3.6 k echo pulses (45 kHz × 80 ms) are accumulated into one frame of the image, which effectively improves the signal-to-noise ratio.
3.2. Experimental results and discussion
In our work, we mainly carry out the following experiments. Experiment I analyzes the characteristics of DP-RGI. Experiment II shows the depth estimation of static objects and analyzes the influence of reflectivity. Experiment III demonstrates the depth calculation of dynamic targets. In all the experiments, the system delay and gate width are set to 138 and 15 ns, respectively.
Experiment I: Characteristics of DP-RGI
This section verifies the appearance of three sub-intervals and introduces the imaging characteristics of DP-RGI. The DP-RIP of a pixel can be obtained[15] by fixing the object distance and continuously changing the system delay.
In the experiment, a plate with the diffuse surface, as shown in Fig. 5(a), is placed at a distance of 25 m from the system, and images are captured by increasing the delay time with steps of 0.02 ns. A total of 150 frames are collected. The DP-RIP of a pixel is drawn by extracting the gray value of the same position in frame sequences, as shown in Fig. 5(b).
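As a concrete illustration of this calibration procedure, the sketch below extracts a per-pixel DP-RIP curve from a delay-scanned frame stack. The 150 frames and the 0.02 ns delay step follow the text; the image size, the random stand-in data, and the smoothing kernel are placeholders, not the actual data format.

```python
# Sketch: extracting the DP-RIP of one pixel from a delay-scanned frame stack.
# The 150 frames and 0.02 ns step follow the text; the image size, the random
# stand-in data, and the smoothing are placeholders for illustration only.
import numpy as np

n_frames, step_ns = 150, 0.02
frames = np.random.randint(0, 256, size=(n_frames, 480, 640), dtype=np.uint8)  # stand-in stack

delays = np.arange(n_frames) * step_ns            # relative system delay, ns
row, col = 240, 320                               # pixel of interest
rip_curve = frames[:, row, col].astype(float)     # gray value vs. delay = DP-RIP of that pixel

# Light smoothing before locating the flat (plateau) sections of the curve.
kernel = np.ones(5) / 5
rip_smooth = np.convolve(rip_curve, kernel, mode="same")
print(f"plateau candidate at delay {delays[np.argmax(rip_smooth)]:.2f} ns")
```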
Figure 5.Imaging target and experimental data. (a) The diffuse plate. (b) The DP-RIP of one pixel.
In the DP-RIP from Fig. 5(b), there are three sections of flat curves with gray values of approximately 70, 100, and 30, which are located at the ranges of [21.00, 21.87], [22.02, 23.43], and [23.58, 24.45] m, respectively. The intensity ratio of the 1st to 3rd flat curves is 70:30, which is approximately the same as the outgoing energy ratio 2:1 of the two pulses. Moreover, the DOF is no longer $c(t_g+t_p)/2$ [16] ($t_p$ is the full width at half-maximum of the laser pulse), but $c(t_g+t_p+\Delta t)/2$ in the DP-RGI.
The rising and falling edges of the DP-RIP are not steep enough, owing to physical constraints including the laser pulse width and the system response time. As a result, the DP-RGI has some unreliable evaluation intervals, such as the sub-intervals (1), (3), (5), and (7) in Fig. 5(b). When a moving object happens to lie within these ranges, its depth may not be calculated correctly.
To better demonstrate the imaging characteristics of DP-RGI, Fig. 6 shows the imaging results compared with those of SP-RGI. All the images are captured with the same gate width. It can be observed that the depth information of the three cardboard boxes cannot be distinguished simultaneously in conventional SP-RGI, as shown in Figs. 6(b) and 6(c). Worse still, some objects are not captured at all due to the limited DOF, such as the black area within the pink rectangle. The curves in Figs. 6(e) and 6(f) further show that the depths of the targets cannot be distinguished owing to their almost equal echo intensities.
Figure 6.The results of different imaging modes. The objects are three cardboard boxes. (a)–(c) are captured by the ICMOS camera: (a) in the DP-RGI mode and (b), (c) in the SP-RGI mode with the same parameters but different pulse emitting times. The color in the pink rectangle is the pseudo-color of depth. The corresponding depths of red, green, and blue are approximately 21.2, 22.7, and 24.2 m, respectively. The black area in the pink rectangle is out of the DOF. The curves in (d)–(f) are the distributions of gray values corresponding to the white lines in (a)–(c), respectively.
In contrast, Fig. 6(a) indicates that all three boxes are visible because the DOF is enlarged in the DP-RGI. Additionally, the difference of echo intensity for different ranges is significant, as shown in Fig. 6(d). As a result, we can estimate the depth distribution of objects in one shot.
These results prove that, compared with SP-RGI, DP-RGI has a larger DOF and improved depth resolution under the same parameters. Table 1 presents a quantitative comparison of the imaging DOF and depth resolution.
Table 1. Performance Comparison of the SP-RGI and DP-RGI
Property    DOF (m)        Resolution (m)
SP-RGI      2.91 (3.00)    2.91 (3.00)
DP-RGI      3.95 (4.11)    1.32 (1.37)
The values outside the parentheses are measured in the experiment, while the values in parentheses are the theoretical values calculated from the DOF and depth-resolution expressions in Sec. 2. The measured values show good agreement with the theoretical ones. Compared to SP-RGI, the DOF of DP-RGI is enlarged by 1.36 times (3.95/2.91) and the depth resolution is improved by 2.20 times (2.91/1.32).
Experiment II: 3D imaging of static objects with different reflectivities
This section presents the depth estimation of static objects, focusing on analyzing and solving the influence of different reflectivities with the gray-normalization process. The experimental results are shown in Fig. 7. Different from the scene in Fig. 6, we replace the left box with a white plate, which has a higher surface reflectivity than the cardboard box. In this case, the echo intensities of the left and middle areas are approximately equal in the DP-RGI mode, as shown in Fig. 7(a). Combined with Curve 1 in Fig. 7(e), it can be concluded that the intensities of these two areas cannot be separated, no matter how the threshold value is set. As a result, the depth of the left area is wrongly judged, as displayed in Fig. 7(b).
Figure 7.Depth estimation of static objects with different reflectivities. (a) is taken by the ICMOS camera in the DP-RGI mode. (c) is the normalized image. (b) and (d) are the corresponding depth distributions. Curve 1 and Curve 2 are plotted in (e).
As described in Sec. 2, the method of gray normalization is proposed to eliminate the negative effect of different reflectivities. It takes an image in the SP-RGI mode as the reference image, and the original image is normalized by dividing it by this reference image pixel by pixel. The pulse energy used here is the same as the energy of the first pulse in the double-pulse mode. In the experiment, we simply keep the first pulse and block the second pulse. The system delay time remains 138 ns, but the gated width is set to 27 ns. Figure 7(c) is the result after gray normalization. From Curve 2 in Fig. 7(e), it is obvious that the intensities of these two areas can now be well distinguished. Therefore, the depth distribution can be correctly estimated, as indicated in Fig. 7(d).
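A minimal sketch of this per-pixel normalization and threshold-based depth assignment is given below. The background and plateau thresholds are illustrative assumptions that would in practice be calibrated from a DP-RIP such as Fig. 5(b); the three depth labels are the approximate sub-interval depths quoted for Fig. 6.

```python
# Sketch: gray normalization and threshold-based depth assignment.
# img_dp is a frame taken in DP-RGI mode; img_ref is the single-pulse reference
# (first pulse only, enlarged gate). The thresholds are assumed values that
# would be calibrated from the normalized DP-RIP; the depth labels are the
# approximate sub-interval depths of Fig. 6.
import numpy as np

def estimate_depth(img_dp, img_ref, thresholds=(0.4, 0.85),
                   depths=(24.2, 21.2, 22.7), bg_thresh=10):
    ref = np.clip(img_ref.astype(float), 1.0, None)     # avoid division by zero
    ratio = img_dp.astype(float) / ref                   # reflectivity cancels out
    depth = np.full(img_dp.shape, np.nan)                # NaN = background / out of DOF
    signal = img_dp > bg_thresh                          # crude background rejection
    low, high = thresholds
    depth[signal & (ratio < low)] = depths[0]                     # weakest plateau: 2nd pulse only (far)
    depth[signal & (ratio >= low) & (ratio < high)] = depths[1]   # middle plateau: 1st pulse only (near)
    depth[signal & (ratio >= high)] = depths[2]                   # strongest plateau: both pulses (middle)
    return depth

# Usage with stand-in images of the same size:
img_dp = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
img_ref = np.random.randint(1, 256, (480, 640), dtype=np.uint8)
depth_map = estimate_depth(img_dp, img_ref)
```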
Experiment III: Real-time 3D imaging of dynamic objects
This part shows the real-time 3D imaging of a moving target. A remote-controlled car carrying a cardboard box serves as the dynamic target. In the experiment, the camera is set to the continuous acquisition mode at a frame rate of 10 frame/s. When the box moves forward through the DOF, the images captured in the DP-RGI mode are processed in real time with the proposed DP-RIP algorithm. The image resolution is reduced to improve the processing efficiency. It should be noted that this experiment mainly demonstrates the capability of real-time 3D imaging; the gray normalization process is not used here because it requires another shot in a different mode, which cannot be acquired in real time. Some experimental results are shown in Fig. 8.
Figure 8.The depth estimation of the forward-moving target. (a)–(g) are original images captured in DP-RGI mode, which correspond to sub-intervals (1)–(7) in Fig. 5(b). (h)–(n) and (o)–(u) are the depth distributions without and with post-processing.
Figures 8(a)–8(g) are the original images captured in the DP-RGI mode, which correspond to sub-intervals (1)–(7) of the DP-RIP in Fig. 5(b). Generally, the depth corresponding to the even sub-intervals can be correctly retrieved. However, there may be incorrect depth estimations for the odd sub-intervals, such as in Figs. 8(h) and 8(l). The reasons are analyzed in Experiment I. Reducing the pulse width and the system response time would help to mitigate this problem.
Because of the uneven illumination within the field of view and the different surface reflectivities of the car and the box, the depth estimation is noisy, as shown in Figs. 8(h)–8(n). To solve this problem, we first binarize the original images to separate the target from the dark background. Then, the correct depth information, which is determined by the common illumination area of the two light paths, is used to fill the target area. The depth distributions after post-processing are shown in Figs. 8(o)–8(u).
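A sketch of this post-processing step is shown below: the raw frame is binarized to segment the target from the dark background, and the segmented region is then filled with a single representative depth taken from pixels inside the common illumination area of the two beam paths. The binarization threshold and the way common_mask is supplied are assumptions for illustration.

```python
# Sketch of the post-processing used for Figs. 8(o)-(u): binarize the raw
# DP-RGI frame to separate the target from the dark background, then fill the
# target region with the depth estimated inside the common illumination area
# of the two beam paths. bin_thresh and common_mask are assumptions.
import numpy as np

def postprocess(img_dp, depth_map, common_mask, bin_thresh=20):
    target = img_dp > bin_thresh                       # binarization: target vs. background
    reliable = target & common_mask & ~np.isnan(depth_map)
    if not reliable.any():
        return depth_map                               # nothing reliable to propagate
    fill_depth = np.median(depth_map[reliable])        # representative depth of the target
    cleaned = np.full(depth_map.shape, np.nan)
    cleaned[target] = fill_depth                       # fill the whole target area uniformly
    return cleaned
```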
To further demonstrate the universality of the proposed method, a person walking forward and backward irregularly serves as another dynamic target. The walking direction can be distinguished from the output depth. Some experimental results are shown in Fig. 9.
Figure 9.The depth estimation of a person walking forward and backward. (a)–(h) are the original images, whose depth estimations correspond to (i)–(p). fps represents frame/s.
RGI is a promising 3D imaging approach. However, existing methods always require multiple images for 3D reconstruction, which limits their applications in dynamic scenes. We note that the 3D information in RGI essentially comes from the time relationship between the emitted laser pulse and the gated receiver, i.e., the ICMOS camera. Existing methods obtain more 3D information by increasing the number of shots with the camera. In contrast, the proposed method obtains more 3D information by increasing the number of laser pulses within a single shot, which makes it especially suitable for dynamic targets. More specifically, a new 3D imaging method called DP-RGI is proposed, which estimates the depth distribution based on the DP-RIP. Theoretical analysis and experimental verification are presented, indicating that the DOF and depth resolution of DP-RGI are increased by 1.36 and 2.20 times compared with SP-RGI under the same parameters, and that real-time 3D imaging can be realized for a moving target.
Any method that can generate an unambiguous RIP in a single shot can distinguish different depth values in a single shot. Thus, both the gain modulation method[8] and the proposed method have the capacity for single-shot 3D imaging. However, the ways in which they generate the RIP are quite different, which leads to different characteristics. The gain modulation method requires gain control of the hardware, while the proposed method depends only on the pulse delay and the energy ratio of the two pulses, which makes the required optical path easy to establish. The gain modulation method generates a continuous RIP, while the proposed DP-RIP is discrete. In general, depth estimation with a discrete RIP is more robust to noise and complex environments. The proposed method also has an extended DOF, which is beneficial for tracking dynamic targets.
From a single-shot image in the DP-RGI mode, we can only distinguish three different depth values. The way to further improve the depth resolution is to increase the number of emitted pulses: in theory, n-pulse RGI can distinguish 2n - 1 different depth values. In practice, the improvement of depth resolution is also limited by the system noise. Furthermore, 3D imaging of multiple targets with the proposed method remains challenging. Within the system framework, increasing the number of emitted pulses adds relatively little device complexity, which makes the proposed system conducive to practical application and potential commercialization.
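To illustrate the stated 2n - 1 scaling, the small sketch below counts the distinct intensity levels an n-pulse train can produce: as the target depth shifts, the gate captures a prefix or suffix of the pulse train, and with suitably chosen energy ratios (powers of two are assumed here purely for illustration) every such run sums to a different level.

```python
# Illustration of the 2n-1 scaling: as depth varies, the gate captures a prefix
# or suffix of the n emitted pulses; with assumed energy ratios 2^(n-1):...:2:1
# every captured run sums to a distinct level, i.e., 2n-1 depth sub-intervals.
def distinct_levels(n):
    weights = [2 ** (n - 1 - i) for i in range(n)]          # e.g. n = 3 -> [4, 2, 1]
    runs = {sum(weights[i:j])
            for i in range(n) for j in range(i + 1, n + 1)
            if i == 0 or j == n}                            # prefixes and suffixes only
    return len(runs)

for n in range(1, 6):
    print(n, distinct_levels(n), 2 * n - 1)                 # the two counts agree
```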