Acta Optica Sinica
Wangwei Shen, Jiaye Wang, Guoqiang Li, Sizhe Xing, An Yan, Zhongya Li, Jianyang Shi, Nan Chi, and Junwen Zhang

Objective
With the development of the new generation of mobile communication technology, there are growing demands on the bandwidth, speed, and latency of passive optical networks (PONs). Wavelength-division multiplexing PON (WDM-PON), which utilizes frequency resources for bandwidth allocation, can assign different channels to different optical network units (ONUs) simultaneously. This eliminates time-slot competition among ONUs, reduces system latency, and holds promise for addressing high-latency issues. An embedded communication channel called the auxiliary management and control channel (AMCC) has been proposed and successfully implemented in WDM-PON to enable the transmission of management and control information at a lower cost without altering the frame structure. In recent years, there has been increasing attention to frequency-division multiplexing coherent PON (FDM-CPON), which also supports bandwidth allocation in the frequency domain. To complete AMCC transmission in FDM-CPON, we put forward two simple and cost-effective mechanisms for transmitting the management and control signal in FDM-CPON, namely the addition and the multiplication of the AMCC and the data channel at the digital end. Meanwhile, we conduct a comparative analysis of the performance of these two transmission mechanisms in a 200 Gbit/s FDM-CPON system based on 16QAM transmission over 20-km fiber. The research results provide references for AMCC transmission and system design of high-speed FDM-CPON in the future.
Methods
To implement the two transmission mechanisms and conduct a comparative analysis of their performance in a 200 Gbit/s FDM-CPON system based on 16QAM transmission over a 20-km fiber, we generate 16QAM and on-off keying (OOK) signals at the digital end for the data channel and the AMCC, respectively. After mapping the low level in OOK to 1 and the high level to a real number greater than 1, the OOK signal can be up-sampled to the same length as the data channel signal. By multiplying the two signals sample by sample, the combination of multiplication-based AMCC and the data channel is achieved. For addition-based AMCC, the low level in OOK is mapped to 0, while the high level is mapped to a complex number with both real and imaginary parts greater than 0. This mapped signal is then added sample by sample to the data channel signal. After the combination of the AMCC and the data channel, the signal is received by an integrated coherent receiver (ICR) over a 20-km fiber. At the receiver, the amplitude of the received signal is extracted, and the amplitude variations of the signal are obtained by smoothing filtering. After energy detection and inverse mapping, the decoding of the OOK signal is completed. Simultaneously, the received signal undergoes classical coherent digital signal processing (DSP) for decoding. Additionally, we vary the modulation index (MI) and bandwidth of the AMCC at the transmitter to study the performance of the two transmission mechanisms under different conditions.
Results and Discussions
We test the sensitivity curves of data channel signals overlaid with both multiplication-based AMCC and addition-based AMCC under different MIs, as well as the Q curves of the OOK signals transmitted by the AMCC. Under the same received optical power (ROP) and MI, the influence of multiplication-based AMCC on the sensitivity of the data channel signal is smaller. Simultaneously, the Q value of OOK transmitted by multiplication-based AMCC is greater than that of addition-based AMCC.
We also experimentally verify the effect of the MI and bandwidth of the AMCC on the sensitivity of the data channel signal and the Q value of the AMCC. Under the same MI and bandwidth, the data channel signal combined with multiplication-based AMCC exhibits higher sensitivity and power budget than the data channel signal combined with addition-based AMCC. Meanwhile, a larger MI and bandwidth lead to a greater influence of the AMCC on the performance of the data channel signal. When the MI of the AMCC is set at 26.1% with a corresponding bandwidth of 24.4 MHz, the effect of multiplication-based AMCC on signal sensitivity is 3 dB lower than that of addition-based AMCC.
Conclusions
We verify and compare the effects of multiplication-based AMCC and addition-based AMCC on the performance of the data channel signal and the OOK signal transmitted by the AMCC in a high-speed FDM-CPON. Experimental results from a 200 Gbit/s FDM-CPON system based on 16QAM transmission over 20-km fiber indicate that multiplication-based AMCC has a smaller influence on the sensitivity and power budget of the data channel signal, with a higher Q value of the AMCC-transmitted signal. When the MI of the AMCC is set at 26.1% with a corresponding bandwidth of 24.4 MHz, the effect of multiplication-based AMCC on signal sensitivity is 3 dB lower than that of addition-based AMCC. Additionally, experiments are conducted to assess the effect of different MIs and bandwidths of the AMCC on the sensitivity of the data channel signal, with results consistent with the conclusions drawn from theoretical analysis. The results provide significant references for AMCC transmission and system design of high-speed FDM-CPON in the future.
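For readers who want to prototype the two combining schemes, the minimal Python sketch below imposes an up-sampled OOK envelope on a 16QAM data signal either multiplicatively or additively and recovers the AMCC from the smoothed signal amplitude. The signal lengths, mapping levels, and modulation index are illustrative assumptions, not the parameters of the reported experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 16QAM data-channel baseband (unit average power), 4096 samples.
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)
data = rng.choice(levels, 4096) + 1j * rng.choice(levels, 4096)

# AMCC OOK bits, up-sampled so each OOK bit spans many data samples.
ook_bits = rng.integers(0, 2, 16)
ook = np.repeat(ook_bits, 256)          # same length as the data signal

mi = 0.26                                # illustrative modulation index

# Multiplication-based AMCC: low level -> 1, high level -> 1 + mi,
# then a sample-by-sample product with the data channel.
tx_mult = data * (1.0 + mi * ook)

# Addition-based AMCC: low level -> 0, high level -> a complex offset,
# then a sample-by-sample sum with the data channel.
tx_add = data + mi * (1 + 1j) / np.sqrt(2) * ook

def decode_amcc(rx, sps=256):
    """Recover OOK bits from the smoothed amplitude of the received signal."""
    amp = np.abs(rx)
    smoothed = np.convolve(amp, np.ones(sps) / sps, mode="same")
    per_bit = smoothed.reshape(-1, sps).mean(axis=1)
    return (per_bit > per_bit.mean()).astype(int)

print(ook_bits)
print(decode_amcc(tx_mult))
print(decode_amcc(tx_add))
```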

Apr. 25, 2024
  • Vol. 44 Issue 8 0806001 (2024)
  • Peidong Hua, Zhenyang Ding, Kun Liu, Haohan Guo, Teng Zhang, Sheng Li, Ji Liu, Junfeng Jiang, and Tiegen Liu

    Objective
    Optical fiber refractive index (RI) sensors have attracted widespread attention from researchers in biochemical sensing and environmental monitoring due to their high precision, high sensitivity, resistance to electromagnetic interference, corrosion resistance, low cost, and easy preparation. The commonly employed optical fiber RI sensors currently include surface plasmon resonance, localized surface plasmon resonance, fiber Bragg gratings, long-period fiber gratings, fiber-optic whispering gallery modes, fiber Fabry-Perot sensors, photonic crystal fibers, D-type fibers, and tapered fibers. However, most fiber-optic RI sensors are currently single-point sensors and cannot achieve multi-point detection, let alone distributed sensing. Based on the detection of Rayleigh backscattering spectra (RBS) in optical fiber, optical frequency domain reflectometry (OFDR) features high measurement accuracy, high sensing spatial resolution, and long measurement distance, which makes it very suitable for distributed RI sensing. Distributed RI sensing can not only obtain the RI magnitude of a solution but also locally detect the diffusion process of the solution and map the distribution of fluids. None of these can be achieved with single-point sensors or even quasi-distributed sensors.
    Methods
    Traditional distributed RI sensing based on OFDR adopts a cross-correlation demodulation algorithm, which has good noise suppression and stability. However, it is difficult to achieve distributed RI measurements with micron-level spatial resolution in this way. Therefore, distributed RI sensing based on cross-correlation demodulation is not adequate for distributed biological analysis, drug design, and other fields. Unlike cross-correlation demodulation methods, OFDR based on differential relative phase demodulation realizes sensing through the relative phase change of the RBS. Since the differential phase demodulation method directly measures the relative phase change caused by external RI changes, it is more sensitive than traditional cross-correlation demodulation methods. Therefore, the differential relative phase demodulation method is expected to achieve distributed RI sensing with micron-level spatial resolution.
    Results and Discussions
    We first theoretically analyze the principle of differential relative phase demodulation and its RI sensitivity characteristics. To characterize the theoretical sensitivity of the differential phase demodulation method and compare it with experimental results, we simulate the relationship between phase variation and external RI change at a taper waist of 6 μm. The simulation results are shown in Fig. 1(a), and the slope of 1483.7 rad/RIU is the theoretical sensitivity. Meanwhile, in Eq. (11), Δf is related to the taper waist radius r. Therefore, the relationship between the theoretical sensitivity and the diameter of the taper waist can be simulated, with the results shown in Fig. 1(b). In the experiment, the phase variations along distance in the sensing area of the tapered fiber are compared when only average denoising and when wavelet denoising are adopted. This reveals that average denoising alone cannot achieve distributed RI sensing at the micron level. Meanwhile, with only wavelet denoising, the phase variations caused by the RI changes in the sensing region can be distinguished with a spatial resolution of 68 μm. However, due to the excessive phase noise in the subfigure of Fig. 5(b), there are still significant fluctuations in the demodulated signal of the sensing region. After average denoising (H=5) and wavelet denoising, the phase fluctuation noise can be well suppressed with a sensing spatial resolution of 340 μm. The phase variations along the fiber distance under different RIs can be clearly distinguished. The results are shown in Fig. 6(c). A linear fitting curve between the phase variations and the external RI change at the effective sensing region is shown in Fig. 6(d), with a linear fitting coefficient of 0.997. The maximum standard deviation at each RI is 0.0067 rad, and the smoothed measurement sensitivity is 1328.6 rad/RIU, which is close to the simulation results in Fig. 1(b). To compare the proposed differential phase demodulation method with the traditional cross-correlation demodulation method, we apply cross-correlation demodulation to the data in Fig. 6. The linear fitting of the proposed differential phase demodulation method is better than that of the cross-correlation algorithm. Meanwhile, the standard error of the smoothed differential phase demodulation method is lower than that of the cross-correlation demodulation algorithm. More importantly, compared with the cross-correlation demodulation method, the differential phase demodulation method improves the sensing spatial resolution by a factor of 10, reaching the level of hundreds of micrometers.
    Conclusions
    We present distributed RI sensing by tapered fiber based on differential relative phase OFDR. The principle of the proposed method is theoretically analyzed, and the sensitivity of the phase variations to external RI changes is simulated. In the experiment, we achieve distributed RI sensing with a spatial resolution of 340 μm after average denoising and wavelet smoothing. The effective sensing area is 45 mm. The linear fitting coefficient between phase variations and external RI change is 0.997, and the maximum standard deviation at each RI is 0.0067 rad. The experimental RI sensitivity is 1328.6 rad/RIU, close to the simulation result of 1483.7 rad/RIU. The linear fitting and standard deviation of the differential phase method are better than those of the cross-correlation algorithm. More importantly, the sensing spatial resolution is improved by a factor of 10. The proposed differential relative phase method based on OFDR provides a foundation for achieving micrometer-level distributed biosensing.
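As a rough illustration of the differential relative phase idea (not the authors' exact algorithm), the sketch below assumes two complex Rayleigh backscatter traces in the spatial domain, one from a reference sweep and one from a measurement sweep, and returns the differential phase over a chosen gauge length after a simple moving-average smoothing; the gauge length and smoothing window are illustrative parameters standing in for the paper's average/wavelet denoising.

```python
import numpy as np

def differential_relative_phase(ref_trace, meas_trace, gauge_pts, avg_pts=5):
    """Differential relative-phase demodulation sketch for OFDR RBS traces.

    ref_trace, meas_trace : complex backscatter traces (spatial domain)
    gauge_pts             : differential gauge length in samples
    avg_pts               : moving-average window for simple denoising
    """
    # Phase change of every scattering point relative to the reference sweep.
    rel_phase = np.unwrap(np.angle(meas_trace * np.conj(ref_trace)))
    # Differential phase over the gauge length; an external RI change around
    # the taper alters the local optical path length and hence this quantity.
    dphi = rel_phase[gauge_pts:] - rel_phase[:-gauge_pts]
    # Simple average denoising along the fiber.
    kernel = np.ones(avg_pts) / avg_pts
    return np.convolve(dphi, kernel, mode="same")
```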

    Apr. 25, 2024
  • Vol. 44 Issue 8 0806002 (2024)
  • Wanwan Kang, Zhihua Shao, Kuangyu Zhou, and Xueguang Qiao

    Objective
    Rocks have both mechanical and acoustic properties, and there exist inherent relations between them. The characteristics of ultrasonic waves (UWs) change when passing through rocks, and the UWs carry the structural information of the rocks. Thus, the interior properties of rocks can be obtained by analyzing the received UWs. Nowadays, the hydraulic properties of rocks have become a new focus in the engineering field. For example, in oil and gas exploration, the water content affects the density and strength of reservoir rocks. The analysis results of the reservoir structure are directly affected by the varied amplitude and velocity of the exploration waves. In rock engineering, such as solution mining, long-distance tunnels, and reservoir bank slopes, pore water affects the stability of rocks and even threatens the safety of engineering projects. Therefore, it is significant to study the ultrasonic propagation characteristics of rocks during water absorption and softening. At present, a common method to detect the water content is to employ a piezoelectric transducer (PZT). However, the PZT has some inherent drawbacks, such as large size, narrow bandwidth, and low resistance to electromagnetic disturbance, which decrease the detection resolution and introduce large deviations. Optical fiber sensors feature compact size, high sensitivity, broadband response, and strong resistance to electromagnetic interference. The optical fiber sensors most commonly employed in ultrasonic detection are the Fabry-Perot interferometer (FPI) and the fiber Bragg grating (FBG). FPI sensors usually suffer from low-reflectivity reflectors, and FBGs encounter difficulties when used with high-frequency UWs. Fortunately, the optical fiber FPI constructed with two FBGs combines the advantages of both the FPI and the FBG and becomes the preferred solution for ultrasonic detection of rock water content.
    Methods
    We propose a new ultrasonic method based on an FBG-FPI optical fiber sensor for water-content detection in rocks. In the experiments, red sandstone is employed as the detection object (a cylinder, 80 mm×100 mm). The 1 MHz longitudinal pulsed wave emitted by a PZT is adopted as the ultrasonic source. The transmitted UWs are detected by a pair of fiber gratings inscribed in a thin-core fiber (TCF). The UW velocity can be calculated by measuring the transmission distance and flight time inside the rock. The fast Fourier transform (FFT) is leveraged to convert the time-domain signals into frequency-domain ones. From the frequency-domain results, the main frequency and the normalized amplitude are extracted. By employing the fitted curve between the measured UW velocity and the rock water variation, the water content is reconstructed, and an average detection deviation is obtained simultaneously. Additionally, the results measured by the PZT are recorded for comparison under identical conditions.
    Results and Discussions
    The experimental results show that under longitudinal wave conditions, the wave velocity of the red sandstone first decreases and then increases with rising water content, while the main frequency and the corresponding amplitude both decrease with increasing water content. When the water content increases from 0 to 0.16%, the wave velocity measured by the optical fiber sensor (or PZT) decreases from 3440.86 m/s (or 3691.74 m/s) to 3389.83 m/s (or 3681.55 m/s). When the water content rises from 0.16% to 2.33%, the wave velocity measured by the optical fiber sensor (or PZT) increases from 3389.83 m/s (or 3681.55 m/s) to 4020.10 m/s (or 3980.10 m/s) (Fig. 5). When the water content increases from 0 to 2.33%, the main frequency measured by the optical fiber sensor (or PZT) decreases from 1.000 MHz (or 0.987 MHz) to 0.933 MHz (or 0.887 MHz), and the normalized amplitude reduces from 1.000 (or 1.000) to 0.058 (or 0.040) (Fig. 6). The optical fiber sensor and the PZT are found to exhibit a similar response tendency with changing water content. After water content reconstruction, the average absolute deviation between the optical fiber sensor (or PZT) measurement results and the actual values is approximately 0.055 (or 0.069) (Fig. 7). The deviation of the FBG-FPI optical fiber sensor is thus smaller, which proves the feasibility of optical fiber ultrasonic detection of rock water content.
    Conclusions
    A new optical fiber method is proposed for the ultrasonic detection of water content in rock mass. The time-domain and frequency-domain results are obtained using an FBG-FPI optical fiber sensor by the ultrasonic transmission method. In the comparative experiments, the FBG-FPI optical fiber sensor presents a response tendency similar to that of the PZT with increasing water content. Additionally, the FBG-FPI optical fiber sensor has a smaller detection deviation than the PZT. Furthermore, laser ultrasound can be employed as a broadband source to replace piezoelectric excitation, which helps to improve the detection resolution given the broadband response of optical fiber sensors.
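A minimal sketch of the signal analysis described in the Methods, assuming a sampled transmitted-UW waveform whose time origin is the excitation instant: the flight time is taken from the largest envelope peak (a crude first-arrival pick, not the authors' procedure), the velocity from the known propagation distance, and the main frequency and normalized amplitude from an FFT referenced to a user-supplied amplitude (e.g., the dry-rock measurement).

```python
import numpy as np

def analyze_transmitted_uw(signal, fs, distance_m, ref_amplitude=None):
    """Return (velocity, main_frequency, normalized_amplitude) for one
    received ultrasonic waveform sampled at fs (Hz) after travelling
    distance_m (m) through the rock."""
    # Flight time: here simply the envelope maximum (crude first-arrival pick).
    t_flight = np.argmax(np.abs(signal)) / fs
    velocity = distance_m / t_flight

    # FFT: main frequency and peak amplitude, normalized to a reference
    # measurement when one is supplied.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    main_freq = freqs[np.argmax(spectrum)]
    peak = spectrum.max()
    norm_amp = peak / ref_amplitude if ref_amplitude else peak
    return velocity, main_freq, norm_amp
```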

    Apr. 25, 2024
  • Vol. 44 Issue 8 0806003 (2024)
  • Pingping Wei, and Chao Han

    Objective
    The phase-only hologram (POH) is favored by many researchers in holographic display technology due to its high diffraction efficiency and the absence of a twin image. Common POH generation algorithms can be divided into iterative and non-iterative methods. Iterative methods require extensive iterative optimization to obtain the required POH, which is time-consuming. The error diffusion algorithm does not require iteration and greatly improves the computational speed of POH generation. In the traditional error diffusion method, the amplitude of all pixels of the complex amplitude hologram (CAH) is set to 1, and the error between this unit-amplitude hologram and the CAH is computed and diffused over the CAH to generate the POH. However, since different target images have various amplitude distributions, directly setting the CAH amplitude to 1 is not suitable for all images. As a result, the quality of the generated POH is not high, and the reconstructed image of the hologram cannot achieve a satisfactory display effect. A new error diffusion algorithm is therefore needed to improve the reconstructed image quality.
    Methods
    To improve the quality of the hologram reconstructed image generated by the error diffusion algorithm, we build a hologram error compensation model based on the bidirectional error diffusion algorithm by analyzing the relationship between the amplitude distribution of the target image and the generated hologram, and we propose a new POH generation method. Firstly, the CAH of the target image is computed and its amplitude is set to 1. Secondly, the error between the POH and the original CAH is calculated by the error compensation model. Thirdly, the new error is used to generate a new POH by bidirectional error diffusion. Finally, a new error between this new POH and the original CAH is computed, and a second error diffusion is carried out to obtain the final POH. Numerical simulations are conducted to compare the hologram reconstruction effect of the two methods. Additionally, the normalized correlation (NC) coefficient and the structural similarity index measure (SSIM) are employed to quantitatively compare and analyze the hologram reconstruction results. Meanwhile, the experimental schematic diagram is drawn and the optical imaging system is built, and the proposed method is verified by optical experiments.
    Results and Discussions
    Through numerical simulations and optical experiments, the quality of the hologram reconstructed images generated by the different error diffusion methods is evaluated. The simulation results of the two error diffusion methods are shown in Fig. 6. The images in the first column of Fig. 6 are reconstructed by the traditional method and contain obvious speckle noise. The images in the second and third columns of Fig. 6 are reconstructed from the holograms generated by the first improved error diffusion and the second error diffusion, respectively. Compared with the first column, the reconstructed images in the latter two columns are sharper. The detail regions of the images in the third column contain more information than those in the second column. For example, the detail of Fig. 6(c) shows more information on the pepper stalk than that of Fig. 6(b). Additionally, for the detail of the pirate image, the hair of the man in Fig. 6(l) is clearer than that in Fig. 6(k), and the lines of the hair are more distinct.
    The NC coefficient and the SSIM are adopted in Tables 2 and 3, respectively, to quantitatively evaluate the quality of the numerically simulated hologram reconstruction images. After the first error diffusion, the NC coefficient and the SSIM increase by 0.05-0.14 and 0.036-0.09 respectively. After the second error diffusion, the NC coefficient and the SSIM increase by 0.01-0.026 and 0.025-0.036 respectively. The simulation results reveal that the reconstructed image quality of the proposed method is better than that of the original method. The reconstructions of the proposed method are more similar to the original images and better meet the visual quality requirements of human eyes. The comparison results of the optical experiments on hologram reconstructed images by the traditional error diffusion method and the proposed error diffusion method are shown in Fig. 8. Figure 8 indicates that for different target images, the hologram reconstructed images of the proposed algorithm are displayed more clearly, while those of the traditional error diffusion algorithm are obviously noisy and blurred. In the detail comparison of the two methods, the sailboat and its reflection on the lake surface are clearly displayed in Fig. 8(e), whereas the sailboat reconstructed by the traditional method is blurred. The pattern on the long spike behind the man's hat in Fig. 8(g) is clear, while the pattern on the long spike in Fig. 8(h) is not clearly seen. The optical experiment results are consistent with those of the simulations. The simulation and experimental results show that the proposed error diffusion algorithm is effective in improving the quality of hologram reconstructed images, verifying the feasibility and superiority of the proposed method.
    Conclusions
    A bidirectional error diffusion compensation model is built by calculating the new error between the CAH and the POH. The hologram reconstructed images generated by the model contain more object light-wave information. Additionally, error diffusion is performed twice to further improve the holographic display quality. Simulation results show that the reconstructed images generated by the improved method have higher resolution and more detailed information. The NC coefficient and SSIM serve as quantitative evaluation criteria for the simulation results. In Tables 2 and 3, the mean NC and SSIM values of the proposed method are 0.9743 and 0.8630 respectively, which are 0.0927 and 0.0848 higher than those of the traditional error diffusion method. The optical experiment results show that the reconstructed images generated by the improved algorithm have higher image quality and resolution in detail. The simulation and experimental results prove the effectiveness and feasibility of the improved algorithm, which has practical significance for computational holographic display.
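To make the error diffusion idea concrete, here is a minimal serpentine (bidirectional) error diffusion pass that converts a complex amplitude hologram into a phase-only hologram using Floyd-Steinberg weights. It omits the paper's error compensation model and the second diffusion pass, so it only sketches the underlying mechanism; in the proposed method such a pass would be run twice, with the error recomputed against the original CAH in between.

```python
import numpy as np

def bidirectional_error_diffusion(cah):
    """Serpentine (bidirectional) error diffusion of a complex amplitude
    hologram `cah` into a phase-only hologram with Floyd-Steinberg weights."""
    h = np.array(cah, dtype=complex)
    rows, cols = h.shape
    for y in range(rows):
        forward = (y % 2 == 0)                       # alternate scan direction per row
        xs = range(cols) if forward else range(cols - 1, -1, -1)
        step = 1 if forward else -1
        for x in xs:
            quant = np.exp(1j * np.angle(h[y, x]))   # keep phase, force amplitude 1
            err = h[y, x] - quant                    # complex quantization error
            h[y, x] = quant
            if 0 <= x + step < cols:
                h[y, x + step] += err * 7 / 16
            if y + 1 < rows:
                if 0 <= x - step < cols:
                    h[y + 1, x - step] += err * 3 / 16
                h[y + 1, x] += err * 5 / 16
                if 0 <= x + step < cols:
                    h[y + 1, x + step] += err * 1 / 16
    return np.angle(h)                               # phase-only hologram
```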

    Apr. 10, 2024
  • Vol. 44 Issue 8 0809001 (2024)
  • Ting Luo, Xing Zhao, Yunsong Zhao, and Tao Li

    Objective
    X-ray computed tomography (CT) imaging technology, with its nondestructive testing capability, has been widely used in industry, medicine, and other fields. When X-ray CT imaging is performed on samples containing high-absorption materials such as metals, the reconstructed images often contain metal artifacts due to beam hardening, scattering, and other factors, which severely degrade the quality of CT imaging. More recently, dual/multi-spectral CT has been proposed as an effective means of reducing beam-hardening and metal artifacts. However, it needs multiple scans of the object or specialized multi-spectral CT equipment. In this paper, we study a multi-material decomposition reconstruction technique for conventionally scanned CT data to reduce beam-hardening and metal artifacts.
    Methods
    The problem of multi-material decomposition reconstruction in traditional single-spectral CT is inherently highly underdetermined, leading to non-unique solutions. To obtain physically meaningful true solutions, it is necessary to incorporate additional constraints. In one type of scenario, the constituent materials of the scanned object are known and immiscible. The reconstructed image vectors are orthogonal if these materials are selected as the basis materials for multi-material decomposition reconstruction. Based on this finding, an orthogonal multi-material decomposition reconstruction technique (OMDRT) combined with the X-ray energy spectrum is proposed. In the proposed OMDRT method, the basis materials are ordered by decreasing attenuation coefficient. Taking triple-material decomposition reconstruction as an example, the proposed OMDRT method includes the following steps: 1) triple-material decomposition reconstruction; 2) generation of the first material's mark images from the reconstructed images; 3) triple-material decomposition reconstruction with the first material's mark images; 4) generation of the first and second materials' mark images from the reconstructed images; 5) triple-material decomposition reconstruction with the first and second materials' mark images. Steps 4) and 5) are performed iteratively. In steps 3) and 5), the weights for the decomposition reconstruction of the basis materials from the projection data are adjusted based on the material regions marked in the mark images.
    Results and Discussions
    The numerical phantom used in the simulation is shown in Fig. 2(c). It includes three materials: water, bone (simulating the teeth), and AgHg (simulating the dental filling), with standard densities of 1 g/cm³, 1.92 g/cm³, and 12 g/cm³, respectively. If the mass attenuation coefficients of these three materials are used as basis functions, the density of the material region in the corresponding image is the standard density. We select AgHg as the first basis material, bone as the second basis material, and water as the third basis material. Using the simulated projections of the phantom without and with noise, density images are reconstructed with the proposed OMDRT. From the last rows in Fig. 4 and Fig. 11, we can see that the three materials are largely separated after three iterations, and the metal artifacts are effectively corrected. Figures 8 and 13 show that there are no obvious artifacts in either the density images or the virtual monochromatic image. To quantitatively analyze the image quality, we calculate the peak signal-to-noise ratio (PSNR) and the normalized mean absolute deviation (NMAD) between the resulting virtual monochromatic images and the actual virtual monochromatic images. From Fig. 7 and Fig. 12, we can observe that the proposed OMDRT method converges within several iterations. In summary, the experimental results show that the proposed method performs well in reducing metal artifacts.
    Conclusions
    For metal artifact correction in CT images of scanned objects composed of known and immiscible materials, we propose an iterative OMDRT for traditional CT. The proposed method chooses the known materials as the basis materials and adjusts the weights for the decomposition reconstruction of the basis materials based on their regional location. We choose a dental phantom with dental fillings to verify the validity of the proposed method. The basis materials are separated correctly with our method for both simulated noise-free data and data with Poisson noise. In addition, artifacts caused by metal implants in both the triple-basis density images and the virtual monochromatic images synthesized from them are reduced effectively. Moreover, the proposed method converges within a small number of iterations, facilitating its widespread practical application. We thus verify the multi-material decomposition reconstruction technique for traditional CT. The experiments do not use actual data and do not consider the effect of scattered photons, which are issues that require further research. During the experiments, it is found that the accuracy of the spectrum significantly affects the effectiveness of the proposed method. How to acquire the spectrum quickly and accurately is also a challenge that needs to be addressed in practical experiments. Future work will cover the OMDRT of dual/multi-spectral CT and explore its effectiveness in other applications.
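For reference, synthesizing a virtual monochromatic attenuation image from the basis-material density images follows the standard basis-material decomposition relation μ(E) = Σᵢ ρᵢ·(μ/ρ)ᵢ(E); the short sketch below assumes per-material density maps and mass attenuation coefficients at a chosen energy and is not specific to the OMDRT implementation.

```python
import numpy as np

def virtual_monochromatic_image(density_images, mass_atten_at_E):
    """Combine basis-material density images (g/cm^3) with their mass
    attenuation coefficients (cm^2/g) at one energy E into a virtual
    monochromatic attenuation image: mu(E) = sum_i rho_i * (mu/rho)_i(E)."""
    mu = np.zeros_like(density_images[0], dtype=float)
    for rho, mac in zip(density_images, mass_atten_at_E):
        mu += rho * mac
    return mu
```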

    Apr. 25, 2024
  • Vol. 44 Issue 8 0811001 (2024)
  • Dongcheng Han, Shizhi Yang, Qiang Zhao, Liangliang Zhang, and Yan Deng

    Objective
    Static volumetric 3D display technology displays 3D objects with volume pixels (voxels) in 3D space, presenting real stereoscopic images. It can provide physiological and psychological depth cues for the human visual system to perceive 3D objects and can meet all-around observation needs. Additionally, it is the 3D display technology most likely to achieve high spatial resolution, multi-angle and simultaneous observation by multiple viewers, real-time interaction, and large size. Among these, static volumetric 3D display based on dual-beam addressing has attracted much attention due to its unique advantages, such as fine voxels, high spatial resolution, and easy realization of full-color display; moreover, the image does not jitter and requires no auxiliary equipment (such as glasses) for viewing. By employing the energy of two infrared photons to pump a material into an excited energy level, from which the electrons transition to a lower energy level and produce visible light, dual-beam addressing can be achieved effectively. The material that can implement this luminescence process is known as a two-step two-frequency (TSTF) up-conversion luminescence (UCL) material, and it has great potential for application in static volumetric 3D display technology due to its rapid response, high contrast, and high color purity. Despite this, such materials have rarely been reported in volumetric 3D display applications because of their low UCL efficiency and small display volume. Additionally, the existing literature focuses mainly on the properties of the materials, with little introduction of 3D display systems. These two points greatly limit the application of, and research interest in, volumetric 3D display with TSTF UCL materials. Thus, we develop a 3D imaging system based on the TSTF UCL mechanism of rare earth ions, and we build a projection imaging optical path based on digital light processing (DLP) and a line-laser shaping optical path based on a scanning galvanometer and a cylindrical mirror. The display system is based on TSTF UCL technology, employs dual infrared laser excitation, and adopts a digital micromirror device (DMD) and a scanning galvanometer to achieve rapid scanning of the image volume at high resolution. It has lower material performance requirements and cost, and a simpler configuration, than the dual-DLP imaging mode. This system is very suitable for the preliminary study of the stereoscopic display effect of TSTF UCL materials and also provides an effective approach for the imaging schemes of other addressable display media. Additionally, the TSTF UCL material utilized for the display is a cyclohexane solution of core-shell NaYF4:0.5%Er@NaGdF4:2%Yb@NaYF4:1%Er (NYF@NGF@NYF) nanocrystals, which has great potential for large-scale imaging.
    Methods
    We present a static volumetric 3D display system with wide wavelength coverage and fast response, which includes three parts: the display medium, the control system, and the laser system. In the experiment, NaYF4:Er@NaGdF4:Yb@NaYF4:Er nanocrystals with two-step two-frequency up-conversion capability are selected as the imaging medium. The control system employs a 1024×768 DMD and a scanning galvanometer to project the infrared laser. Through appropriately designed imaging software, the two-dimensional slices of the stereoscopic image are converted into the control signals of the DMD and scanning galvanometer. The laser system adopts 1550 nm and 850 nm infrared lasers as the addressing and imaging light sources and adjusts the beams and optical paths with appropriate parameters.
    Results and Discussions
    The upconversion emission spectra of NYF@NGF@NYF are measured [Fig. 3(b)]. After integrating the emission spectra over the visible range (500-700 nm), the contrast is found to be I_(1550+850)/(I_1550+I_850) = 28.69, where I_(1550+850), I_1550, and I_850 are the emission intensities under co-excitation by the 1550 nm and 850 nm lasers, under excitation by the 1550 nm laser alone, and under excitation by the 850 nm laser alone, respectively. With the self-made display material and the self-built static volumetric display system, a variety of 3D images can be demonstrated at a refresh rate of 40 Hz, and the images are clear and bright (Fig. 7). The maximum luminous power of a single point measured by the power meter reaches 0.5 mW, the theoretical maximum resolution is 30×1024×768, and the number of voxels is close to 23 million.
    Conclusions
    We report a two-beam scanning 3D imaging system based on the two-step two-frequency upconversion luminescence mechanism of rare earth ions. The DLP and scanning galvanometer in the optical path are controlled by a computer to build a 3D dynamic model in the liquid medium. The images presented by the system feature stability, high resolution, fast scanning speed, and a maximum of nearly 23 million voxels, without observation angle limitations. The various parts of the system, such as the light source, the optical path, and the display medium, are independent and can be quickly replaced and flexibly adjusted to adapt to the excitation properties of different materials. The material adopted for the display is a cyclohexane solution of core-shell NYF@NGF@NYF nanocrystals, which has great potential for large-scale imaging. The system has reference significance for the development of volumetric 3D display and supports preliminary research on the 3D display capability of display media such as TSTF materials. Meanwhile, this display system is easy to build, produces a clear display effect, and does not demand high material performance. It assists with preliminary research on up-conversion materials for 3D display and serves as a reference for exploring large-size volumetric 3D display technology.
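The quoted contrast figure can be reproduced from measured spectra with a short numerical integration; the function below assumes the three emission spectra are sampled on a common wavelength axis and integrates each over the 500-700 nm band before forming the ratio defined above.

```python
import numpy as np

def tstf_contrast(wavelength_nm, spec_both, spec_1550, spec_850, band=(500, 700)):
    """Contrast of two-step two-frequency UCL: integrate each emission
    spectrum over the visible band and return I_(1550+850)/(I_1550 + I_850)."""
    m = (wavelength_nm >= band[0]) & (wavelength_nm <= band[1])
    i_both = np.trapz(spec_both[m], wavelength_nm[m])
    i_1550 = np.trapz(spec_1550[m], wavelength_nm[m])
    i_850 = np.trapz(spec_850[m], wavelength_nm[m])
    return i_both / (i_1550 + i_850)
```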

    Apr. 25, 2024
  • Vol. 44 Issue 8 0811002 (2024)
  • Yanjie Wei, and Yao Xiao

    Objective
    Defects such as debonding, bulges, pores, pits, delaminations, and inclusions are common in composites during manufacture and service. They not only reduce strength and stiffness but can also cause structural failure. Reliable non-destructive testing methods are therefore required to assess the quality of composite materials. Long pulse thermography (LPT) is a full-field, non-contact, non-destructive testing method based on image visualization that provides an efficient way to assess defects. However, the defect visibility of LPT can be compromised by various factors such as experimental conditions, heating intensity, inherent material properties, and noise. The effectiveness of LPT is constrained by fuzzy edges and low-contrast defects. Consequently, enhancing defect visibility via signal processing methods is crucial for inspecting defects in composite materials with LPT. Thus, we propose an infrared image sequence processing method that utilizes the Fourier transform, phase integration, and edge-preserving filters to enhance the quality of LPT detection results for composite materials. Meanwhile, a few latent variables that better reflect the defect information inside the specimen are proposed by transforming the surface temperature information during the cooling period. These variables can eliminate the influence of uneven heating and improve defect visualization. The method enables clear delineation of defect edges and accurate measurement of defect sizes. Our approach and findings are expected to contribute to qualitative and quantitative measurements in the non-destructive testing of composite structures.
    Methods
    We propose a novel infrared image sequence processing algorithm to enhance the defect visibility of LPT. The approach comprises four steps: background uniformity processing, phase extraction, frequency-domain integration, and image quantization. Initially, thermal data are acquired after a square-pulse heating period and pre-processed to eliminate the inhomogeneity of the initial temperature distribution. Subsequently, Fourier phase analysis is conducted to extract the phase information related to defects of varying depths and sizes. Next, the phase difference between defect and sound regions is integrated pixel-wise along the frequency axis to merge the defect information into a new image. Lastly, the integrated phase image is transformed into an 8-bit visual image by applying edge-preserving filters and locally adaptive gamma correction.
    Results and Discussions
    To evaluate the effectiveness of the proposed method, we conduct an experiment on a glass fiber reinforced polymer (GFRP) panel and compare the method with various thermal signal processing methods. The efficacy of the proposed method is substantiated via qualitative and quantitative analysis, and the influence of the acquisition parameters is additionally discussed. Figure 7 illustrates the raw infrared images captured at different instants. The deep defects have low contrast and fuzzy edges. The phase images processed by background uniformity correction and Fourier transform are depicted in Figs. 9(a)-9(c). The visibility of defects in these phase images is improved compared with the raw images. However, the deeper defects are more obvious in the phase images at low frequencies and vice versa. It is challenging to identify all defects at various depths using only phase images at a single frequency. To this end, the frequency-domain integration method is utilized to amalgamate the phase information of all defects, and the resulting phase integration image is subsequently enhanced and quantized. The processed results are presented in Fig. 9(d), where all 20 defects of various depths and sizes are distinguishable. The edges of the defects are visible, which facilitates subsequent image segmentation and edge extraction for accurate defect size measurement. Additionally, three traditional thermal signal processing algorithms, namely absolute thermal contrast (ATC), thermographic signal reconstruction (TSR), and principal component analysis (PCA), are compared. Figures 11 and 12 highlight the superiority of the proposed method from qualitative and quantitative perspectives, respectively. Analyzing the variations in the temperature difference over time and the signal-to-noise ratio across various sampling frequencies (Fig. 13) allows the optimal acquisition time of 30 s and sampling frequency of 30 Hz to be determined, striking a balance between computational efficiency and detection effectiveness.
    Conclusions
    We employ a homemade infrared non-destructive testing system utilizing LPT for the experiments. A method for processing infrared image sequences based on the Fourier transform, phase integration, and edge-preserving filters is developed to mitigate the influence of uneven heating and enhance the contrast of defects. The inspection results of the GFRP panel demonstrate that phase signals can offer more information about defects, and integrating the phase information across all frequencies significantly enhances detection performance compared with a fixed-frequency phase image. Meanwhile, the accurate defect size measurement in the segmented images further validates the reliability of the proposed method. An important advantage of this method is that fewer parameters need to be determined, specifically the optimal sampling time and frame rate. Other data dimensionality reduction techniques such as ATC, TSR, or PCA can yield multiple principal component images requiring human visual interpretation. In contrast, the proposed method generates a single optimal detection image, which significantly improves detection automation. Finally, our study provides guidance for practical non-destructive inspection of composite structures.
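A compact sketch of the phase-extraction and frequency-domain integration steps, assuming an image sequence of shape (frames, height, width) acquired during cooling; the defect-free reference phase is approximated here by the per-frequency median, which is a simplification of the sound-region reference described in the paper, and the edge-preserving filtering and gamma correction steps are omitted.

```python
import numpy as np

def integrated_phase_image(seq, n_freqs=10):
    """FFT each pixel's cooling curve along time, take the phase at the
    first few non-zero frequency bins, subtract a reference phase per
    frequency, and sum the differences over frequency into one image."""
    spectrum = np.fft.fft(seq, axis=0)
    phase = np.angle(spectrum[1:n_freqs + 1])            # skip the DC bin
    # Reference phase per frequency (stand-in for a known sound region).
    ref = np.median(phase, axis=(1, 2), keepdims=True)
    return np.sum(phase - ref, axis=0)                   # shape (H, W)
```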

    Apr. 25, 2024
  • Vol. 44 Issue 8 0812001 (2024)
  • Zhenglin Liang, Bin Chen, and Shiqian Wu

    Objective
    The rapid advancement of modern information technology has led to the increasing maturity of three-dimensional (3D) shape measurement technologies. At present, such technology has been applied in biomedicine, cultural relic protection, human-machine interaction, and other fields. Structured light measurement is a prominent 3D measurement technology, distinguished by its non-contact nature, high precision, and high speed. It stands as one of the most extensively utilized and reliable 3D measurement technologies. The de Bruijn sequence, noted for the uniqueness of any fixed-length subsequence within the entire sequence, is widely employed in structured light coding. In discrete sequence coding, only one projection pattern coded by a de Bruijn sequence is required to measure the 3D information of an object, ensuring high measurement efficiency. In continuous phase-shifting coding, the de Bruijn sequence is applied to code the phase order to assist the phase unwrapping process. However, the presence of identical consecutive codes in a de Bruijn sequence makes it challenging to precisely determine fringe numbers and positions within uniform color areas of the captured images. To solve this problem, we propose a new type of de Bruijn sequence named the adjacency-hopping de Bruijn sequence. Such sequences guarantee that all neighboring codes are different while preserving the uniqueness of the subsequences. These two properties lay the foundation for accurate decoding and efficient matching. Meanwhile, an efficient and complete structured light coding and decoding process is devised by combining the adjacency-hopping de Bruijn sequence with the phase-shifting method to complete the 3D measurement task.
    Methods
    According to graph theory, a de Bruijn sequence can be generated by systematically traversing an Eulerian tour on a de Bruijn graph. In this paper, we redefine the vertex and edge sets of the de Bruijn graph to construct a specialized oriented graph. This oriented graph ensures that the adjacent codes of each vertex are different. As a result, a special type of de Bruijn sequence called an adjacency-hopping sequence, in which all neighboring codes are guaranteed to be different, can be generated by traversing an Eulerian tour on the oriented graph. This specialized sequence is then employed to encode the phase orders of the phase-shifting fringes. Specifically, the phase-shifting images are embedded into the red channel, while the phase-order images encoded with the proposed adjacency-hopping sequence are embedded into the green and blue channels. In the decoding process, the color images captured by the camera are separated to calculate the wrapped phase and decode the phase order, respectively. Subsequently, a hash lookup algorithm is utilized for sequence matching, facilitating the determination of the phase order. Ultimately, the 3D information is obtained.
    Results and Discussions
    Initially, a comparative experiment is devised to compare classic de Bruijn sequence-based coding approaches (e.g., the original de Bruijn sequence, the multi-slit de Bruijn sequence, and the recursive binary XOR sequence) with the proposed adjacency-hopping de Bruijn sequence coding method, showcasing the advantages of the proposed sequence in discrete coding. The experimental results illustrate that, similar to the improved de Bruijn sequence-based approaches (i.e., the multi-slit de Bruijn sequence and the recursive binary XOR sequence), the proposed method effectively addresses the fringe separation problem encountered with the original de Bruijn sequence. Furthermore, compared with the aforementioned improved methods, the proposed adjacency-hopping de Bruijn sequence coding method demonstrates higher matching efficiency and is more suitable for integration with phase-shifting measurements. Subsequently, a series of practical measurement experiments is designed to further illustrate the processing flow of the proposed method and evaluate its performance in terms of stability, measurement efficiency, and accuracy. The experimental results demonstrate that the proposed coding and decoding method exhibits good robustness in scenarios involving optical path occlusions. Hence, it can be applied to measure objects with complex surface structures. Moreover, the proposed coding and decoding method achieves measurement accuracy comparable to the selected comparative phase-shifting approaches while significantly reducing the number of projected patterns, resulting in improved measurement efficiency.
    Conclusions
    We introduce a special de Bruijn sequence named the adjacency-hopping de Bruijn sequence. We theoretically prove the existence of such sequences and elucidate their generation method. The proposed sequence guarantees that all neighboring codes are different while preserving the uniqueness of subsequences. The proposed sequence is then employed to encode phase orders, and a novel phase-shifting-based coding method is introduced. On the projection side, the proposed method leads to a significant reduction in the number of projected patterns, thereby improving projection efficiency. On the decoding side, each phase-order-coded fringe can be separated accurately while guaranteeing efficient matching. The experimental results demonstrate that, compared with the classical complementary Gray-code plus phase-shifting method and the multi-frequency heterodyne method, the proposed method achieves comparable measurement accuracy while reducing the number of projection patterns from 11 or 12 to 4.
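The following Python sketch illustrates the graph-theoretic construction described above: a restricted de Bruijn graph whose edges never repeat a symbol, traversed with Hierholzer's algorithm to obtain an Eulerian circuit, yields a sequence whose neighboring codes always differ and whose length-n windows are unique. The symbol count k, the order n, and the construction details are illustrative and may differ from the paper's exact definition.

```python
from collections import defaultdict

def adjacency_hopping_sequence(k=3, n=3):
    """Generate a sequence over k symbols whose consecutive symbols always
    differ and whose length-n windows are all distinct, via an Eulerian
    circuit on a restricted de Bruijn graph."""
    # Vertices: (n-1)-tuples with no two equal neighbouring symbols.
    # Edges: append a symbol different from the last one.
    edges = defaultdict(list)
    def build(prefix):
        if len(prefix) == n - 1:
            for c in range(k):
                if c != prefix[-1]:
                    edges[prefix].append(prefix[1:] + (c,))
            return
        for c in range(k):
            if not prefix or c != prefix[-1]:
                build(prefix + (c,))
    build(())

    # Hierholzer's algorithm for the Eulerian circuit.
    start = next(iter(edges))
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if edges[v]:
            stack.append(edges[v].pop())
        else:
            circuit.append(stack.pop())
    circuit.reverse()

    # Collapse the visited vertex sequence into the symbol sequence.
    return list(circuit[0]) + [v[-1] for v in circuit[1:]]

print(adjacency_hopping_sequence())
```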

    Apr. 25, 2024
  • Vol. 44 Issue 8 0812002 (2024)
  • Guihua Li, Ziwei Wang, Weiqing Sun, Pengxiang Ge, Haoyu Wang, and Mei Zhang

    Objective
    Digital image correlation (DIC) is a processing method commonly employed for image matching, and after nearly forty years of development, its accuracy, efficiency, and practicality have improved significantly. With the development of science and technology, DIC technology for three-dimensional (3D) measurement should also be economical and practical, using relatively simple devices to realize a full range of functions. With the assistance of cameras and external equipment, DIC measurement systems can also realize multi-viewpoint and multi-directional measurements, which many scholars have studied in depth. Among these, the single-camera system offers flexible field-of-view adjustment and a simple optical path but suffers from poor stability and low accuracy. The multi-camera system requires several cameras, and its calibration process is complicated. Although a multi-camera measurement system can improve the range and accuracy of 3D measurement, it is difficult to apply widely in 3D full-field measurement because of its demanding environmental requirements and expensive cameras. Given the shortcomings of the existing single-camera and multi-camera systems, we propose a dual-camera 3D reconstruction method assisted by dual plane mirrors.
    Methods
    We put forward a visual 3D measurement method assisted by dual plane mirrors, which analyzes the virtual-real correspondences of the corner points in the mirrors to obtain the reflection transformation matrix. The virtual-to-real transformation of the object surface is then completed with the reflection transformation matrix, and 3D full-field measurement is finally realized. Additionally, this method avoids spraying diffuse speckles on the mirrors, which would occupy the spatial resolution of the camera, and makes the solution of the reflection transformation matrix simple and efficient. Firstly, the image information of the front surface and the back side surfaces of the object is acquired simultaneously by the dual-camera DIC measurement system (Fig. 1). Secondly, the calibration plate is placed in front of the plane mirrors, and the dual cameras observe the real calibration plate and its virtual image in the mirrors at the same time (Fig. 4). The midpoint method based on the common perpendicular is adopted to determine the 3D coordinates of the corner points in space (Fig. 5), and the specific positions of the dual plane mirrors are finally determined by changing the position of the calibration plate several times. Finally, the reflection transformation matrix is solved from the mirror position equations, and the 3D reconstruction of the object is completed.
    Results and Discussions
    To verify the accuracy of the proposed method, we conduct static and dynamic experiments on the measured parts. In the static experiments, the 3D contour of a game coin is reconstructed, and in the dynamic experiments, the thermal deformation of a five-sided aluminum column is investigated (Fig. 6). With the proposed method, the dual-mirror equations and the reflection transformation matrix are obtained at a mirror included angle of 108° (Table 1). The front and back contours of the ordinary game coin are reconstructed in three dimensions; the theoretical thickness of the game coin is 1.80 mm, and the measured thickness is around 1.75 mm (Fig. 9). The proposed method is compared with the reconstruction method of spraying diffuse speckles on the mirror surface to verify its 3D reconstruction accuracy (Fig. 9), and the 3D reconstruction of the game coin by the proposed method is found to be better than that obtained by spraying diffuse speckles on the mirror surface. Meanwhile, the proposed method avoids occupying the spatial resolution of the camera with speckles sprayed on the mirror surface and achieves higher accuracy.
    The 3D reconstruction and thermal deformation measurement results of the aluminum column are as follows. Firstly, the reconstruction results of the surface profile of the five-sided aluminum column are obtained by the proposed method (Fig. 10); the real height of the aluminum column is 70.00 mm±0.01 mm, and the average measured height is 70.0035 mm, indicating a good measurement result. Secondly, the average height change of the five outer surfaces of the aluminum column is obtained during thermal deformation (Table 2). The thermal deformation displacement cloud map of the outer surfaces is shown in Fig. 11, and the height change of the different surfaces during the cooling process is illustrated in Fig. 12. To more intuitively demonstrate the accuracy of the proposed real-virtual transformation method, we compare and analyze the deviation values of the height change obtained by the two methods at each node (Fig. 13), which shows that the proposed method has higher measurement accuracy.
    Conclusions
    We propose a dual plane-mirror-assisted visual DIC 3D full-field measurement method. The static experiment results indicate that, for the 3D reconstruction of the game coin, the proposed method is better and more accurate than the reconstruction method of spraying diffuse speckles on the mirrors. The results of the dynamic thermal deformation experiments indicate that when the temperature of the five-sided aluminum column is reduced from 330 ℃ to 20 ℃, the height change of the outer surfaces of the column is basically consistent with the simulation results of the finite element software, and the deviation values of the height change measured by the proposed method are smaller than those of the method of spraying diffuse speckles on the mirrors. Since the proposed method avoids spraying diffuse speckles on the mirrors, which would occupy the spatial resolution of the camera, it features simple operation, high measurement accuracy, and good application prospects.
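A minimal sketch of the real-virtual (reflection) transformation at the heart of the method: once a mirror plane has been calibrated, points reconstructed from the mirrored view are mapped back to real coordinates by a plane reflection. The plane parameterization (unit normal n and offset d with n·X + d = 0) is an assumed interface, not the paper's exact notation; written in homogeneous form, the same mapping is the reflection transformation matrix referred to above.

```python
import numpy as np

def reflect_points(points, plane_normal, plane_d):
    """Map virtual (mirrored) 3D points back to real coordinates with the
    plane reflection P' = P - 2 (n.P + d) n, where n is the unit mirror
    normal and d the plane offset (n.X + d = 0 on the mirror)."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    pts = np.asarray(points, dtype=float)
    dist = pts @ n + plane_d            # signed distance of each point to the mirror
    return pts - 2.0 * dist[:, None] * n
```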

    Apr. 25, 2024
  • Vol. 44 Issue 8 0812003 (2024)
  • Tianyu Yuan, Xiangjun Dai, and Fujun Yang

    ObjectiveMonocular stereo vision features low cost and compactness compared to binocular stereo vision and has a broader application prospect in space-constrained environments. Stereo vision systems based on dual-biprism are widely employed in engineering measurement due to their adjustable field of view (FoV). Compared to other types of monocular vision systems, this method is compact and easy to adjust. Topography reconstruction and deformation measurement are the main application purposes of monocular vision systems. The error factors existing in the imaging system should be considered and evaluated to obtain high-precision measurement results. The acquisition and reconstruction of depth information are crucial for accuracy. The depth equation derived from the optical geometry can be adopted to analyze factors affecting the reconstruction accuracy. Analyzing the influence of object distances and angles on image depth information and disparity in depth equations can provide references for system layout and optimization. Additionally, the artificially placed dual-biprism has offset and rotation, and the errors caused by postures will change the imaging model which is derived in the ideal state. Therefore, model correction considering posture errors is important for high-precision imaging. Meanwhile, the dual-biprism posture will lead to the FoV difference. The quantitative study of the FoV caused by the posture can be helpful for the reasonable arrangement of system layout and object positions. Based on the previous studies, to make the monocular stereo system composed of dual-biprism more applicable to high-precision topographic reconstruction and deformation measurement, we will conduct an in-depth study on the influences of systematic errors and prismatic postures on the FoV.MethodsThe depth equation of the monocular vision system is expressed by geometrical optics and the ray tracing method. By making a small angle assumption and ignoring the distance between the dual-biprism and the camera, a depth equation with parameters such as disparity, included angle, and object distance can be obtained, as demonstrated in Eq. (8). By solving the partial derivative of the depth equation, the relationship among object distance, included angle, and disparity is obtained, as illustrated in Eqs. (9)-(10). The classification of prism postures is discussed, including rotation around the base point and offset along the x- or z-direction, as shown in Fig. 3. According to the systematic error introduced by the prism postures, the imaging model is further modified. Furthermore, the modified model is utilized to analyze the influence of prism postures on the FoV, as described in Eqs. (12)-(14). The experiments include verifying the validity of the derivation of the depth equation by leveraging the DIC results as true values, proving the model correctness by calculating the coordinates of the corner points, and investigating the FoV changes caused by the prismatic postures by matching the coordinates of the corresponding points. First, the experiment of object distance change is carried out. After keeping the object distance unchanged, the included angle of the prism is changed to evaluate the influence of the object distance and included angle on the disparity respectively. The DIC results are compared with the results of Eq. (8) to verify the derivation correctness. 
The dual-biprism is offset according to the classifications, the image is collected before and after posture changes, and the pixel coordinates of the corners of the whole field are extracted by the corner recognition method. The angular coordinates and offset distance before posture change are substituted into Eqs. (12)-(14). The calculated pixel coordinates are compared with the pixel coordinates identified above to verify the equation derivation correctness. Finally, the influence of postures on the FoV is determined by tracking the pixel coordinates of specific corners in the calibration plate before and after the prism posture changes.Results and DiscussionsThe depth equation for the monocular stereo vision system can be described as Eq. (8). The influence of the parameters on the disparity can be obtained by solving the derivatives regarding the object distance and included angle for the depth equation respectively. The derived equations can be expressed as Eqs. (9) and (10). The depth equation description shows that the disparity decreases with increasing distance and shows a nonlinear change. As shown in Fig. 3, all three posture classifications cause a change in the standard virtual point model. The camera is calibrated after the device is placed to verify the derivation validity. As shown in Fig. 8, the FoV changes introduced by the postures can be obtained by tracking the corner points extracted in the calibration board before and after the posture variations. When the prism group rotates 1° clockwise around the base point, the FoV in each channel will shift anticlockwise, which will also cause the FoV in the overall overlapping area to move in this direction. If only the right prism rotates 1° clockwise, it will make the pixel coordinates of the virtual point in the right channel shift 57 pixels to the right, and the FoV offset of the side channel will reduce the overlapping FoV of the system. When the prism group is offset to the right by 1 mm along the x-direction, the same trend will be introduced. If only the right prism is offset, the virtual point will be offset by 49 pixels to the right. Meanwhile, when the prism group moves 1.4 mm along the positive half-axis of the z-direction, there is no significant FoV change in Fig. 8(c). The speckle images before and after the object distance and angle changes are captured, with the disparity map obtained. Then, the depth map can be computed using Eq. (8), in which the depth information of each point in the overlapping FoV can be obtained. The profile of the measured object can be obtained using the coordinate transformation method. The derivation correctness of the equation can be verified by selecting three cross-sections on the object and comparing the profiles of the object obtained by the two methods, as shown in Figs. 9-11. The corrected models considering the prismatic postures are illustrated in Eqs. (12)-(14). The pixel coordinates of the corner points obtained before the posture change are calculated by substituting them into Eqs. (12)-(14) to obtain the offset coordinates, which can be compared with the pixel coordinates of the corner points extracted after the offset to verify the correctness of the corrected model, as shown in Fig. 12.ConclusionsThe relationship between depth equation and disparity in prism-splitting type monocular stereo vision systems is studied, with the system error introduced by the dual-biprism postures considered. 
The depth equation of the system is derived by combining the virtual point model and the ray tracing method. By solving the derivatives of the depth equation, the influence of object distance and included angle on disparity is studied. The results show that the disparity of the image increases with decreasing object distance and increasing included angle. The imaging model is modified to address the systematic errors introduced by the prism postures. The experimental results show that the pixel coordinates of virtual points can be accurately calculated using the modified model with known offset distances of the dual-biprism and world coordinates of spatial points, which determines the mapping relationship of spatial points for different prism postures. Finally, the rotation of the dual-biprism or its offset along the direction perpendicular to the optical axis of the camera changes the FoV of the system, whereas a posture change along the optical axis of the camera only reduces the imaging range. These results provide references for high-precision reconstruction and deformation measurement with monocular stereo vision systems composed of optical elements.
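As a rough illustration of the sensitivity analysis described above, the sketch below numerically probes how disparity responds to object distance and included angle. The closed-form depth equation (Eq. (8)) is not reproduced here, so the `disparity` function is a hypothetical placeholder with only the qualitative behavior the abstract reports; the central-difference machinery is the generic part.

```python
import numpy as np

def disparity(Z_mm, alpha_deg, k=5.0e4):
    """Hypothetical stand-in for the paper's depth/disparity relation:
    disparity falls with object distance Z and grows with included angle."""
    return k * np.tan(np.deg2rad(alpha_deg)) / Z_mm

def central_diff(f, x, y, wrt="x", h=1e-3):
    # Central-difference estimate of a partial derivative of f(x, y)
    if wrt == "x":
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

Z, alpha = 500.0, 10.0                              # mm, degrees (example values)
print(central_diff(disparity, Z, alpha, wrt="x"))   # < 0: disparity drops with distance
print(central_diff(disparity, Z, alpha, wrt="y"))   # > 0: disparity grows with angle
```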

    Apr. 25, 2024
  • Vol. 44 Issue 8 0812004 (2024)
  • Yonghong Wang, Wanlin Chen, Bingfei Hou, and Biao Wang

    ObjectivePosition and pose are two basic parameters describing the position and attitude of an object in space, and they are extensively researched in robot grasping, automatic driving, and industrial inspection. Traditional attitude estimation methods, such as mechanical systems, laser trackers, inertial units, and other attitude measurement systems, have their drawbacks, including the need for contact measurement, susceptibility to interference by ambient light, and optical path complexity. As an optical measurement method, the digital image correlation (DIC) method features strong anti-interference ability and a simple optical path without contact. Meanwhile, it has been widely employed in the measurement of displacement, strain, and mechanical properties, but less research on attitude measurement has been conducted. At present, there is a position measurement system based on the DIC method, which adopts the space vector method. This method requires the calculation of the inverse tangent function in the rotation angle calculation, which has a large error and requires more calculation points. To deal with the shortcomings of the traditional position measurement system, we propose a position estimation system based on the three-dimensional digital image correlation (3D-DIC) method to complete the measurement of multiple position parameters of a rigid body in space. Meanwhile, a new position solution method is put forward to address the weaknesses of the existing space vector method, and a new matching calculation method is also proposed to solve the problem of DIC in measuring large rotation angles.MethodsThe mathematical model of the position solution based on singular value decomposition (SVD) is first derived, and then the position measurement system is built for experiments. The specimen, which has been sprayed with scattering spots, is fixed on a moving platform and moves along with the platform. After calibrating the binocular camera, the image sequences before and after the specimen movement are captured by the binocular camera, and 3D-DIC is employed to match the image sequences before and after the movement and thus obtain the spatial three-dimensional coordinates of the calculation points. After obtaining a set of 3D coordinates before and after the movement of the calculation points, the SVD method is adopted to solve the rotation matrix and translation matrix, with the movement position parameters of the specimen solved. For the large errors of 3D-DIC in measuring large rotational deformation, we propose a matching calculation method that adds intermediate images. The feasibility and accuracy of the proposed method are verified by the translational degree of freedom and rotational degree of freedom experiments. Finally, a set of accuracy comparison experiments with the space vector method are conducted to verify whether the proposed method is better.Results and DiscussionsAfter experimental validation, the position estimation system based on the proposed 3D-DIC method can realize the measurement of multiple position parameters of a rigid body in space. The absolute errors of the three translational degrees of freedom in the transverse, longitudinal, and elevation directions are less than 0.07 mm (Fig. 6), and the absolute errors of the yaw and roll angles are less than 0.02° when the rotation angle is less than 10° (Figs. 7 and 9).
Meanwhile, the proposed matching calculation method of adding intermediate images also reduces the error of large angle measurement (Fig. 10). The accuracy comparison experiments with the existing space vector method show that the proposed method has smaller measurement errors in rotation angle measurement and requires fewer calculation points (Table 2).ConclusionsWe establish a position estimation system based on the 3D digital image correlation method, and propose a position solution method based on singular value decomposition. The 3D coordinates of the computation point are obtained by taking the image sequence before and after the motion of the object to be measured for the position solution, and multiple position parameter measurement of the spatial rigid body is realized. The results of the three translational degrees of freedom measurement experiments validate that the proposed 3D-DIC-based position measurement system is suitable for measuring the spatial translational degrees of freedom of the rigid body. Additionally, the large-angle measurement experiments verify that the proposed improved matching calculation method which adds intermediate images has obvious improvement in large-angle measurements, and the results of yaw angle and roll angle measurements show that the present measurement system is also applicable to the rotational degree of freedom position measurements of small and large angles. Compared with the traditional position estimation system, our method features high accuracy and a simple optical path without contact. Compared with the existing space vector method, our study has small measurement errors in both yaw and roll angles, and the required number of calculation points is also greatly reduced. In summary, the position and pose measurement system based on our 3D digital image correlation method is suitable for spatial rigid body position measurement, and the measurement accuracy is high, which meets the measurement requirements.
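The SVD-based position solution described above is, in essence, the classical least-squares rigid-body alignment between two matched 3D point sets. The sketch below is a minimal, generic implementation of that standard procedure (often called the Kabsch method), not the authors' code; the point data, angle extraction, and variable names are illustrative assumptions.

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Estimate rotation R and translation t such that Q ≈ R @ P + t,
    where P, Q are 3xN arrays of matched 3D points (Kabsch/SVD method)."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation (det = +1)
    t = q_mean - R @ p_mean
    return R, t

# Example: recover a known pose from noiseless synthetic points
P = np.random.rand(3, 20)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
Q = R_true @ P + t_true
R_est, t_est = rigid_transform_svd(P, Q)
yaw = np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0]))  # one Euler angle from R
print(yaw)   # ≈ 90°
```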

    Apr. 25, 2024
  • Vol. 44 Issue 8 0812005 (2024)
  • Xiaoyong Gao, Yangyang Liu, Guangxi Hu, Liangjun Lu, and Haimei Luo

    ObjectiveIn recent years, research on silicon-based thermo-optic (TO) devices has deepened and become more complex, and realizing higher-performance silicon-based TO devices is the main purpose of our research. Many kinds of silicon-based optical switches have been developed so far, and N×N TO-integrated switches and electro-optic (EO)-integrated switches have been widely studied. The EO switch has a fast switching speed (nanosecond level), but its crosstalk and insertion loss are high due to the free carrier absorption effect. In contrast, TO switches excel in maintaining low loss and low crosstalk, but their switching response time is intrinsically limited, typically on the microsecond scale. Optical switches in hybrid network systems are typically used to handle high-capacity and high-bandwidth optical communication services, making TO switches the preferred choice due to their low loss and low crosstalk characteristics. We therefore design a Mach-Zehnder interferometer (MZI)-type silicon-based TO switch with large bandwidth, simple structure, and high robustness.MethodsThe MZI-type 1×8 silicon-based TO switch proposed and prepared in this paper is composed of one 2×2 MZI and six 1×2 MZI switching units connected in a binary tree structure, with two input ports and eight output ports: the first stage comprises a 2×2 MZI switching unit, the second stage comprises two 1×2 MZI switching units, and the third stage comprises four 1×2 MZI switching units. Effective control of the optical signals is achieved by the phase shift of the TO-tuned phase shifter, which directs the light to the destination branch waveguide, thus realizing the optical switching function. The coupler and phase shifter in the optical switching unit are optimized by using the finite-difference time-domain method and the particle swarm optimization algorithm to improve the switching performance and reduce the chip size. The long connecting waveguides are designed as wide waveguides of 2 μm to reduce the waveguide transmission loss. The package connects the optical switch chip to a 14-channel optical fiber array by end-face coupling, cured with an ultraviolet curing adhesive. In addition, a multi-channel voltage source is designed, which mainly consists of a CPU, an op-amp (LM324), analogue switches, and four DAC modules. This multi-channel voltage source has 32 selectable ports, and the synchronous switching of the optical switch ports is achieved by simultaneously regulating the voltages of all stages of thermal phase shifters through the host computer. The results show that the optical switch achieves low on-chip insertion loss, low crosstalk, and a reduced response time.Results and DiscussionsThe experiments demonstrate that the designed and prepared 1×8 TO switch performs well in all aspects. Its average on-chip insertion loss is about 1.1 dB (Fig. 8); the fiber-to-fiber loss fluctuates among different paths because the connecting waveguide lengths differ from path to path and the fiber coupling efficiency differs from port to port. The crosstalk of its eight output ports is less than -23.6 dB (Fig. 9), and crosstalk is the leading cause of switching signal degradation; the 2×2 MMI coupler can reduce crosstalk. The response time of the switch is less than 60 μs (Fig. 11). Because the silica cladding layer at the bottom is thicker than the buffer layer and the thermal conductivity of silica is about 1/100 of that of silicon, heat diffuses slowly from the core layer to the substrate after heating, so the falling-edge time is longer than the rising-edge time. When the frequency of the input electrical signal is high, the response of the TO phase shifter can no longer follow the electrical signal due to its limited response bandwidth, and the response is no longer square-wave in character; however, the rise and fall times become shorter. In addition, there is a difference in the response time of the switch due to manufacturing process errors in different phase shifters.ConclusionsOur proposed 1×8 optical switch chip is based on a tree structure consisting of one 2×2 MZI and six 1×2 MZIs, with a TiN heater on the upper/lower arm of each thermal phase shifter. The switch chip is constructed on an SOI platform by using a CMOS-compatible process with a size of 1.75 mm×3 mm. The chip exhibits an on-chip insertion loss of about 1.1 dB at the operating wavelength of 1550 nm, a crosstalk of less than -23.6 dB, a switching response time of better than 60 μs, and an average power dissipation of about 34.09 mW. The experimental results show that the 1×8 TO switch has the advantages of compactness, low loss, and low crosstalk.
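To make the binary-tree routing concrete, the sketch below maps an output-port index onto the branch choice of the single 2×2 unit and the two downstream 1×2 units along its path. The unit labelling, bit assignment, and the idea that each branch choice corresponds to one calibrated heater voltage are illustrative assumptions, not the paper's control scheme.

```python
def route_1x8(port: int):
    """Return (stage, unit index, branch) along the path to output `port`.

    Hypothetical labelling: stage 1 holds the single 2x2 MZI, stage 2 holds
    two 1x2 MZIs, stage 3 holds four 1x2 MZIs; branch 0/1 means upper/lower
    output arm. In hardware each branch choice would correspond to one
    calibrated heater voltage applied by the multi-channel voltage source.
    """
    assert 0 <= port <= 7
    b2, b1, b0 = (port >> 2) & 1, (port >> 1) & 1, port & 1
    return [
        ("stage1", 0, b2),               # 2x2 MZI selects half of the tree
        ("stage2", b2, b1),              # one of the two 1x2 MZIs
        ("stage3", (b2 << 1) | b1, b0),  # one of the four 1x2 MZIs
    ]

for p in range(8):
    print(p, route_1x8(p))
```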

    Apr. 25, 2024
  • Vol. 44 Issue 8 0813001 (2024)
  • Xiangyu Li, Yanhong Wang, Jingzhi Wu, and Peng Zhang

    ObjectiveBased on the light-matter interaction, optical tweezers exert strong forces on micro- and nano-sized particles through momentum transfer, in a non-contact and non-destructive manner. In bioscience, optical tweezers have been applied to capturing bacteria and to non-invasive manipulation of organelles within a single living cell, and they have become an effective way to detect and control micro- and nano-scale objects. However, optically capturing and manipulating individual particles smaller than the light wavelength remains a major challenge. To overcome the light diffraction limitation, the local surface plasmons (LSPs) in a metal nanostructure can effectively focus and confine the propagating light within the nanometer scale, with better spatial locality and higher local field intensity. A plasmonic potential well is generated by the coupling of electromagnetic waves at the metal-dielectric interface through surface plasmon excitation. This unique electromagnetic mode can confine light beyond the diffraction limit, causing the electromagnetic field to decay exponentially from the metal-dielectric interface. These two properties are crucial for optical capture applications: the former significantly reduces the volume of the captured object, and the latter enhances the generated optical force due to the field intensity gradient. We study the distributions of the electric field and the Poynting vector under different polarization modes, as well as the distributions of the optical force and potential well generated by the interaction between nanoparticles and the scattering near field of the coaxial structure. This provides a new way of capturing and manipulating micro and nano particles and other living cells in a low-concentration solution.MethodsThe coaxial structure consists of a silicon concentric ring and a silver layer. The laser source illuminates the coaxial structure vertically from the bottom, and the electric field distributions and Poynting vector diagrams under linear polarization, circular polarization, and different coaxial apertures are calculated by the finite-difference time-domain (FDTD) method. Additionally, the Maxwell stress tensor method is adopted to calculate the optical force generated by the interaction of dielectric particles with a 10 nm radius with the structure in free space. The optical trapping performance of the structure in the two light modes is studied. The optical trapping force and potential well distributions of particles in the x-y and y-z planes under different light source modes are calculated. The force analysis of the nanoparticles shows that the positive force Fx and negative force F-x in both the x-y and y-z planes enable the coaxial structure to trap particles at the center of the potential well.Results and DiscussionsThe coaxial structure is coupled with the optical field to enhance transmission and the local electromagnetic field (Fig. 1). The transmission characteristic curve shifts to the right as the height h of the coaxial structure increases. When h=150 nm, the transmission spectrum has two peaks at the wavelengths of 540 nm and 750 nm. These peaks arise because light waves transmitted into a coaxial aperture of finite thickness are reflected at its ends, which gives rise to Fabry-Perot resonance modes. Under the action of the circularly polarized light field, the coaxial plasmonic structure generates a vortex light field of spin energy flow (Fig. 6).
The resulting vortex field affects the spin angular momentum carried by the circularly polarized light as it passes through the coaxial aperture, and orbital angular momentum is transmitted in the near field through the spin-orbit interaction of the electromagnetic field. The optical trapping force and potential well distributions of particles in the x-y and y-z planes under different light source modes are calculated respectively (Figs. 7 and 8). A dual trapping potential well is generated, which expands the trapping region, overcomes the Brownian diffusion of nanoparticles in a low-concentration solution, and improves the trapping efficiency, providing a new way of capturing and manipulating micro-nano particles and other living cells in a low-concentration solution.ConclusionsThe distributions of the electric field and the Poynting vector under linearly and circularly polarized light sources are calculated by the FDTD method, and the optical trapping performance for nanoparticles under the two light modes is analyzed. The results show that the transmission reaches its maximum at 750 nm, and the depth of the potential well reaches 17 kBT under an incident light intensity of 1 μW/μm2. Meanwhile, the circularly polarized light forms a vortex field above the structure with a potential well depth of 8 kBT, which overcomes the Brownian diffusion of nanoparticles in a low-concentration solution and improves the capture efficiency. The results can be employed both for optical tweezers manipulating nanoparticles and for semiconductor structures for laser emission.
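One step implied above, converting a computed optical force profile into a trapping potential expressed in units of kBT, can be sketched as follows. The force profile here is a toy analytic model standing in for the Maxwell-stress-tensor output, and the temperature and magnitudes are assumptions for illustration only.

```python
import numpy as np

kB = 1.380649e-23      # J/K
T = 300.0              # K, assumed ambient temperature

# Toy 1D restoring-force profile along x (N), standing in for the force on a
# 10-nm-radius particle exported from an FDTD + Maxwell-stress-tensor run.
x = np.linspace(-400e-9, 400e-9, 401)                      # m
F = -2.0e-13 * (x / 150e-9) * np.exp(-(x / 150e-9) ** 2)   # pulls toward x = 0

# Trapping potential U(x) = -∫ F dx (trapezoid rule), referenced to the minimum
U = -np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(x))))
U -= U.min()
print(f"well depth ≈ {U.max() / (kB * T):.1f} kBT")
```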

    Apr. 25, 2024
  • Vol. 44 Issue 8 0814001 (2024)
  • Zhennuo Wang, Li Zhong, Deshuai Zhang, Suping Liu, Zhipeng Pan, Jinyuan Chang, Tianjiang He, and Xiaoyu Ma

    ObjectiveAs the main pumping light source of solid-state lasers, fiber lasers, and fiber amplifiers, the 976 nm diode laser has been widely used in industrial processing, medical treatment, communication, and other fields. As an important pumping light source of the erbium-doped fiber amplifier, a 976 nm fundamental transverse mode diode laser can achieve high-efficiency coupling with the fiber, improve the output performance of the fiber amplifier, and effectively reduce its cost. It plays a very important role in promoting the application of erbium-doped fiber amplifiers in fiber communication and other fields. However, since the ridge waveguide in a ridge diode laser uses a weak refractive index guiding mechanism to suppress higher-order transverse modes, it is greatly affected by the lateral diffusion of carriers and the self-heating effect, which eventually degrades the mode guiding of the ridge waveguide and increases the far-field angle. To further improve the coupling efficiency of diode lasers in fiber laser pumping applications and reduce the application cost of fiber lasers, it remains important to realize fundamental transverse mode ridge diode lasers with a low far-field divergence angle and low power consumption.MethodsUsing InGaAs/GaAsP material as the strain-compensated quantum well structure, with GaAsP of high bandgap width as the barrier material, can effectively reduce carrier leakage from the quantum well, provide strain compensation for the InGaAs compressive strain quantum well, and improve the epitaxial growth quality. To achieve low loss, high output optical power, and a low far-field angle, we optimize the thickness of the waveguide layers by using an asymmetric large optical cavity epitaxial structure. The doping concentrations of the epitaxial layer materials are optimized to reduce the series resistance of the device, so as to achieve high power, high conversion efficiency, and low far-field output of the ridge diode laser. To achieve fundamental transverse mode output, we use the effective refractive index method to design and study the width and depth of the ridge waveguide and map the optical field distribution inside the device. Finally, according to the technological conditions, a ridge waveguide structure with a width of 5 μm and a depth of 0.85 μm is selected.Results and DiscussionsAfter the laser chip is designed and prepared, the output performance of the device is tested at 25 ℃. The device threshold current is about 51.2 mA, and a maximum continuous output power of 422 mW can be obtained at 550 mA injection current, with a maximum electro-optic conversion efficiency of 53.6% (Fig. 3). The peak wavelength is 973.3 nm at 550 mA injection current, and the corresponding spectral line width (FWHM) is 1.4 nm. When the injection current is 500 mA, the vertical and horizontal far-field distributions of the device are measured (Fig. 5), and the corresponding vertical and horizontal far-field divergence angles (FWHM) are 24.15° and 3.9°, respectively. This indicates that the prepared ridge diode laser has a good fundamental transverse mode property, which is conducive to improving the coupling efficiency between the diode laser and the fiber. Subsequently, we analyze the temperature characteristics of the device at operating temperatures of 15-35 ℃ and obtain a relatively high characteristic temperature of about 194 K.
This is because GaAsP material with a high bandgap width is added on both sides of the InGaAs quantum well as the barrier layers, and the large band offset between the two materials can better suppress carrier leakage from the quantum well. As a result, the injection current utilization, the luminous efficiency, and the temperature stability of the laser device are improved. Similarly, the horizontal far field changes little at 15 ℃, 25 ℃, and 35 ℃, with corresponding horizontal far-field divergence angles of 3.45°, 3.90°, and 3.90°, respectively, which is conducive to increasing the coupling efficiency in optical pumping applications (Fig. 7).ConclusionsWe design and fabricate a 976 nm fundamental transverse mode ridge diode laser. To improve the conversion efficiency of the device, we introduce high-bandgap GaAsP materials on both sides of the InGaAs compressive strain quantum well as tensile strain barriers to improve the internal gain of the device, inhibit carrier leakage from the quantum well, and improve the current utilization rate. In addition, we optimize the waveguide layer thickness and doping concentration in the device epitaxial structure to reduce the far-field divergence angle and achieve high-efficiency output by using an asymmetric large optical cavity epitaxial structure design. A 976 nm strain-compensated low far-field fundamental transverse mode ridge diode laser with a ridge width of 5 μm and a cavity length of 1500 μm is fabricated. At the operating temperature of 25 ℃, a maximum continuous output power of 422 mW is obtained, the peak wavelength is 973.3 nm, and the spectral line width (FWHM) is 1.4 nm. When the injection current is 500 mA, the vertical and horizontal far-field divergence angles (FWHM) are 24.15° and 3.90°, respectively. In the operating temperature range of 15-35 ℃, the far-field divergence angle of the ridge diode laser is tested and analyzed. It is found that the far-field distribution of the device changes little with the increase in the test temperature, and the far-field divergence angle remains small.
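The characteristic temperature quoted above is conventionally defined through I_th(T) = I_0·exp(T/T_0). The sketch below shows how T_0 would be extracted from threshold currents measured at several heat-sink temperatures; the current values are hypothetical examples, not the device data.

```python
import numpy as np

# Extract the characteristic temperature T0 from threshold currents measured
# at several temperatures, using I_th(T) = I0 * exp(T / T0). The currents
# below are illustrative placeholders.
T = np.array([15.0, 25.0, 35.0]) + 273.15       # K
I_th = np.array([48.7, 51.2, 53.9]) * 1e-3      # A (hypothetical)

slope, _ = np.polyfit(T, np.log(I_th), 1)       # ln(I_th) = T/T0 + const
T0 = 1.0 / slope
print(f"T0 ≈ {T0:.0f} K")
```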

    Apr. 25, 2024
  • Vol. 44 Issue 8 0814002 (2024)
  • Meng Zhang, Xin Wang, Suhui Yang, Bao Li, Zhuo Li, Jinying Zhang, and Yanze Gao

    ObjectiveThe quantum cascade laser (QCL) is a semiconductor laser based on inter-subband electronic transitions, which results in a broad emitting wavelength range covering 3 to 300 μm. QCL is an ideal light source in the fields of gas sensing, environmental monitoring, medical diagnosis, and photoelectric countermeasures. However, the relatively low output power (1-3 W) of the single transverse mode QCL is a major limitation for its applications. Laser beam combining technology is an effective way to enhance the output power. At present, the power combining of mid-infrared and long-wave infrared QCLs is heavily limited by the availability of low-loss optical materials and components. Beam combining with high efficiency and low loss is challenging, and few research results have been reported. Therefore, the fiber combining of long-wave infrared QCLs in the 7.6-7.8 μm wavelength band was studied in this paper. The laser power was combined with a 4-in-1 single-mode hollow-core fiber bundle.MethodsIn order to realize high-efficiency single-mode fiber coupling of QCLs, an optical fiber coupling system was designed, composed of a QCL collimator and a fiber coupler. Due to the large QCL emitting area and large divergence angle, an aspheric collimator with a large numerical aperture was designed and fabricated. During the optical design and optimization, the QCL was treated as an extended light source. Using the optimized collimator, a fiber coupling efficiency of 88.9% was obtained. To combine the laser beams from the individual QCLs, a 4-in-1 fiber combiner was fabricated using AgI/Ag single-mode hollow-core fiber, which has a high damage threshold and low transmission loss. During the preparation, the outer protective layer of the fiber was stripped away, and the four fibers were tightly arranged in the sleeve and fixed. Finally, the fiber was protected by metal armor. The input terminals of the fiber combiner were four independent SMA905 fiber connectors, and a unified SMA905 connector was made at the output end.Results and DiscussionsThe optical fiber coupling experiments are conducted using the designed optical fiber coupling system and the prepared long-wave single-mode hollow-core fiber combiner. When the QCL output power is 642 mW, the laser power transmitted through the single-mode fiber is 438 mW, corresponding to a fiber coupling efficiency of 68%. In addition, we experimentally compare the coupling efficiency obtained with a point-source collimator and with an extended-source collimator. Using the extended-source collimator with a large numerical aperture, the fiber coupling efficiency is increased from 59% to 68%, as shown in Fig. 10. An infrared camera is used to observe the collimated QCL spot and the beam spot out of the single-mode fiber. In addition, the beam propagation quality factor M2 after the fiber coupling is calculated. After the fiber coupling, a symmetric Gaussian distribution is observed, and the beam quality is improved to 1.2, compared to the M2 in Table 7. On the basis of the single-channel optical fiber coupling experiment, the optical fiber combining experimental setup of a four-channel long-wave infrared QCL is built. When the total input power from the four QCLs is 2.27 W, the combined fiber power is 1.5 W, corresponding to a combining efficiency of 66%.
In order to evaluate the beam quality of the combined beam, a lens is used to focus the output light, and the intensity distribution of the output spot of the beam combiner is measured within twice the Rayleigh distance. The results are shown in Figs. 14 and 15. The propagation quality factors of the combined beam are calculated as MX2=2.67 and MY2=2.56, which indicates good beam quality.ConclusionsIn this paper, long-wave infrared QCL beam combining technology based on single-mode hollow-core fiber is studied. Considering the large emitting area and large divergence angle of the fundamental transverse mode long-wave infrared QCL, a QCL collimator with a large numerical aperture is used. During the design, the QCL is treated as an extended light source. To obtain the optimized collimation result, both surfaces of the collimator are aspheric. A 4-in-1 fiber combiner is fabricated using the AgI/Ag single-mode hollow-core fiber. The fiber has no end-face reflection loss and low transmission loss. The experimental results show that the single-mode fiber coupling efficiency is 68%. After the fiber coupling, the beam propagation quality factor M2 is 1.2. In addition, the power combining of four QCLs in the wavelength band of 7.6-7.8 μm is realized. When the input power is 2.27 W, the combined output power is 1.5 W, and the beam combining efficiency is 66%. The propagation quality factors of the combined beams are MX2=2.67 and MY2=2.56. The low-loss working band of the fiber combiner ranges from 7 to 15.5 μm. The output optical power can be further increased by increasing the number of QCLs in the beam combining, which provides an effective way to expand the output power and wavelength range in the long-wave infrared band.
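For reference, the coupling and combining efficiencies quoted above follow directly from the measured power ratios; the two-line check below simply reproduces that arithmetic.

```python
coupling_eff = 438 / 642      # single-mode fiber coupling:  ≈ 68.2%
combining_eff = 1.5 / 2.27    # four-channel beam combining: ≈ 66.1%
print(f"{coupling_eff:.1%}, {combining_eff:.1%}")
```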

    Apr. 25, 2024
  • Vol. 44 Issue 8 0814003 (2024)
  • Xiaohua Xia, Yusong Cao, Haoming Xiang, Shuhao Yuan, and Zhaokai Ge

    ObjectiveShape from focus is a passive three-dimensional reconstruction technology that restores three-dimensional topography from multi-focused image sequences of target objects. To improve the reconstruction accuracy of this technology in practical applications, the existing methods mostly remove image jitter noise, improve focus measure operator and evaluation window, and optimize data interpolation or fitting algorithms. Although these methods can improve the accuracy of shape from focus, the influence of imaging parameters on reconstruction accuracy is not considered, and the accuracy of shape from focus should be further improved. We explore the influence of imaging parameters on the accuracy of shape from focus of large-depth objects and then clarify the improvement measures of the imaging system when the reconstructive accuracy of shape from focus does not meet the requirements in practical applications. Finally, our study helps select imaging parameters in the application of shape from focus technology to obtain better reconstruction accuracy.MethodsBased on constructing the evaluation index of 3D reconstruction accuracy of shape from focus, we firstly analyze the influence degree of focal length, F-number, pixel size, and other parameters in the imaging system on the accuracy of shape from focus by the equal-level orthogonal experiment of a single index. Meanwhile, the primary and secondary orders of the influence of these imaging parameters on the accuracy of shape from focus are determined. Then, the influence of main and sub-main imaging parameters on the 3D reconstruction accuracy is analyzed emphatically by experiments, and the relationship between the optimal imaging parameters and the sampling interval of multi-focus images is revealed. Finally, considering that the change of imaging parameters affects the restoration accuracy of shape from focus by changing the depth of field of the system, it is necessary to explore the influence of imaging parameters on the restoration accuracy of shape from focus of large-depth objects via the depth of field. The experiments help establish the empirical formula between the sampling interval of multi-focus images and the optimal depth of field, providing a theoretical basis for setting imaging parameters of the system.Results and DiscussionsAccording to the orthogonal experiment results (Table 3), focal length and F-number are the main and sub-main parameters affecting the accuracy of shape from focus, the influence of pixel size is less than focal length and F-number, and the influence of blank column is the least, which means that there are no important parameters that have not been analyzed. In practical applications, adjusting the focal length and F-number can be realized by adjusting the zoom lens with variable apertures, and meanwhile adjusting the pixel size usually requires replacing the camera, which is costly and usually not considered. Thus, the pixel size is regarded as a non-main influencing parameter. Analyzing the influence of main and sub-main parameters on the accuracy of shape from focus shows that there is the best focal length (Table 4) and the best F-number (Table 5) for the highest reconstruction accuracy under a given multi-focus image sampling interval, and with the decreasing sampling interval, the best focal length increases (Fig. 3) and the best F-number reduces (Fig. 4). 
Considering that the change of imaging parameters affects the accuracy of shape from focus by changing the depth of field of the system, we establish an empirical formula between the sampling interval of multi-focus images and the optimal depth of field. The fitting accuracy of the empirical formula is 97.28% (Table 6), and the verification accuracy is 94.76% (Table 7), so the formula can be adopted to calculate the optimal depth of field. The optimal depth of field can significantly improve the accuracy of shape from focus (Table 9), which provides a new way of improving the accuracy of shape from focus for large-depth objects.ConclusionsThe primary and secondary orders of the influence of imaging parameters on the accuracy of shape from focus of large-depth objects are determined, covering focal length, F-number, and pixel size. The influences of the main and sub-main imaging parameters, namely focal length and F-number, are analyzed emphatically. The root mean square error of the object reconstruction results first decreases and then increases with increasing focal length or F-number for a given multi-focus image sampling interval, and there is an optimal focal length and F-number leading to the highest reconstruction accuracy. With decreasing sampling interval, the optimal focal length increases and the optimal F-number decreases. We consider that the change of imaging parameters affects the accuracy of shape from focus by changing the depth of field of the system, and an empirical formula between the optimal depth of field and the sampling interval of multi-focus images is obtained from the experiments. The accuracy of the empirical formula on the verification data is 94.76%, and the formula can be employed to calculate the optimal depth of field. Our experiments show that adjusting the focal length and F-number of the imaging system according to the optimal depth of field can significantly improve the 3D reconstruction accuracy of large-depth objects.
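The link between focal length, F-number, and depth of field that this analysis relies on can be illustrated with the standard thin-lens estimate below. This is a generic geometric-optics approximation, not the paper's empirical formula, and the numerical values are arbitrary examples.

```python
def depth_of_field(f_mm, N, c_mm, u_mm):
    """Thin-lens depth of field for focus distance u, focal length f,
    F-number N, and circle of confusion c (all lengths in mm).
    A standard geometric-optics estimate, used only to illustrate how the
    focal length and F-number shift the depth of field being tuned."""
    H = f_mm ** 2 / (N * c_mm) + f_mm                    # hyperfocal distance
    near = u_mm * (H - f_mm) / (H + u_mm - 2 * f_mm)
    far = u_mm * (H - f_mm) / (H - u_mm) if u_mm < H else float("inf")
    return far - near

# Example: increasing the F-number widens the DoF at the same focus distance.
print(depth_of_field(f_mm=50.0, N=4.0, c_mm=0.005, u_mm=500.0))   # ≈ 3.6 mm
print(depth_of_field(f_mm=50.0, N=8.0, c_mm=0.005, u_mm=500.0))   # ≈ 7.2 mm
```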

    Apr. 25, 2024
  • Vol. 44 Issue 8 0815001 (2024)
  • Yaozu Yang, Feixiang Huang, Fengming Xie, Qiang Zhang, Guo Yuan, Yingyuan Hu, and Xin Zhao

    ObjectiveTo obtain novel and efficient thermally activated delayed fluorescence (TADF) materials, BPQPXZ and BPQTPA are synthesized using dibenzopyridoquinoxaline (BPQ) as the acceptor (A) and triphenylamine (TPA) and phenoxazine (PXZ) as the donors (D). The results show that the two materials have typical delayed fluorescence characteristics, a smaller energy gap (ΔEST) between singlet and triplet states, and a larger oscillator strength (f). The device based on BPQPXZ, which combines a strong acceptor and a strong donor, achieves deep-red emission with λEL at 660 nm. However, due to the influence of the energy-gap law, the external quantum efficiency (EQE) is only 1.0%. BPQTPA, which combines a strong acceptor and a weak donor, has a larger fluorescence quantum yield (82.7%) because the rigidity of TPA is weaker than that of PXZ. As a result, the donor and acceptor of BPQTPA have less distortion, more orbital overlap, and a larger f. At the same time, the intramolecular charge transfer effect of BPQTPA is weakened, and the electron-donating ability of TPA is weaker than that of PXZ. BPQTPA exhibits a blue-shifted emission compared with BPQPXZ. Therefore, the device based on BPQTPA exhibits yellow emission with λEL at 555 nm. Compared with BPQPXZ, the turn-on voltage of BPQTPA is reduced to 2.8 V; the maximum current efficiency and power efficiency are increased by 32-fold and 36-fold, respectively, and the EQE is increased by 6-fold to 7.0%.MethodsIn this study, BPQPXZ and BPQTPA materials are synthesized using the Suzuki reaction and the Buchwald-Hartwig reaction. The photophysical properties, electrochemical properties, thermal properties, and device performance of the two materials are investigated, and a comparative analysis is conducted on their luminescent properties.Results and DiscussionsThe structures of the two materials, BPQPXZ and BPQTPA, are verified by 1H nuclear magnetic resonance (1H NMR) spectroscopy and high-resolution mass spectrometry (HRMS). BPQPXZ exhibits deep-red emission with λPL at 655 nm, and BPQTPA exhibits yellow emission with λPL at 585 nm (Fig. 3). Compared with BPQPXZ, BPQTPA exhibits blue-shifted emission because of the weaker electron-donating ability of TPA than PXZ. Similarly, the rigidity of TPA is weaker than that of PXZ, resulting in a greater degree of overlap between the HOMO and LUMO of BPQTPA, a higher oscillator strength, and a larger fluorescence quantum yield (82.7%) for BPQTPA, which is consistent with the density functional theory simulation results (Fig. 2). As shown in the transient PL decay spectra (Fig. 4), delayed components are observed, which is typical TADF behavior. As shown in the cyclic voltammogram (Fig. 5), the HOMO levels of BPQTPA and BPQPXZ are -5.38 eV and -5.25 eV, respectively. The calculated LUMO levels are -3.00 eV and -3.27 eV for BPQTPA and BPQPXZ, respectively. BPQTPA shows better thermal stability with a higher decomposition temperature (Td, with 5% weight loss) of 492.6 ℃ than BPQPXZ (Td=439.2 ℃). The higher thermal stability of BPQTPA can be ascribed to its better planarity than that of BPQPXZ. The devices based on BPQTPA and BPQPXZ achieve good performance (Fig. 7); in particular, the device based on BPQTPA exhibits a much higher EQE (7.0%) than the device based on BPQPXZ (1.0%).ConclusionsIn this study, BPQTPA and BPQPXZ materials are designed and synthesized using BPQ with a highly rigid conjugated planar structure as an acceptor and TPA and PXZ as donors.
The results show that two materials have typical delayed fluorescence characteristics. BPQTPA and BPQPXZ achieve good orbital separation between HOMO and LUMO, as well as a certain degree of orbital overlap, resulting in a smaller ΔEST and a larger oscillator strength. The device based on BPQPXZ combined with a strong acceptor and a strong donor achieves deep-red emission with λEL at 660 nm. However, due to the influence of energy-gap law, non-radiative decay is serious, with an EQE of only 1.0%, as well as low current and power efficiency. The device based on BPQTPA combined with a strong acceptor and a weak donor is less rigid than that based on BPQPXZ, making the degree of donor and acceptor distortion of BPQTPA less than BPQPXZ, and the degree of overlap between HOMO and LUMO orbitals of BPQTPA increases, so oscillator strength of BPQTPA is 2.2 times that of BPQPXZ. As a result, BPQTPA has a higher PLQY (82.7%). Meanwhile, due to the much weaker electron-donating ability of TPA than PXZ, the intramolecular charge transfer effect of BPQTPA is weakened, resulting in a significant blue-shift in both photoluminescence and electroluminescence peaks. The device based on BPQTPA exhibits yellow emission with λEL at 555 nm. Compared with BPQPXZ, the turn-on voltage of the device based on BPQTPA is reduced to 2.8 V, and the current efficiency and power efficiency are significantly improved by 32-fold and 36-fold, respectively. The EQE is increased by 6-fold to 7.0%. In particular, we investigate the effects of reasonable combinations of donor and acceptor on the photophysical and electroluminescent properties of materials through structure-activity relationships, and the study is of certain reference significance for the research on long-wavelength TADF materials.

    Apr. 25, 2024
  • Vol. 44 Issue 8 0816001 (2024)
  • Chunyu Yu, Mingrui Liu, and Ningning Sun

    ObjectiveBreast cancer ranks first among female malignant tumors and seriously threatens the life and health of women. However, early diagnosis and treatment can effectively prolong the life of patients. Digital breast tomosynthesis (DBT) is a new three-dimensional imaging technology employed for breast disease diagnosis; it scans within a small angle range and reconstructs breast tomography images by collecting a few low-dose projections at equal angle intervals. Compared with computed tomography (CT), it is more suitable for imaging body parts such as the breasts that are not easy to scan at large angles, and it features low-dose and low-cost imaging. Hologic Selenia Dimensions was the first DBT product certified by the Food and Drug Administration (FDA) in 2011, followed by DBT products from several companies such as GE, Siemens, and Fujifilm. The reconstruction method of DBT plays a vital role in its imaging quality, and the main methods currently include shift and add (SAA) reconstruction as well as analytic reconstruction (AR) and iterative reconstruction (IR) methods derived from CT. Among them, SAA calculates the mean of the multi-angle projections based on the displacement shift to enhance the information of the focusing plane and weaken the information of the non-focusing planes. However, it is rarely utilized due to the severe out-of-plane interference in the reconstruction slice. Filtered back projection (FBP) is a representative method of the AR class, which makes image details clearer by projection filtering. In particular, its fast reconstruction speed and stable numerical values make it suitable for medical diagnosis, so it is currently selected as a commercial method. However, FBP can cause serious artifacts and noise in limited-angle scanning DBT, which is unfavorable for breast disease diagnosis. The maximum likelihood expectation maximization (MLEM) method is considered the best reconstruction method in the IR class, providing a good balance between the high- and low-frequency parts of the image. However, the IR method has a longer running time and is difficult to apply in clinical practice before the reconstruction speed is improved. Therefore, we seek a DBT reconstruction method that can reduce reconstruction artifacts and improve reconstruction speed. The multi-angle projections are divided into multiple observation vectors, and blind source separation (BSS) is adopted to extract the focusing information for reconstructing the focusing plane.MethodsWe propose to adopt BSS to separate the focusing information of any plane from the multi-angle projections. First, multi-angle projections are collected by a DBT imaging machine, and logarithmic transformation is performed on these projections. Then, based on the central projection, the multi-angle projections are focused on the reconstruction slice at depth z via displacement according to the imaging geometry. Finally, the multi-angle projections after displacement are regarded as a group of linear combinations composed of the focusing information and various kinds of out-of-plane information. Meanwhile, by selecting weight-adjusted second order blind identification (WASOBI), which is efficient in separating observation signals with temporal structures, the focusing plane information is extracted from the multi-angle projections, and external interference, such as noise and artifacts, is separated.
By shifting the multi-angle projections to any depth z, all slices within the thickness range are reconstructed.Results and DiscussionsThe focusing information is separated using BSS to quickly reconstruct any slice within the breast thickness range. Taking the central projection as a reference, SAA, FBP, MLEM, and the proposed method are compared. The four methods reduce the noise of the original by 13.4%, 18.8%, 88.5%, and 73.6%, and reduce the image contrast (IC) by 83.7%, 81.4%, 74.6%, and 10.7%, respectively. The feature similarity index measure (FSIM) between the reconstruction slice and the central projection is 0.841, 0.866, 0.861, and 0.886, respectively, and the structural similarity index measure (SSIM) is 0.596, 0.594, 0.628, and 0.787, respectively. Additionally, the mean value (MV) of artifact diffusion is 0.571, 0.254, 0.189, and 0.146, respectively. The reconstruction speed of the proposed method is lower than that of SAA and FBP, but it is 56.0% higher than that of MLEM with two iterations. The reconstruction method BSFP is based on BSS; it regards the obtained multi-angle projections as a linear combination of the information within the focusing plane and several kinds of information outside the slice at depth z. Then, the focusing information is separated using WASOBI, which is sensitive to temporally structured observation signals in BSS, to reconstruct the focusing plane. A comparison with the three DBT reconstruction methods SAA, MLEM, and FBP shows that BSFP has less residual out-of-plane information, such as artifacts, in the reconstruction slice. This is because BSS has a strong separation and filtering effect on out-of-plane interference while separating the focusing information, which leads to a stronger sense of hierarchy and clearer details in the reconstruction slice. Due to its filtering processing, FBP has higher clarity in its reconstruction slice compared with SAA and MLEM. SAA is equivalent to a simple back projection (BP) method without filtering; if filtering were added during the SAA reconstruction, its results would be similar to those of FBP, and if filtering were added during the MLEM reconstruction, its contrast would also be improved. Small metal balls with simple structures are taken as the object to study the artifacts in reconstruction. However, when the object shape is complex, complicated flaky artifacts will be formed, and the artifacts in the SAA, MLEM, and FBP reconstruction slices are more likely to connect into flakes, which can cause severe image blurring. Therefore, to further eliminate external interference in the BSFP reconstruction slice, effective measures can be adopted, such as more effective filtering before reconstruction, setting multi-projection weights based on the imaging geometry, correcting the displacement shift formula in the three-dimensional direction based on the cone-beam imaging geometry, and taking into account the small swing angle of the DBT detector.ConclusionsOur DBT reconstruction method BSFP reduces the noise of the original image by 73.6% and improves the contrast-to-noise ratio (CNR) by 137.2%. Meanwhile, its reconstruction speed is lower than that of SAA and FBP but 56.0% higher than that of MLEM with two iterations. This method features sound performance in image noise reduction, detail preservation, artifact suppression, and reconstruction speed, and its separation and reconstruction performance can be continuously improved with the rapid development of BSS theory and computer hardware.
Therefore, it is a practical and promising DBT reconstruction method. Since the separation accuracy of the focusing information depends on the BSS establishment, the operational efficiency of BSFP depends on the selection and optimization of the BSS method. Additionally, the operational speed of BSFP heavily depends on the hardware environment. Therefore, windowing operations, method optimization, code simplification, and utilization of graphics processing unit (GPU) can all improve the BSFP performance.
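For context, the shift-and-add (SAA) baseline that BSFP builds on, shifting each projection so that the plane at depth z comes into focus and then combining across angles, can be sketched as below. The parallel-shift geometry, pixel-level rounding, and the simple averaging step are simplifying assumptions; BSFP replaces the averaging with WASOBI-based blind source separation.

```python
import numpy as np

def shift_and_add(projections, angles_deg, z, pixel_pitch):
    """Focus a DBT slice at height z above the detector by shift-and-add.

    projections : list of 2D arrays, one per gantry angle
    angles_deg  : gantry angles corresponding to the projections
    z, pixel_pitch : slice height and detector pixel size (same units)

    Simplified parallel-shift model: each projection is shifted along the
    scan direction by z*tan(angle) and the shifted stack is averaged, which
    reinforces the plane at depth z and blurs out-of-plane structures.
    """
    out = np.zeros_like(projections[0], dtype=float)
    for proj, ang in zip(projections, angles_deg):
        shift_px = int(round(z * np.tan(np.deg2rad(ang)) / pixel_pitch))
        out += np.roll(proj, shift_px, axis=1)
    return out / len(projections)
```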

    Apr. 25, 2024
  • Vol. 44 Issue 8 0817001 (2024)
  • Yihan Sun, Shenjiang Wu, Bo Wang, Xiaowei Chen, and Yiming Zhang

    ObjectiveHead-up display (HUD) systems reduce driver visual fatigue and improve driving safety. Existing HUDs only alleviate visual fatigue by reducing the number of times a driver looks at the dashboard. However, a conventional augmented reality head-up display (AR-HUD) system can usually only image at a fixed projection distance. When drivers approach an object, the image is perceived to pass through the object. This causes drivers to constantly distinguish the distance between the object and the virtual image, which leads to visual fatigue. Therefore, there is a need for a new type of HUD with a variable projection distance. Existing designs use an off-axis three-mirror system and vary the projection distance by adjusting the position of the first mirror. However, this changes the down angle and also requires a picture generation unit (PGU) with a variable light output angle, which increases the cost and design difficulty. It is necessary to study another way to realize a variable projection distance.MethodsUnlike traditional coaxial zoom optics, changing the focal length is not the only way to vary the projection distance of an HUD; changing the object distance and size can also realize it. We use a changing magnification to realize the variable projection distance and only change the position and size of the PGU. As with looking in a mirror, the larger the object distance, the farther away the virtual image. Furthermore, the distance between the PGU and the first mirror decreases as the projection distance decreases. Since the PGU does not emit parallel light, the closer the PGU is to the reflector, the smaller the image reflected by the mirror. As the projection distance decreases, the angle subtended by the virtual image at the human eyes decreases. This design does not cause the down angle to change, and there is no need to use a PGU with a variable light output angle. The off-axis three-mirror system generally determines the initial structure by calculating the curvatures and optical spacings of the corresponding coaxial three-mirror system; then we adjust the off-axis angle and optimize the free-form mirror. It is complicated to obtain the different position parameters of the PGU through theoretical calculations, and the theoretical data also need to be modified in combination with simulation, which increases the workload. We therefore design a “macro” to obtain the relevant parameters directly through simulation and improve efficiency. The optimization is based on the projection distance of 10 m to optimize the free-form surface and determine the off-axis angle. The change step of the projection distance is 50 mm, and the change step of the image source size is 0.01 mm. We use random sampling to verify the function of continuously variable projection distance of the virtual image. The image quality evaluation includes transverse vertical (TV) distortion, rhombic distortion, trapezoidal distortion, grid distortion, dynamic distortion, binocular parallax, image tilt, aspect ratio distortion, MTF, spot diagrams, and field of view distortion. We evaluate the image quality in detail to better assess the imaging situation after actual manufacturing and to guide suppliers in improving their products. The above information is presented in tables and simulation diagrams.Results and DiscussionsWe use the “macro” to obtain the parameters related to the continuous variation of the projection distance.
The image quality is analyzed when the projection distance is 10 m and 3 m respectively, and random sampling is used to evaluate the image quality at arbitrary projection distances within the changing range. The RMS radii of the spot diagrams in the three cases are all within the Airy spot and less than 25 μm. In addition, the grid distortion, TV distortion, rhombic distortion, and trapezoidal distortion are all less than 5%. Dynamic distortion and binocular parallax are less than 5'. The field of view distortion and aspect ratio distortion are less than 5%. Interchanging the parameters of projection distances 50 mm apart does not result in a dramatic change in image quality. Within the range of the variable projection distance, the image quality meets the design requirements, the designed projection distance can be varied continuously, and the chosen change step of the projection distance is reasonable.ConclusionsAn AR-HUD with a variable projection distance can further reduce the driver's visual fatigue and improve driving safety. The means of equipment and software correction can affect the criteria for assessing image quality, but the design direction is the same. The design value of each distortion parameter should be as small as possible when conditions allow, which gives the actual manufacturing a sufficient tolerance range. Within the space constraints of the car, we need to trade off dynamic distortion and grid distortion; under limited conditions, it is not possible to optimize both distortions at the same time. In general, the focus should be on controlling dynamic distortion rather than grid distortion, since appropriate distortion can be corrected by software. Compared with several design approaches, this design does not pose the risk of down-angle changes. The method of this design is applicable to all kinds of HUD systems based on the off-axis three-mirror system. In addition, we give the optical detection methods for reference. Although the design is demonstrated on a specific HUD architecture, the method of realizing variable projection distances is general in nature.
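The underlying idea, moving the PGU to change the virtual image distance while the magnification changes with it, can be illustrated with a single ideal concave mirror, which is only a toy stand-in for the off-axis three-mirror system. The focal length and object distances below are arbitrary assumptions.

```python
def virtual_image(f_mm, d_obj_mm):
    """Ideal-mirror relation 1/d_obj + 1/d_img = 1/f (real-is-positive
    convention). For d_obj < f a concave mirror forms an enlarged virtual
    image behind the mirror (d_img < 0), whose distance grows as the object
    moves toward the focal point. Single-mirror toy model only, not the
    actual off-axis three-mirror AR-HUD design."""
    d_img = 1.0 / (1.0 / f_mm - 1.0 / d_obj_mm)
    mag = -d_img / d_obj_mm
    return d_img, mag

for d_obj in (200.0, 230.0, 245.0):          # PGU moved toward the focal point
    d_img, m = virtual_image(f_mm=250.0, d_obj_mm=d_obj)
    print(f"d_obj={d_obj:.0f} mm -> virtual image at {-d_img/1000:.1f} m, magnification {m:.1f}x")
```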

    Apr. 25, 2024
  • Vol. 44 Issue 8 0822001 (2024)
  • Heming Wei, Wenchen Hu, and Fufei Pang

    ObjectiveWith the widespread utilization of light-sensing systems and imaging systems, miniaturized and lightweight optical systems have become increasingly popular in the automotive market, industry, and medical and consumer electronics. The development of small-volume optical lenses has become crucial. Traditional optical lenses usually have large volume, low focusing efficiency, large full-width half-maximum of the spot, and poor performance in lenses with high numerical aperture (NA). The optical metasurface with sub-wavelength structures has powerful control over the light phase. Compared to traditional lenses, metalenses feature smaller volumes, thinner thickness, and better focusing performance. In the metalens design, the inverse design method has less computational complexity than the traditional design method, and meanwhile this method can provide an optimal solution for the device in a larger searchable space and improve the design efficiency. We propose an objective-first algorithm-based inverse design approach to design a low refractive index-based metalens structure. At a working wavelength of 1550 nm, the metalens has a thickness of 3.2 μm in the propagation direction, a focusing efficiency of 72%, and a high NA of 0.82. Compared with the traditional design method, this approach has low computational complexity and high efficiency. The designed devices can be rapidly manufactured by the high-precision micro-nano printing technique. Considering possible errors during the metalens fabrication, the effects of metalens contour offset and 3D rotation operations on the designed 2D metalens are further discussed.MethodsIn the objective-first algorithm, we define a simulation design area on a two-dimensional plane, and the device function can be determined by giving the incident and exit conditions on the design area. During the metalens design, we require that the device can convert the incident parallel wavefront into a spherical wavefront during exiting. After determining the phase distribution of the input and output, we iteratively update the design area by the objective-first inverse design method. This method employs the norm of Maxwell's equations as the objective function of the optimization algorithm, and the value of this objective function is called the physical residual. During the optimization iteration, we interpolate the dielectric constant, allowing it to continuously change within the design area. The advantage is that the algorithm has a larger searchable space. Meanwhile, we achieve rapid transformation between continuous and binary structures by adding a penalty function.Results and DiscussionsThe materials that make up the lens are air with a dielectric constant of 1 and a low-refractive index polymer material with a dielectric constant of 1.52. The focal length of the metalens is set to 11.3λ, the width of the device along the propagation direction is 2.1λ, and the length is 32.2λ. The grid in the design area is a square with a side length of 0.065λ, and the corresponding NA is 0.82. By utilizing the scalability of Maxwell's equations, we therefore scale the lens to a wavelength of 1550 nm. In theory, the metalens optimized by this method can be scaled as needed to meet the focusing requirements of different wavebands. At the operating wavelength of 1550 nm, the focal length of the metalens is 17.5 μm, the width is 3.2 μm, and the length is 50 μm. The grid in the design area is a square with a side length of 100 nm, and the focusing efficiency is 72%. 
The 3 dB bandwidth is calculated as 1447 to 1667 nm, and the full width at half-maximum of the focal spot is 0.9 μm, slightly below the 0.96 μm limit imposed by diffraction. Within an offset range of ±50 nm of the metalens profile, the focusing efficiency remains above 60%, so the focusing performance of the lens is essentially unchanged within an offset range of 50 nm. When the lens profile is shifted by ±100 nm, the focusing efficiency drops to around 50%, and the focusing performance of the lens starts to decline significantly. A metalens with a negative profile shift exhibits a shorter focal length, while a metalens with a positive profile shift exhibits a longer focal length.ConclusionsTo address the problems associated with traditional lenses, such as their large volume, low NA, and insufficient focusing efficiency, we focus on optimizing the structure of polymer lenses with a low refractive index. This is achieved by adopting the objective-first inverse design method and exploiting the focusing characteristics of metalenses. The goal is to design a metalens structure featuring high NA and focusing efficiency, with the limitations imposed by optical diffraction considered. Additionally, the objective-first inverse design method is employed to relax the restrictions of Maxwell's equations and utilize them as the objective function. By breaking down the objective function into two sub-problems, the optimization process can be carried out efficiently without getting stuck in low-efficiency local solutions. Additionally, structural variables are limited to ensure that the finalized structure obtained via interpolation during the optimization is binary in nature and highly efficient. Meanwhile, we discuss the potential contour deviation during the micro-nano printing preparation of the metalens due to manufacturing tolerances and analyze the performance changes of the metalens within a profile deviation range of 100 nm. The results demonstrate the robustness of our optimization approach to manufacturing tolerances, with the metalens showing relatively sound focusing performance even with small deviations from the desired profile. Furthermore, we conduct three-dimensional FDTD simulations by rotating the metalens, which reveals that the metalens exhibits polarization-independent characteristics and achieves a focusing efficiency of 62% under circularly polarized incident light.
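The design target mentioned above, converting a plane wave into a wavefront converging to the focus, corresponds to the standard ideal-lens phase profile. The sketch below evaluates that target phase and checks the numerical aperture implied by the abstract's dimensions; it illustrates the design target only, not the objective-first optimizer itself.

```python
import numpy as np

# Target output for a focusing design: the exit phase that turns a plane wave
# into a wavefront converging at distance f. Dimensions follow the abstract
# (λ = 1550 nm, f = 17.5 μm, 50 μm aperture).
lam = 1.55e-6          # m
f = 17.5e-6            # m, focal length
x = np.linspace(-25e-6, 25e-6, 501)                      # aperture coordinate

phase = -2 * np.pi / lam * (np.sqrt(x**2 + f**2) - f)    # ideal lens phase
target_field = np.exp(1j * phase)                        # unit-amplitude target

NA = np.sin(np.arctan(x.max() / f))
print(f"NA ≈ {NA:.2f}")   # ≈ 0.82, matching the design value
```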

    Apr. 25, 2024
  • Vol. 44 Issue 8 0822002 (2024)
  • Xin Zhang, Huazhong Xiang, Lefei Ma, Zexi Zheng, Jiabi Chen, Cheng Wang, Dawei Zhang, and Songlin Zhuang

    ObjectiveA freeform progressive addition lens (PAL) is an optical lens composed of different optical powers, with a curvature that is not constant. It can achieve smooth focusing within a range of focal distances, from distant to near, providing a more natural adjustment for users. This type of lens fully meets both physiological and psychological needs, making it increasingly favored by the middle-aged and presbyopic population. The design of the meridional power distribution plays a crucial role in the astigmatic distribution, the lens's distance and near vision areas, the width of the corridor, and the astigmatic gradient, all of which are essential for the wearer's comfort. The channel width of the lens is closely related to the design of meridional power, contour line distribution, and sagittal height surface profile. Lenses with a wide channel design exhibit lower image distortion, chromatic aberration, and spherical aberration. Moreover, they also feature smoother transition zones, reducing the adaptation period, and offering a more accurate and natural visual experience while minimizing eye fatigue and dizziness. The current design of meridional power does not adequately consider its impact on the overall channel width, and there is a lack of efforts to broaden the channel width. This results in lenses having a relatively high level of astigmatism within a single pupil size. Therefore, we propose a new meridional power distribution based on cumulative distribution functions and analyze the impact of the function's curvature on the width of the image dispersion center. Additionally, the overall sagittal surface shape overlay and the reduction of meridional power distribution curvature are employed to increase the channel width of progressive addition lenses.MethodsTo widen the progressive channel of progressive addition lenses, we propose a novel approach. Firstly, a method utilizing the cumulative distribution function is introduced for designing the meridional power distribution, and a comparative analysis is conducted with the commonly used eighth-order polynomial function and trigonometric function. Subsequently, we superimpose sagittal height surface profiles calculated from different channel lengths of meridional power and contour lines to achieve the optimization of channel width, facilitating a smooth transition in gradient changes. Then, we superimpose two functions to derive a new function, thereby altering the curvature of the meridional power function to optimize the channel width. Finally, we fabricate and evaluate three sets of lenses using a freeform machining tool and measurement instruments to analyze the impact of this optimization method on the meridional power, astigmatic distribution, astigmatic gradient, and other optical performance aspects of progressive addition lenses.Results and DiscussionsThe proposed meridional power distribution based on the cumulative distribution function is feasible. Compared to the eighth-order polynomial function, the curvature values of the cumulative distribution function decrease, leading to an increased width of astigmatism. However, the channel width is smaller than that of the trigonometric function (Fig. 3). Sagittal height surface profiles are calculated by cumulative distribution functions with different channel lengths and two types of contour line distributions, weighted and superimposed, resulting in a new surface profile (Fig. 4).
This significantly widens the progressive channel width, achieves a smooth transition in astigmatic gradient changes, and results in maximum astigmatism distributed on both sides of the nasal area of the lens (Fig. 11). From the perspective of meridional power, a linear combination of two functions forms a new function (Fig. 5), achieving a smooth transition in the central meridional power, widening the progressive channel width, reducing the rate of focal power change, and minimizing peripheral maximum astigmatism (Fig. 12, Table 4). The machining results align closely with simulation results, demonstrating that this optimization method effectively achieves the optical performance enhancement of freeform progressive addition lenses.ConclusionsWe propose a new meridional power distribution based on the cumulative distribution function. We conduct a comparative analysis to assess the impact of this function, as opposed to an eighth-order polynomial function and a trigonometric function, on the absolute curvature values affecting the width of intermediate astigmatism. To address the issue of narrow channel width, we employ different channel lengths of the cumulative distribution function and two types of contour line distributions for sagittal height surface profile calculations. Weight values are assigned for weighted superimposition, leading to the creation of a new surface profile and a significant widening of the channel width. Additionally, from the perspective of meridional power, a linear combination of two functions is employed to form a new function, facilitating a smooth transition in the central meridional power and reducing the rate of focal power change. Finally, we conduct optical simulations to analyze lens focal power and manufacture and quality-test three designed lenses using a freeform machining tool to validate the accuracy of experimental results. Building on our findings, future research can further focus on optimizing the design methods for meridional power, aiming to discover more effective mathematical functions or technical approaches to enhance overall lens width, thus achieving superior optical performance in the design of progressive addition lenses.
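As a rough illustration of the kind of meridional power law discussed above, the sketch below builds a power profile from a normal cumulative distribution function and evaluates its gradient and curvature along the meridian, the quantities that by a Minkwitz-type argument govern how quickly astigmatism grows beside the corridor. The addition, corridor length, and spread parameter are illustrative values, not the paper's design data.

```python
import numpy as np
from math import erf

# Minimal sketch (illustrative parameters, not the authors' design values):
# meridional power that rises from the distance power to distance power + Add
# following a cumulative distribution function (here the normal CDF).
P_dist = 0.0       # distance-zone power / D
add = 2.0          # addition / D
corridor = 14.0    # corridor length / mm
s = corridor / 6.0 # spread parameter of the CDF / mm
y = np.linspace(-20.0, 20.0, 801)   # position along the meridian / mm
                                    # y > 0: distance zone, y < 0: near zone

cdf = 0.5 * (1.0 + np.vectorize(erf)((0.0 - y) / (s * np.sqrt(2.0))))
P = P_dist + add * cdf              # meridional power profile

# First and second derivatives of the power profile: a smaller power gradient
# and curvature along the meridian permit a wider low-astigmatism corridor.
dP = np.gradient(P, y)
d2P = np.gradient(dP, y)
print(f"max power gradient : {np.max(np.abs(dP)):.3f} D/mm")
print(f"max power curvature: {np.max(np.abs(d2P)):.3f} D/mm^2")
```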

    Apr. 25, 2024
  • Vol. 44 Issue 8 0822003 (2024)
  • Lizhi Zhang, Qiuping Lu, Fanlin Duan, Xing Dai, and Dayong Qiao

    ObjectiveThe actual working environment of vehicle-mounted LiDAR is complex, including prolonged seasonal temperature extremes and rapid changes between indoor and outdoor high and low temperatures. These temperature variations can change the internal optics and structure of the lens, which results in image plane drift and reduced imaging quality. For telephoto lenses, the image plane drift becomes more obvious as the ambient temperature changes. Current passive athermalization design methods suffer from complex structures and large volumes caused by multi-layer lens barrels, increased production costs introduced by diffractive elements and aspheric lenses, and narrow athermalized temperature ranges, which makes them difficult to adapt to practical applications in the complex environment of vehicle-mounted LiDAR. Therefore, it is necessary to reduce the image plane temperature drift and improve the environmental adaptability of telephoto lenses with an advanced athermal design method.MethodsThe telephoto lens, which can accurately capture distant targets and magnify their details, is integrated with a line array detector to improve the resolution. Meanwhile, based on the fact that the total length of the optical system of the telephoto structure is smaller than the focal length, the volume of the telephoto receiving optical system is compressed to a certain extent to meet the requirements of light weight, miniaturization, and high resolution for vehicle-mounted LiDAR. To address the susceptibility of the telephoto lens to temperature, we improve the two-group compensation design method of passive optical-mechanical athermalization so that the focal length change of the optical parts offsets, as far as possible, the thermal defocus brought about by thermal expansion and contraction of the structural parts, thereby reducing the image plane drift and realizing the athermalization of the telephoto lens. Finally, the image plane drift of the as-designed lens is less than the depth of focus over a wide temperature variation range from -40 to 100 ℃. This is conducive to ensuring the imaging quality of the lens, and the designed structure has a simple preparation process and is easy to engineer and produce.Results and DiscussionsDifferent combinations of optical materials and optical focal length distributions are determined, structural components with different thermal expansion coefficients (TCEs) are matched, and the thermal differences of the optical and structural parts compensate each other, achieving the athermalized system design. When the thermal expansion and contraction of the barrel holder is not taken into account, the focal shift of the lens with temperature change is always minimized, and the image plane drift is 0.075 mm when the temperature increases to 100 ℃ (Fig. 7). The thermal expansion and contraction of the barrel holder is then used to compensate for the thermal aberration so that the sensor detecting surface always stays at the image plane. Over the wide temperature range from -40 to 100 ℃, the receiving optical system built from the selected optical and structural materials shows almost no significant focal shift as the temperature changes; even at 100 ℃, the focal shift is only 0.021 mm, smaller than its depth of focus at room temperature (0.074 mm), and the field curvature and distortion of this optical system change little (Fig. 8).
The MTF at 30 lp/mm is larger than 0.5 for each field of view (FOV), and the focal plane shifts are small, which indicates that the designed lens can maintain sound image quality over the wide temperature range from -40 to 100 ℃ (Figs. 9 and 10). The diffuse spot radius in the full FOV is smaller than 7 μm, which reveals that the focal shift of the lens is little affected by temperature (Fig. 11). The results of photographing vehicles traveling on the road show clear imaging of the vehicles and obvious feature areas such as the outer contours of the vehicles (Fig. 14). These results prove that the athermalization design, which compensates the thermal differences of the system, guarantees the imaging quality and temperature adaptability of the lens.ConclusionsWe employ the telephoto lens with a long focal length and small FOV to subdivide the scanning area and integrate a line array detector to achieve an image-level imaging effect. Based on the characteristic that the total length of the optical system of the telephoto structure is smaller than the focal length, a receiving optical system with a telephoto ratio of 0.38 is designed, which has a smaller lens length and lower cost and meets the requirements of vehicle-mounted LiDAR in terms of high resolution, light weight, and small size. Given the large temperature variations in the working environment of vehicle-mounted LiDAR and the image plane drift of the telephoto lens, passive optical-mechanical athermalization is implemented to determine a reasonable combination of multiple spherical glass lenses and structural components. Finally, a four-piece telephoto lens optical system with a simple structure is designed, whose focal shift of 0.021 mm remains smaller than the depth of focus of 0.074 mm over the wide temperature range from -40 to 100 ℃. The MTF of each FOV at 30 lp/mm is larger than 0.5, and the diffuse spot radius in the full FOV is smaller than 7 μm. The vehicle imaging is clear, with the outer contours of the vehicles and other feature areas clearly visible, which demonstrates that athermalization is achieved and the lens has favorable environmental adaptability.
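The bookkeeping behind such a passive athermalization trade-off can be sketched with a thin-lens model: the temperature-induced focal shift of the optics should track the expansion of the structural parts so that the detector stays within the depth of focus. All coefficients below are illustrative placeholders rather than the paper's glass or housing data; only the 0.074 mm depth of focus is taken from the abstract.

```python
# Minimal sketch of passive athermalization bookkeeping (thin-lens model,
# purely illustrative placeholder coefficients, not the paper's design data).

f = 100.0                 # focal length of the receiving lens / mm (illustrative)
dn_dT = 3.0e-6            # glass thermo-optic coefficient / K^-1 (placeholder)
n = 1.6                   # refractive index (placeholder)
alpha_glass = 7.0e-6      # glass expansion coefficient / K^-1 (placeholder)
alpha_housing = 2.0e-6    # low-CTE structural material / K^-1 (placeholder)
dT = 80.0                 # temperature rise from room temperature / K
depth_of_focus = 0.074    # depth of focus at room temperature / mm (from the paper)

# Thin-lens relation: df/dT = -f * (dn/dT / (n - 1) - alpha_glass)
df = -f * (dn_dT / (n - 1.0) - alpha_glass) * dT   # focal shift of the optics
dL = f * alpha_housing * dT                        # growth of lens-to-sensor spacing
defocus = df - dL                                  # residual image-plane drift

print(f"optics focal shift : {df:+.3f} mm")
print(f"housing expansion  : {dL:+.3f} mm")
print(f"residual defocus   : {defocus:+.3f} mm "
      f"({'within' if abs(defocus) <= depth_of_focus else 'outside'} depth of focus)")
```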

    Apr. 25, 2024
  • Vol. 44 Issue 8 0822004 (2024)
  • Lu Jie, Haisu Li, Yajing Liu, Jianshuai Wang, Guobin Ren, and Li Pei

    ObjectiveTerahertz waves featuring broad bandwidths play an increasingly important role in next-generation communication systems. For both terahertz wired and wireless communications, terahertz waveguide integrated devices providing "on-line" signal processing functionalities are in vital demand. In the next-generation communication system, multiplexing technology built around the physical parameters of the electromagnetic wave, such as terahertz radiation polarization, frequency, and phase, is an effective solution for enhancing spectrum efficiency. Additionally, the future Internet of Everything information system should have a real-time monitoring function for communication environment parameters (temperature, humidity, etc.). Given the above technical requirements of multidimensional multiplexing and environmental sensing, novel terahertz functional integrated devices for high-speed information transmission-manipulation-perception fusion should be studied urgently. Several terahertz silicon-based waveguide devices have been demonstrated, but the planar structure restricts the spatial degree of freedom of the devices, which means the devices can only be integrated in a 2D plane. Terahertz fiber-based devices are substrate-free and provide terahertz wave routing abilities along any spatial direction. Nevertheless, most of the reported terahertz fiber devices offer only a single functionality, leaving significant scope for multi-device integration. Thus, we propose a polarization-maintaining subwavelength fiber-based multidimensional multiplexing and sensing integrated device which is composed of fiber bends, a 50/50 Y-splitter, directional couplers, and Bragg gratings. The proposed fiber device provides (de)multiplexing in an additional direction that is orthogonal to the 2D space, in contrast to the planar devices. Meanwhile, the device integrates multiple functionalities, including frequency- and polarization-(de)multiplexing, dispersion compensation, and surrounding refractive index sensing. In a nutshell, the integrated device provides exciting perspectives for boosting transmission capacity and developing communication-sensing integration for next-generation communication systems.MethodsThe finite element analysis method and the finite-difference time-domain method are employed in our study. First, the finite element analysis method is adopted to calculate the transmission parameters (fractional power in the core, loss, group velocity dispersion, etc.) of terahertz subwavelength fibers with different cross-sectional parameters to design polarization-maintaining fibers supporting low-loss and low-dispersion transmission. Then, S-shaped and 90° bending fibers are designed using Bézier curves. Furthermore, the finite-difference time-domain method is utilized to analyze the transmission characteristics of the S-shaped and 90° bending fibers. Then, two S-shaped bending fibers with the same bending radius are utilized to form a Y-splitter. In the next step, we leverage the supermode theory as a theoretical guide and the finite element analysis method as a tool to calculate the required coupling length of the directional couplers. Additionally, the finite element analysis method is employed to investigate the effects of grating cell cross-section parameters, thickness, and number of cycles on grating performance (e.g., transmission and dispersion compensation), and thus accomplish the structural design of uniform gratings.
Finally, the performance of the proposed terahertz devices including directional couplers, Y-splitter, uniform grating, and phase-shifted grating is simulated and analyzed using the time-domain finite difference method.Results and DiscussionsWe present terahertz subwavelength rectangular fibers, bending fibers, Y-splitter, directional couplers, uniform grating, phase-shift grating, and multidimensional-multiplexing and refractive-index-sensing integrated devices. First, the subwavelength fiber supports low-loss (below 0.051 dB/mm) and high-birefringent (beyond 0.03) transmissions at a target bandwidth over 0.24-0.28 THz as shown in Fig. 2. Second, the transmission of S-shaped bending fibers with a bending radius of 10 mm is slightly higher than that of 90° bending fibers in the frequency range of 0.24-0.28 THz, whose transmission is higher than -1 dB at the operating frequency of 0.25 THz [Fig. 4(d)]. Thanks to the high transmission of x-bent S-shaped bends, a 50/50 Y-splitter can be readily designed using two bends with the same radius of 10 mm. Third, for the x-placed x-bent directional coupler, high transmission (above -3 dB) and high ER (above 7 dB) are obtained for both x-polarization and y-polarization modes when coupling lengths are in the range of 11.8-12.3 mm. Finally, the integrated device achieves simultaneously polarization and frequency (de)multiplexing, with high transmissions [drop port: -5.94 dB for 0.25 THz x-polarization with dispersion compensation; through port: -7.20 dB for 0.25 THz y-polarization, -2.02 dB for 0.27 THz x-polarization] and high ER (drop port: 15.16 dB; through port: 8.06 dB). Additionally, the device integrates fiber Bragg gratings, allowing both zero-GVD dispersion compensation and refractive-index sensing (sensitivity of 0.181 THz/RIU) abilities (Figs. 17 and 18).ConclusionsWe propose and analyze a terahertz multidimensional-multiplexing and sensing integrated device based on subwavelength birefringent fibers, composed of fiber bends, 50/50 Y-splitter, directional couplers, and Bragg gratings. First, a directional coupler with high transmission and high extinction ratio is designed by introducing a bent fiber to achieve polarization and frequency (de)multiplexing functions. Second, the uniform grating and phase-shift grating in the integrated device enable dispersion compensation and environmental refractive index sensing respectively. Terahertz subwavelength fiber devices feature dense integration in 3D space and efficient transmission, which provides novel design solutions for the integrated terahertz wave transmission-manipulation-sensing information system. Meanwhile, we also envision that many more components such as modulators and imaging devices will be further integrated on the terahertz fiber platform.
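The supermode estimate used for sizing a directional coupler can be written in one line: the full power-transfer length is Lc = λ / [2(n_even − n_odd)]. The sketch below evaluates it at 0.25 THz with placeholder effective indices chosen only so that the result lands near the 11.8-12.3 mm range quoted above; the actual indices come from the finite element analysis.

```python
# Minimal sketch of the supermode estimate for a directional coupler:
# Lc = lambda / (2 * (n_even - n_odd)). Effective indices are illustrative
# placeholders, not the values computed in the paper.

c = 299792458.0                 # speed of light / m s^-1
f_thz = 0.25e12                 # operating frequency / Hz
wavelength = c / f_thz          # ~1.2 mm at 0.25 THz

n_even = 1.205                  # even supermode effective index (placeholder)
n_odd = 1.155                   # odd supermode effective index (placeholder)

Lc = wavelength / (2.0 * (n_even - n_odd))
print(f"free-space wavelength : {wavelength*1e3:.3f} mm")
print(f"coupling length Lc    : {Lc*1e3:.2f} mm")
```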

    Apr. 25, 2024
  • Vol. 44 Issue 8 0823001 (2024)
  • Meimei Kong, Yuan Dong, Chunsheng Xu, Yue Liu, Yinyan Xue, Mingyang Li, and Shuhan Zhang

    ObjectiveAs an important branch of micro-optics, microfluidic optics has become a key technology to promote the development of highly miniaturized and functional optics. Liquid lenses are a common form of microfluidic optical lenses, and as an important part of optical systems, they have obvious advantages over solid lenses, such as a reconfigurable geometry and a tunable refractive index. At present, researchers have explored various techniques and tuning mechanisms to make liquid lenses, including fluid pressure lenses, electromagnetic wave lenses, electrowetting lenses, and dielectrophoresis lenses. Unlike electrowetting-driven liquid lenses, dielectrophoresis-driven liquid lenses do not require a conducting liquid and do not suffer from problems such as evaporation or microbubbles. To correct aberrations in practical applications, it is necessary to design a liquid lens with an aspherical surface, which has the advantages of a simple structure and easy realization because of the use of continuous electrodes rather than patterned electrodes.MethodsBased on the dielectrophoresis effect, we design an aspherical combined liquid lens based on flat electrodes. A certain dielectric constant difference exists between the two immiscible liquids filling the cavity. When an external voltage is applied, the droplet with the high dielectric constant moves along the electric field direction and squeezes the droplet with the low dielectric constant, and the curvature radius of the liquid-liquid interface changes. By adjusting the voltage applied to the two indium tin oxide (ITO) conductive glass flats in the middle, the curvature radius of the liquid-liquid interface can be changed to adjust the focal length. First, a model of the aspherical combined liquid lens based on parallel flat electrodes under different voltages is built in COMSOL, and the surface profile data of the aspherical interface are obtained. Then, the aspherical surface profile data are fitted to the aspherical formula in MATLAB to obtain the corresponding aspherical coefficients. Finally, on this basis, the optical model of the aspherical combined liquid lens based on flat electrodes is built in Zemax, and the focal length of the aspherical combined liquid lens under different voltages is obtained.Results and DiscussionsFirst, we compare the aspherical combined liquid lens based on flat electrodes with an aspherical single liquid lens, which has the same liquid material and droplet volume as the aspherical combined liquid lens. The results show that the aspherical combined liquid lens has a smaller focal length and stronger focusing ability than the aspherical single liquid lens and is more suitable for camera lenses requiring a large depth of field (Fig. 5). To further study the characteristics of the aspherical combined liquid lens based on flat electrodes, COMSOL software is used to simulate the change of the interface profile under different degrees of parallelism. In the simulation, the lower flat is placed horizontally. When the upper and lower flats are not parallel, that is, when the upper flat is tilted at a certain angle to the horizontal, the electric field distribution in the liquid lens model is analyzed (Fig. 6).
The interface profile data obtained in COMSOL are exported, MATLAB is applied to fit the profile, and aspherical combined liquid lenses with different degrees of parallelism are compared and analyzed in Zemax. It is found that the focal length of the aspherical combined liquid lens is little affected when the flat electrode has a small inclination (1° to 4°) (Fig. 7).ConclusionsBased on the dielectrophoresis effect, an aspherical combined liquid lens based on flat electrodes is designed in this study. The liquid lens consists of four ITO conductive flat glass plates arranged in parallel above and below, together with cavities, dielectric layers, and hydrophobic layers. The focal length of the aspherical combined liquid lens under different voltages is calculated by using the relevant optical model, and the results show that the focal length of the aspherical combined liquid lens is smaller than that of the aspherical single liquid lens, and the imaging quality is better. The influence of the parallelism of the flat electrodes on the focal length of the aspherical combined liquid lens is also discussed. The aspherical combined liquid lens is prepared experimentally, and its focal length and imaging resolution are measured. When the operating voltage increases from 0 to 280 V, the focal length varies from 28.7135 mm to 20.1943 mm, which is basically consistent with the simulation. The feasibility of the lens structure is thus verified by experiments. The imaging resolution is up to 49.8244 lp/mm. The designed aspherical combined liquid lens based on flat electrodes can provide a new scheme for high-quality imaging with liquid lenses and can expand their range of applications.
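The profile-fitting step described above, in which the simulated interface is matched to the standard even-asphere sag formula, can be sketched as follows (here in Python with SciPy as a stand-in for the MATLAB fit, and with synthetic data replacing the COMSOL export; all parameter values are illustrative only).

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of fitting the even-asphere sag formula to an interface profile.
# Synthetic data stand in for the exported COMSOL points; c, k, a4, a6 are fitted.

def asphere_sag(r, c, k, a4, a6):
    # z(r) = c r^2 / (1 + sqrt(1 - (1+k) c^2 r^2)) + a4 r^4 + a6 r^6
    return c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2)) + a4 * r**4 + a6 * r**6

np.random.seed(0)
r = np.linspace(0.0, 1.5, 60)                       # radial coordinate / mm
true = asphere_sag(r, c=1.0 / 12.0, k=-0.8, a4=2e-4, a6=-5e-6)
noisy = true + np.random.normal(scale=2e-4, size=r.size)

popt, _ = curve_fit(asphere_sag, r, noisy, p0=[1.0 / 15.0, -1.0, 0.0, 0.0],
                    bounds=([0.01, -2.0, -1e-2, -1e-3], [0.2, 0.5, 1e-2, 1e-3]))
c_fit, k_fit, a4_fit, a6_fit = popt
print(f"radius of curvature : {1.0 / c_fit:.3f} mm")
print(f"conic constant      : {k_fit:.3f}")
print(f"a4, a6              : {a4_fit:.2e}, {a6_fit:.2e}")
```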

    Apr. 25, 2024
  • Vol. 44 Issue 8 0823002 (2024)
  • Gongli Xiao, Miao Li, Hongyan Yang, Bowen Wang, Jiarong Zhang, Kang Chen, and Xingpeng Liu

    ObjectiveColor filters are often made of thin films or dye coatings, but, limited by optical diffraction, they have low resolution and are sensitive to high temperatures and prolonged ultraviolet (UV) exposure. In contrast, microstructure-based color filters are more adaptable to different application scenarios since they can obtain the required wavelengths by modifying structural or material parameters. Microstructure-based color filters have the advantages of stable performance, tunability, and manufacturability over conventional color filters. Additionally, they are usually made of inorganic or high-temperature-resistant materials, giving them a longer service life and extensive applications in many fields, such as complementary metal oxide semiconductor (CMOS) image sensors, liquid crystal screens, and pixel development.MethodsA top double-layer cross circle and a bottom buffer layer comprise the color filter presented in our study. The metal Al and the hydrogen silsesquioxane (HSQ) polymer make up the double-layer cross circle. The bottom is made up of three dielectric layers: a buffer layer of TiO2, a waveguide layer of Al2O3, and a substrate layer of SiO2. The finite difference time domain (FDTD) method is adopted to conduct a comparative investigation into the transmission spectra and color display patterns of four different structural filters. The effects of the structural period, cross ring diameter, cross width, and polarization angle on the transmission spectra and filtering characteristics are also examined.Results and DiscussionsCompared to the single-layer construction, the designed color filter using a double-layer cross circle structure has a greater transmittance and a reduced full width at half maximum (FWHM). The filter can achieve a high transmittance of up to 90.5% at vertical incidence [Fig. 5(b)] and the minimum FWHM when the structural period is L1=W1=360 nm [Fig. 1(a)]. Meanwhile, the resonance wavelength of the transmission peak remains essentially constant over the polarization range of 0-90° (Fig. 5), and the transmittance remains above 50% when the angle of incidence is varied from 0 to 30° (Fig. 6).ConclusionsWe propose a polarization-insensitive and highly selective color filter. FDTD numerical simulation is adopted to investigate and compare the cross ring double-layer metasurface structure with the single-layer plasmonic structure. The results show that combining the double-layer metasurface structure with the dielectric can result in a higher transmittance and narrower FWHM, thus leading to more effective color selection in the visible wavelength range and robust interference resistance. When the angle of the polarized light is changed from 0 to 90°, the resonance peak position of the simulated transmission spectrum varies only slightly, and the accompanying chromaticity coordinate point moves within a small range. This means that the output properties of the filter are consistent across polarization angles, allowing the filter to be employed in structural color applications.
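The peak transmittance and FWHM figures quoted above are the kind of quantities read off the simulated transmission spectrum; a minimal sketch of that post-processing is given below, with a synthetic Lorentzian line standing in for the FDTD output and all numbers purely illustrative.

```python
import numpy as np

# Minimal sketch: locate the transmission peak and its full width at half
# maximum (FWHM). A synthetic Lorentzian stands in for the FDTD spectrum.
wl = np.linspace(400.0, 700.0, 3001)                      # wavelength / nm
peak_wl, gamma, t_max = 550.0, 12.0, 0.905                # illustrative values
T = t_max * (gamma / 2) ** 2 / ((wl - peak_wl) ** 2 + (gamma / 2) ** 2)

i_peak = np.argmax(T)
half = T[i_peak] / 2.0
above = np.where(T >= half)[0]                            # contiguous for a single peak
fwhm = wl[above[-1]] - wl[above[0]]

print(f"peak transmittance : {T[i_peak]*100:.1f}% at {wl[i_peak]:.1f} nm")
print(f"FWHM               : {fwhm:.1f} nm")
```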

    Apr. 25, 2024
  • Vol. 44 Issue 8 0824001 (2024)
  • Yuan Su, Ailing Tian, Hongjun Wang, Bingcai Liu, Xueliang Zhu, Siqi Wang, Kexin Ren, and Yuwen Zhang

    ObjectiveAspherical optical elements are widely employed in optical systems due to their large degree of design freedom, and the surface shape accuracy of the elements directly affects the performance of the optical system. However, their normal aberration makes aspherical surfaces difficult to test. Annular subaperture stitching interferometry is a non-null interferometric method for detecting the surface shape of aspherical surfaces; it does not need to completely compensate for the normal aberration of aspherical surfaces, but it relies on high-precision mechanical motion mechanisms and complex positional error algorithms. Therefore, we propose a method for synchronous annular subaperture interferometry (SASI) to synchronously obtain the interference patterns of two subapertures. Meanwhile, SASI does not need a complex motion mechanism and can increase the dynamic range of direct aspherical surface detection by the interferometer to some extent. Furthermore, it can effectively improve the detection speed and reduce the influence of motion error on measurement accuracy.MethodsWe carry out this research through theoretical analysis combined with simulations and experiments. Firstly, according to the Nyquist sampling theorem, the theory of the SASI method is analyzed to determine the focal distance principle, and a unified reference model is built by coordinate transformation and Zemax-assisted modeling to realize the surface shape reconstruction. Secondly, the SASI measurement is simulated and verified: Zemax is adopted to assist in building the measurement system model, and the interference images obtained by the SASI method and by direct interferometer detection are simulated respectively. Additionally, the fringe density of the two interference images is compared, and the aspherical surface shape is reconstructed in the simulated measurement experiments to verify the correctness of the SASI method. Finally, we measure the actual aspherical surface and obtain the interference pattern, and the aspherical surface is also placed at the best position and measured directly with the interferometer. Furthermore, the result measured by the SASI method is compared with that of the LuphoScan method, which further verifies the correctness and validity of SASI.Results and DiscussionsOur SASI method can accomplish the detection of aspherical surfaces without a complex motion mechanism, and it can also increase the dynamic range of the interferometer for direct detection of aspherical surfaces to a certain extent. Firstly, the SASI theory is analyzed, and a unified model is proposed for reconstructing the surface shape. Secondly, simulation experiments are carried out to detect the surface shape of an asphere with a vertex radius of curvature of 250 mm and an aperture of 80 mm. The simulation results show that the density of the interferometric fringe patterns obtained by SASI is reduced compared with that obtained by the interferometer (Fig. 4). Meanwhile, by adopting the proposed unified reference model, the residual between the reconstructed surface shape and the original surface shape has a PV of 0.0282λ and an RMS of 0.0045λ (Fig. 6), which initially verifies the validity of the proposed method. Then, an aspherical surface with a vertex radius of curvature of 317 mm and an aperture of 90 mm is measured experimentally, and the fringe density of the SASI method is still reduced compared with that of the interferometer directly detecting the same asphere (Fig. 8).
Additionally, as shown in Fig. 9 and Table 3, comparison of the reconstructed surface shape with the LuphoScan result gives an absolute surface error with a PV of 0.0362λ and an RMS of 0.0091λ, and the residual deviation of the surface shape is 0.0926λ (PV) and 0.0098λ (RMS), which further verifies the correctness of the proposed SASI method.ConclusionsThe proposed SASI method can effectively realize the surface shape detection of aspherical surfaces. On the one hand, the method does not need to move the interferometer or the element to be measured; it utilizes a bifocal lens to form two measurement wavefronts that match different subapertures of the aspherical surface, thereby realizing synchronous annular subaperture interferometry of the aspherical surface. This simplifies the measurement device, shortens the measurement time, and reduces the effect of motion error on measurement accuracy. On the other hand, this method increases the dynamic range of the interferometer for direct detection of aspherical surfaces to a certain extent. In the simulation and measurement experiments on aspherical surface examples, the density of the interferometric fringe pattern obtained with the SASI method is significantly reduced. Additionally, the results of the surface reconstruction are consistent with the actual surface results, which further verifies the correctness and validity of the proposed SASI method.
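The Nyquist-sampling argument mentioned in the Methods can be made concrete with a small sketch: the local fringe frequency of the residual test wavefront, roughly |dW/dr|/λ, must stay below half the detector sampling frequency for the interferogram to be resolved. The conic constant, pixel count, and test wavelength below are assumptions for illustration; only the 250 mm vertex radius and 80 mm aperture echo the simulated asphere above.

```python
import numpy as np

# Minimal sketch of the Nyquist check behind (annular subaperture) aspheric
# interferometry. Illustrative values, not the parameters of the measured parts.

lam = 632.8e-9            # test wavelength / m (assumed)
n_pix = 1000              # pixels across the test aperture (assumed)
aperture = 80e-3          # aperture diameter / m
pitch = aperture / n_pix  # sample spacing on the part / m

R, kappa = 250e-3, -2.0   # vertex radius of curvature / m, conic constant (assumed)
r = np.linspace(0.0, aperture / 2.0, 4001)

def sag(r, R, kappa):
    c = 1.0 / R
    return c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + kappa) * c**2 * r**2))

# Departure of the asphere from the vertex sphere, doubled for the reflected
# test wavefront, and its local fringe frequency.
W = 2.0 * (sag(r, R, kappa) - sag(r, R, 0.0))
fringe_freq = np.abs(np.gradient(W, r)) / lam      # fringes per metre on the part
nyquist = 1.0 / (2.0 * pitch)

r_max = r[fringe_freq <= nyquist].max()
print(f"Nyquist limit       : {nyquist:.0f} fringes/m")
print(f"directly resolvable : r <= {r_max*1e3:.1f} mm of the {aperture*1e3/2:.0f} mm semi-aperture")
```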

    Apr. 25, 2024
  • Vol. 44 Issue 8 0826001 (2024)
  • Wenni Ye, Juntao Hu, Zhihao Ying, Yishu Wang, and Yixian Qian

    ObjectiveVortex beam (VB) has attracted great attention due to its unique optical properties, including a helical wavefront, phase singularity, and ability to carry orbital angular momentum (OAM) of lℏ per photon, where l is the topological charge and ℏ is the reduced Planck constant. VBs have been widely used in super-resolution imaging, laser microfabrication, optical manipulation, and ultra-large capacity optical communication. In particular, OAM can twist molten material and can be used for chiral nanostructure microfabrication. In recent years, chiral structured light fields with twisted intensity distribution and OAM have attracted great research interest due to their advantages in optical microfabrication. Alonzo et al. constructed a spiral cone phase using the product of the spiral phase and the cone phase and generated a spiral cone light field with a chiral intensity distribution. Subsequently, Li et al. proposed a spiral light field with an autofocusing effect based on a power-exponent phase. However, the structures of these chiral optical fields are simple, and the generation of flexible and tunable chiral structured light fields remains important yet challenging. In this paper, we propose a simple and efficient approach for generating tunable chiral structured beams (TCSB), which exhibit flexible adjustability and multi-ring chiral structures. Such light fields would be beneficial for flexible chiral-structure micromachining, optical manipulation, and optical communications.MethodsWe propose and generate a TCSB by constructing an annular phase (AP) that consists of multiple annular sub-phases (ASPs). Specifically, every sub-phase is constructed by introducing an equiphase and a radial phase based on a classical spiral phase, and then a monocyclic TCSB is generated by imposing such an ASP on an incident Gaussian beam. The number and direction of the twisted intensity lobes are flexibly and individually controlled by manipulating the topological charge, equiphase, and radial phase. Moreover, we use multiple ASPs to generate multi-ring chiral optical fields, which can be more flexible in practical applications. Experimentally, chiral light fields are generated by phase modulation and observed via a CCD, as shown in Fig. 5.Results and DiscussionsThe structures of the tunable chiral beams can be flexibly manipulated by controlling the topological charge (Fig. 6). The number and direction of the twisted intensity lobes are determined by the magnitude and sign of the topological charge. By controlling the equiphase, the twisted lobe direction can be arbitrarily adjusted (Fig. 9). More complex chiral structured beams with three-ring and four-ring structures are constructed, which validates the effectiveness of our proposed approach. Additionally, the equiphase gradient is employed to dynamically control the rotation of the light fields (Video 1). This rotation also makes the chiral beam beneficial for twisting transiently molten matter, machining complex chiral nanostructures, and sorting multiple particles.ConclusionsIn summary, we have developed an effective method to generate TCSBs by means of multiple ASPs. The properties of the twisting lobes, including the twisting directions, lobe orientations, and lobe number, can be freely manipulated by controlling the topological charge sign, magnitude, and equiphase, respectively.
Our findings offer a novel promising technology to manufacture chiral microstructures. Moreover, the flexible TCSBs also provide an innovative method for optical manipulation and optical communications.
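A minimal sketch of the phase construction described above is given below: each annular sub-phase combines a spiral term, a constant equiphase offset, and a radial phase, restricted to its own ring, and the summed phase is imposed on a Gaussian beam. The ring radii, charges, and phase coefficients are illustrative assumptions; the exact construction in the paper may differ in detail.

```python
import numpy as np

# Minimal sketch of an annular phase built from annular sub-phases (ASPs):
# spiral term (l * theta) + constant equiphase offset + radial phase, each
# restricted to its own ring. Parameter values are illustrative only.

N = 512
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
TH = np.arctan2(Y, X)

def annular_subphase(l, phi0, a_rad, r_in, r_out):
    """Spiral + equiphase + radial phase, nonzero only on [r_in, r_out)."""
    ring = (R >= r_in) & (R < r_out)
    return np.where(ring, l * TH + phi0 + a_rad * R, 0.0)

# Two-ring example: opposite topological charges and different equiphase offsets.
phase = (annular_subphase(l=3,  phi0=0.0,        a_rad=40.0, r_in=0.15, r_out=0.45)
         + annular_subphase(l=-3, phi0=np.pi / 4, a_rad=40.0, r_in=0.45, r_out=0.80))

gauss = np.exp(-(R / 0.6) ** 2)                    # incident Gaussian beam
field = gauss * np.exp(1j * phase)

# Rough far-field (focal-plane) intensity via an FFT.
far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
intensity = np.abs(far) ** 2
print("far-field intensity grid:", intensity.shape, "max =", float(intensity.max()))
```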

    Apr. 25, 2024
  • Vol. 44 Issue 8 0826002 (2024)
  • Zhiyong Wang, Zhiguo Jia, Guangcun Shao, Anran Li, Kaiqiang Zhang, Yukun Ji, and Mingyu Zhong

    ObjectivePhotonic tunneling can be regarded as an optical analog of the quantum-mechanical barrier penetration of material particles. As the photon field has no charge and is not subject to the Pauli exclusion principle, some physical problems (such as tunneling time) become easier to study through photonic quantum tunneling, arousing great interest in the study of the quantum tunneling effect of photons. However, up to now, the quantum resonance tunneling phenomena of photons through a double-barrier have not been studied thoroughly. Photons in a state of quantum tunneling correspond to evanescent waves (i.e., surface plasmon polaritons) that are the core concept of nanooptics. Thus, research on photonic resonance tunneling can reveal new physical laws in nanooptics and has potential application value in optical devices (such as optical sensors and optical transistors). Therefore, it is necessary to develop a systematic theory of photonic resonance tunneling through a double-barrier. The application of the resonance tunneling effect of photons in the design of pulse and phase laser ranging systems is also an important subject worth studying.MethodsA photonic double-barrier structure is formed by a rectangular waveguide with dielectric discontinuities (Fig. 1). Since the electromagnetic waves propagating along the waveguide satisfy the Helmholtz equation and can be expanded as a superposition of transverse electric (TE) and transverse magnetic (TM) waveguide modes, the TE10 mode is taken as an example. In this case, the electric field component and its first derivative with respect to z are continuous at the boundaries between the two different media inside the waveguide; from these continuity conditions, together with the concept of the Poynting vector, the quantum tunneling probability formula of photons through the double-barrier can be obtained. By employing the analytic method and numerical simulation, we obtain the physical conditions required for the resonance penetration effect of propagating-wave and evanescent-wave photons, respectively. In addition, we clarify the dependence of the tunneling probability on the geometric size of the double-barrier, the refractive index of the filling medium, and the photon frequency. The parameters in the tunneling probability expression of photons through the double-barrier are related to each other; as a result, it is easy to make mistakes in the parameter settings during numerical analysis, which can be overcome by resorting to the original definitions of these parameters. To explore the potential application of the quantum resonance tunneling effect of photons in optical devices, we provide two new designs for the receivers of pulse and phase laser ranging systems (Figs. 6 and 7). To be specific, the double-barrier structure shown in Fig. 1 is placed in the receiving device of the laser ranging system. Its geometric sizes and the refractive index of the filling media are designed so that the resonant tunneling frequency is equal to the center frequency of the output signal of the laser ranging system.Results and DiscussionsThe quantum tunneling probability of evanescent-wave photons through the double-barrier is given by Eq. (7), and in this case, the double-barrier corresponds to two cut-off waveguides. The quantum tunneling probability of propagating-wave photons through the double-barrier is given by Eq. (9), and in this case, the double-barrier is formed by two normal-sized waveguides. Both Eq. (7) and Eq.
(9) show that there are resonant penetration effects, namely that, the tunneling probability can be equal to one and photons can pass through the double-barrier completely. The resonant tunneling conditions of evanescent-wave photons are presented in Eq. (10), while the resonant penetration conditions of propagating-wave photons are provided in Eq. (11) or Eq. (12). The numerical simulation results are given in Figs. 2-5, where the tunneling probability curves containing resonance peaks show that their full widths at half maximum decrease sharply with the variation of parameters (such as the barrier width, the refractive index of the filling medium, and the photon frequency). In particular, when the double-barrier is formed by two cut-off waveguides, a tiny change in frequency or the structure parameters of the double-barrier can make a huge impact on the tunneling probability of photons. As for the laser ranging systems shown by Figs. 6 and 7, the resonant frequency is equal to the center frequency of the output signal. Since the frequency of the echo signal reflected by a static target is basically unchanged, the echo signal can smoothly pass through the double-barrier and enter the next module to complete the timing or phase measurement. Other light waves from the environment, with frequencies usually different from the working frequency of the laser ranging system, will be filtered out by the double-barrier structure. Thus, the received echo signal can be guaranteed to be true. On the other hand, the laser pulse has a non-zero spectral width (there is a frequency distribution around its central frequency). The closer the frequency of a component in the pulse is to the center frequency, the more likely it is to pass through the double-barrier. Therefore, when the echo signal passes through the double-barrier structure, its spectrum becomes narrowed, and its monochromaticity is enhanced. When the object to be measured is a moving object, the influence of the Doppler effect on this design is typically negligible.ConclusionsA photonic double-barrier can be constructed via an electromagnetic waveguide with dielectric discontinuities. For a given frequency, by choosing appropriate parameters, the tunneling probability of photons through the double-barrier structure can be equal to one (resonant penetration effect). When the resonance phenomenon occurs, a small change in the frequency or structural parameters of the double-barrier can significantly influence the tunneling probability of photons through the double-barrier. These physical properties may provide some new-type design principles for some optical devices, such as band-pass filters, optical sensors, and optical transistors. Especially, it can present a new design for the receiving device of a laser ranging system, which is conducive to ruling out spurious returning signals and enhancing the monochromaticity of the true returning signals.
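The resonance behaviour discussed above can be reproduced qualitatively with a scalar transfer-matrix sketch for the TE10 mode: two air-filled, below-cutoff waveguide sections act as the barriers and a dielectric-filled propagating section as the well, and a transmission peak approaching unity appears at the resonant frequency. The widths, fillings, lengths, and frequency range below are illustrative assumptions and are not the parameter set behind Eqs. (7)-(12).

```python
import numpy as np

# Scalar transfer-matrix sketch of photon tunneling through a double barrier in
# a rectangular waveguide (TE10 mode). Below cutoff the longitudinal wavenumber
# becomes imaginary (evanescent barrier). All dimensions are illustrative.

c = 299792458.0

def kz(freq, width, n):
    """Longitudinal wavenumber of the TE10 mode (complex below cutoff)."""
    k0 = 2.0 * np.pi * freq / c
    return np.sqrt((k0 * n) ** 2 - (np.pi / width) ** 2 + 0j)

def transmission(freq, widths, indices, lengths):
    """|t|^2 through a stack of waveguide sections (field and dE/dz matched)."""
    ks = [kz(freq, w, n) for w, n in zip(widths, indices)]
    A, B = 1.0 + 0j, 0.0 + 0j            # output side: transmitted wave only
    for j in range(len(ks) - 1, 0, -1):  # walk the interfaces right to left
        k_r, k_l = ks[j], ks[j - 1]
        Ar, Br = A, B
        A = 0.5 * ((1 + k_r / k_l) * Ar + (1 - k_r / k_l) * Br)
        B = 0.5 * ((1 - k_r / k_l) * Ar + (1 + k_r / k_l) * Br)
        L = lengths[j - 1]               # propagate back across section j-1
        A *= np.exp(-1j * k_l * L)
        B *= np.exp(+1j * k_l * L)
    return abs(1.0 / A) ** 2             # same guide on both ends

# dielectric guide | air barrier | dielectric well | air barrier | dielectric guide
widths  = [30e-3] * 5                    # constant cross-section / m
indices = [1.5, 1.0, 1.5, 1.0, 1.5]      # dielectric discontinuities
lengths = [0.0, 20e-3, 40e-3, 20e-3, 0.0]

freqs = np.linspace(3.6e9, 4.9e9, 2001)  # barriers are below cutoff (< 5 GHz)
T = np.array([transmission(f, widths, indices, lengths) for f in freqs])
print(f"peak transmission {T.max():.3f} at {freqs[np.argmax(T)]/1e9:.3f} GHz")
```

At the resonant frequency the printed transmission approaches one, while a small detuning makes it drop sharply, which is the filtering behaviour exploited in the laser-ranging receiver designs described above.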

    Apr. 25, 2024
  • Vol. 44 Issue 8 0827001 (2024)
  • Hui Liu, Yuqing He, Xiuqing Hu, and Chunli Sun

    ObjectiveThe conventional recognition of nighttime fires typically relies on infrared brightness temperature data, which often presents issues such as limited accuracy and challenges in identifying small fires. On the other hand, low-light detectors excel at capturing bright targets in settings of low illumination or during night conditions, making their observational data a valuable supplement to nighttime fire recognition. Consequently, the integration of low-light-assisted infrared technology in nighttime fire recognition holds considerable research significance. In this context, we introduce a novel fire recognition algorithm named FRJLI (nighttime tiny fires recognition by joint low-light-assisted infrared remote sensing data). This algorithm aims to integrate low-light data that eliminates interference from urban lights into fire recognition processes and establish thresholds for both low-light and infrared data to enhance the detection accuracy of small nighttime fires.MethodsGiven the heightened intensity and destructiveness of forest and grassland fires compared to fires in other vegetation types, our investigation delves into the atypical behavior exhibited by the visible infrared imaging radiometer suite (VIIRS) data within the medium-resolution infrared channel (M-band) and the low-light channel (DNB) of the U.S. Next Generation Meteorological and Environmental Satellite (NPP) when facing forest and grassland fires. Our methodology involves fusing VIIRS DNB data to extract monthly city light background information, projecting both M-band and DNB data simultaneously to the study area, preprocessing the projected remote sensing data to derive standardized data, and executing multiband threshold discrimination, absolute fire recognition, and contextual discrimination on the processed data to culminate in a comprehensive joint low-light and infrared nighttime fire recognition process.Results and DiscussionsBy implementing the FRJLI algorithm on forest fire in the Republic of Korea and grassland fire in Mongolia, we daily map out the distribution areas of these fires (Fig. 5). Our evaluation process focuses on two key aspects: first, a false color image that integrates low-light radiation values with mid-infrared brightness temperatures; second, the utilization of vegetation indices for a more accurate depiction of the affected fire zones. Ensuring the accuracy of our recognition outcomes, we visually compare the recognition results obtained through the FRJLI algorithm with those yielded by the NASA official algorithm, the MODIS Collection4 algorithm, and the FILDA algorithm (Fig. 9). The FRJLI algorithm demonstrates remarkable consistency with the identification outcomes and false color imagery, enabling the detection of minor fires at the fire line periphery. In a detailed analysis, the identification results from all four algorithms are scrutinized in terms of quantity and area coverage (Fig. 10). The findings affirm that the FRJLI algorithm not only identifies a greater number of fires but also offers superior quality compared to other methods, thus providing crucial technical support for more efficient and precise fire detection processes. Furthermore, an innovative examination of the correlation and sensitivity discrepancies between low-light and infrared data in the daily identification of fires is provided (Fig. 
12). This analysis confirms the general patterns observed in fires, validates the trend accuracy of the FRJLI algorithm's identification outcomes, and highlights its ability to identify colder and smaller fires in contrast to NASA's findings. Significantly, this study concludes that low-light data is more responsive to the fire's burning status, while infrared data is more adept at revealing fire trends, showcasing the FRJLI algorithm's capability to leverage the complementary strengths of low-light and infrared fire detection techniques. Finally, through the insights gleaned, we speculate on and verify the varying states of fire identification achieved by the FRJLI algorithm (Figs. 14 and 15). These figures vividly portray the algorithm's advantages in accurately identifying fire quantities, pinpointing fire centers and boundaries, as well as capturing critical trends in fire-related data.ConclusionsTaking into account the peculiar behavior exhibited in mid-infrared brightness temperature, the discrepancy between mid-infrared and long-wave infrared brightness temperatures, and variations in low-light radiation values during fires, we leverage the available data to introduce a novel algorithm for nocturnal tiny fire recognition through joint low-light-assisted infrared technology. Our methodology involves merging monthly city light data with low-light information to mitigate city light interference in low-light fire detection. By leveraging both low-light and infrared data concurrently for fire recognition, we aim to enhance the detection accuracy of small fires, including those concealed in shaded areas. Experimental validation is performed on a forest fire occurring in March 2022 in the Republic of Korea and a grassland fire in April 2022 in Mongolia, successfully enabling the identification of colder and smaller fires. The proposed algorithm significantly advances the capability to detect these colder and smaller fires, thereby enhancing the quantity and quality of nighttime fire recognition. Furthermore, it offers more precise and timely insights into fire location, fire center coordinates, fire line positions, and trend analysis, making it particularly valuable for forest and grassland resource protection applications. This innovative approach holds immense potential and practical value in bolstering fire management strategies for forest and grassland ecosystems.
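The joint threshold test at the core of this kind of low-light-assisted infrared scheme can be sketched as below. The band roles follow VIIRS conventions (M13 mid-infrared and M15 long-wave infrared brightness temperatures, DNB low-light radiance), but every threshold value is an illustrative placeholder rather than the FRJLI setting.

```python
import numpy as np

# Minimal sketch of a joint low-light / infrared multiband threshold test.
# All thresholds are illustrative placeholders, not the FRJLI values.

def fire_candidates(bt_m13, bt_m15, dnb, dnb_citylight_bg,
                    bt13_thr=305.0, dbt_thr=10.0, dnb_thr=5e-9):
    """Return a boolean mask of nighttime fire candidate pixels."""
    dnb_anom = dnb - dnb_citylight_bg          # remove the monthly city-light background
    return ((bt_m13 > bt13_thr)                # anomalously warm in the mid-infrared
            & ((bt_m13 - bt_m15) > dbt_thr)    # mid-IR vs long-IR brightness-temperature split
            & (dnb_anom > dnb_thr))            # bright in low-light after background removal

# Tiny synthetic scene: one warm, bright pixel embedded in a cold background.
bt_m13 = np.full((4, 4), 285.0); bt_m13[2, 1] = 330.0
bt_m15 = np.full((4, 4), 283.0); bt_m15[2, 1] = 300.0
dnb    = np.full((4, 4), 1e-9);  dnb[2, 1] = 4e-8
bg     = np.full((4, 4), 5e-10)

print(np.argwhere(fire_candidates(bt_m13, bt_m15, dnb, bg)))   # -> [[2 1]]
```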

    Apr. 25, 2024
  • Vol. 44 Issue 8 0828001 (2024)
  • Wei Yuan, Yarui Xi, Chuandong Tan, Chuanjiang Liu, Guorong Zhu, and Fenglin Liu

    ObjectiveComputed tomography (CT) is an imaging technique that employs X-ray transmission and multi-angle projection to reconstruct the internal structure of an object. Meanwhile, it is commonly adopted in medical diagnosis and industrial non-destructive testing due to its non-invasive and intuitive characteristics. Parallel translational computed tomography (PTCT) acquires projection data by moving a flat panel detector (FPD) and a radiation source in parallel linear motion relative to the detection object. This method has promising applications in industrial inspection. Due to the limitations of the inspection environment and the structure of the inspection system, there are scenarios where it is difficult to realize multi-segment PTCT scanning and imaging, and only single-segment PTCT scanning and imaging can be performed. Since the single-segment PTCT can only obtain the equivalent projection data at a limited angle, its reconstruction problem belongs to limited-angle CT reconstruction. Images reconstructed by traditional algorithms will suffer from serious artifacts. Deep learning-based limited-angle CT image reconstruction has yielded remarkable results, among which model-based data-driven methods have caught much attention. However, such deep networks with CNNs as the main structure tend to focus on the local neighborhood information of the image and ignore the non-local features. Additionally, research on iterative algorithms shows that non-local features can improve detail preservation, which is important for limited-angle CT reconstruction.MethodsTo address the limited-angle artifact in PTCT image reconstruction, we propose a deep iterative unfolding method (STICA-Net, Fig. 3) that learns local and non-local regular terms. The method unfolds a gradient descent algorithm with a fixed number of iterations to a neural network and utilizes convolutional modules with the coordinate attention (CA) mechanism and Swin-Transformer modules deployed as iterative modules in alternating cascades to form an end-to-end deep reconstruction network. The convolution module learns local regularization, in which CA is leveraged to reduce image smoothing. The Swin-Transformer module learns non-local regularization to improve the network's ability to restore image details. Among neighboring modules, iterative connection (IC) is adopted to enhance the model's ability to extract deeper features and improve the efficiency of each iteration. The employed experimental comparison methods are FBP, SIRT, SwinIR, FISTA-Net, and LEARN. The quality of the reconstructed image is comprehensively evaluated by utilizing three sets of quantitative indicators of root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Meanwhile, comparison experiments are conducted on both simulated and real datasets to verify the feasibility of the proposed method. Additionally, we perform ablation experiments to confirm the effectiveness of each component of the network.Results and DiscussionsWe present the results of a contrast experiment of 90° limited-angle rotational scanning CT using the simulation data 2DeteCT dataset. The results demonstrate the effectiveness of the STICA-Net method for limited-angle reconstruction (Fig. 7). It is noted that PTCT image reconstruction is a limited-angle problem. 
To verify STICA-Net's effectiveness in PTCT limited-angle reconstruction, we employ the same dataset to generate projection data with an equivalent scanning angle of 90° via PTCT scanning, and then compare different methods. The results of both subjective image evaluation (Fig. 8) and quantitative evaluation index (Table 2) show that STICA-Net can solve the limited-angle problem of PTCT and achieve high-quality image reconstruction. By building the PTCT experimental platform (Fig. 6), the actual dataset of carbon fiber composite core wire (ACCC) is obtained. The two example results (Fig. 11) of the ACCC dataset indicate that the reconstructed images of the traditional method still contain a significant number of artifacts in the absence of large-angle data. However, the artifacts in the reconstructed images of FISTA-Net and LEARN have been significantly reduced. Although FISTA-Net produces better reconstruction results than LEARN, the details are still somewhat blurred. Compared with the suboptimal SwinIR, the PSNR of STICA-Net increases by 4.72% and 5.53%, the SSIM rises by 2.88% and 1.59%, and the RMSE decreases by 15.94% and 19.32% respectively. Meanwhile, ablation experiments verify the effectiveness of different network structures in PTCT limited-angle reconstruction. Figure 10 demonstrates clear improvement in the numerical values of each index as network structures are added incrementally.ConclusionsTo deal with the difficulty of PTCT image reconstruction, we theoretically conclude that PTCT image reconstruction is a limited-angle problem by building a PTCT geometric model, and then propose the STICA-Net model. Ablation experiments confirm the effectiveness of each model component in improving the reconstructed image. Compared to the contrast algorithm, the proposed method significantly improves image quality and yields the best quantitative evaluation indicators across different data types. Additionally, comprehensive results demonstrate that the proposed method outperforms the contrast algorithm in terms of PTCT limited-angle artifact suppression and detail recovery, and high-quality image reconstruction can be achieved. This is beneficial for promoting the in-service detection application of PTCT. However, the method's limitation is that although the ablation experiments demonstrate that the inclusion of the Swin-Transformer structure enhances image results, more memory is needed to store weights and intermediate features, which restricts the utilization of higher-resolution images in our study. In the future, the network module will be further improved to make the network more lightweight.
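One stage of the deep-unfolded reconstruction described above can be sketched as a gradient-descent step on the data-fidelity term followed by a learned correction. The sketch below uses a plain convolutional regularizer and an abstract forward projector A with adjoint At; the actual STICA-Net alternates CA-augmented convolution modules with Swin-Transformer modules and adds iterative connections between neighboring modules, none of which is reproduced here.

```python
import torch
import torch.nn as nn

# Minimal sketch of one unrolled gradient-descent stage:
#   x_{k+1} = x_k - alpha * A^T (A x_k - y) + R_theta(x_k)
# with a small learned convolutional regularizer R_theta. The forward
# projector A and its adjoint At are abstract placeholders.

class UnrolledStage(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))       # learnable step size
        self.reg = nn.Sequential(                         # learned local regularizer
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, y, A, At):
        grad = At(A(x) - y)                               # data-fidelity gradient
        x = x - self.step * grad
        return x + self.reg(x)                            # residual learned update

# Toy usage with an identity "projector" standing in for the PTCT system matrix.
A = At = lambda t: t
net = nn.ModuleList([UnrolledStage() for _ in range(5)])  # 5 unrolled iterations
x = torch.zeros(1, 1, 64, 64)                             # initial image
y = torch.rand(1, 1, 64, 64)                              # measured data (toy)
for stage in net:
    x = stage(x, y, A, At)
print(x.shape)
```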

    Apr. 25, 2024
  • Vol. 44 Issue 8 0834001 (2024)