Acta Optica Sinica
Ying Zhang, Zhongfeng Xu, Xing Wang, Jieru Ren, Yanning Zhang, Cexiang Mei, Xianming Zhou, Changhui Liang, Wei Wang, and Xiaoan Zhang

Objective
129Xeq+ (q=17, 20, 23, 25, 27) highly charged ions with a kinetic energy of 1360 keV are incident on the surfaces of metal Al and Ti solid targets, respectively. The near-infrared spectral lines (800-1700 nm) of excited Xe atoms and lowly ionized Xe ions, together with the spectral lines of excited and ionized target atoms, are measured during the interaction in which the highly charged ions are neutralized by surface electrons. The experimental results show that when a highly charged ion is incident on the metal surface, the potential energy carried by the ion is deposited on the target surface almost instantly (within femtoseconds), ionizing and exciting the target atoms. Owing to the strong Coulomb potential energy, the target atoms can reach highly ionized states with complex electronic configurations, whose de-excitation produces the emission spectrum. As the charge state of the incident ion increases, the measured spectral line intensity rises, and the increasing trend is generally consistent with the growth of the potential energy of the incident ion, which indicates that the classical over-the-barrier model is valid in the near-Bohr-velocity energy region. We also hope that our experimental data can provide basic support for related research and provide new methods for spectral measurement.

Methods
The experiment is performed at the Heavy Ion Research Facility in Lanzhou (HIRFL), and the experimental platform is shown in Fig. 1. Gaseous 129Xe atoms repeatedly collide with electrons in an 18 GHz microwave field in the ECR ion source, gradually losing electrons to form highly charged 129Xeq+ ions. They are extracted at the voltage required for the experiment, and the projectile ions are selected by analyzing magnets according to the charge-to-mass ratio. The beam spot is controlled to be less than 5 mm using a beam splitter, quadrupole lens, and aperture, and the beam intensity is recorded with a Faraday cup. The beam enters a magnetically shielded metal ultra-high-vacuum chamber (vacuum maintained at 10⁻⁸ Pa). The Al and Ti samples have a chemical purity of 99.99%, and their surfaces are cleaned; the target area is 15 mm×15 mm and the thickness is 0.1 mm. The infrared optical window and the monochromator entrance slit are perpendicular to the beam direction and form a 45° angle with the target surface. The experiment employs an SP-2357 infrared spectrometer produced by ARC (Acton Research Corporation) in the United States, with a grating density of 600 g/mm and a blaze wavelength of 1.6 μm. An InGaAs detector with an effective range of 800-1700 nm is selected, and the integration time is 3000 ms. To improve the signal-to-noise ratio and measurement accuracy, we adopt a lock-in amplifier (SR830) and a chopper (SR540). Additionally, measurements are carried out in a darkroom or under a dark shield to eliminate or reduce the background of the spectral measurement.

Results and Discussions
The near-infrared spectral lines (800-1700 nm) emitted from the interaction between high-charge-state 129Xeq+ (q=17-27) and metal solid targets are measured (Table 1). These spectral lines can be used in research on the damage caused by space charged particles to aerospace devices, in high-precision optical clocks, and in laboratory-astrophysics studies of the infrared background radiation of the universe.
Near-infrared emission from Xe ions is also an important basis for operating Hall thrusters, with which space stations perform attitude calibration and other maneuvers in space. Under the action of highly charged ions, target atoms can be ionized and excited into transitions between complex configurations, including electric-dipole-forbidden transitions (magnetic dipole and electric quadrupole transitions). Additionally, we measure spectral lines at 842.42 nm and 1525.03 nm from helium-like Al ions (Al XII, i.e., Al11+) and at 1251.08 nm from lithium-like Al ions (Al XI, i.e., Al10+), which belong to electric dipole transition radiation. The 989.01 nm spectral line from the de-excitation of Ti XVIII (i.e., Ti17+) belongs to magnetic dipole transition radiation. To our knowledge, these spectral lines were only predicted theoretically in 1987 and 2013, and no experimental data have been reported so far. The classical over-the-barrier model for the interaction between highly charged ions and metal solid targets in the Bohr-velocity energy region is thereby validated. The measured single-particle fluorescence yield increases with the potential energy of the incident ion, and this trend is roughly consistent with the increase in the potential energy of the incident ion with its charge state (Fig. 3). The classical over-the-barrier model suggests that the charge state of the incident ion plays an important role in the ionization and excitation of the target atoms and in the neutralization process in which the incident ion captures target electrons.

Conclusions
Highly charged ions incident on a metal solid target surface deposit the potential energy they carry within a nanometer-scale region of the target surface on the femtosecond time scale, which ionizes and excites the target atoms and results in the emission of spectral lines. Some of the spectral lines are transitions between complex electronic configurations, including strong electric-dipole-forbidden transitions. No experimental data had been reported for the 842.42 nm and 1525.03 nm spectral lines emitted by helium-like Al ions, the 1251.08 nm spectral line emitted by lithium-like Al ions, or the 989.01 nm spectral line from the de-excitation of Ti XVIII (i.e., Ti17+) since the theoretical calculation results were published in 1987. The relative intensity of the spectral lines (single-ion fluorescence yield) we measure increases with the charge state of the incident ion, and the increasing trend is generally consistent with the trend of the potential energy of the incident ion increasing with charge state. This indicates that the classical over-the-barrier model holds in the energy region where the kinetic energy of the incident ion is near the Bohr velocity. The research methods for elastic and inelastic scattering caused by collisions between ions and gas target atoms are different. Since highly charged ions incident on solid surfaces give rise to many novel phenomena, such as the controversy over Auger and ICD processes, the energy shift caused by multiple ionization of target atoms, and the dissipation channels of the total energy and the energy gained by incident ions, a large amount of work remains to be done. Meanwhile, we sincerely hope that our study can provide basic data and support for related research and offer new methods for spectral measurement.
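For orientation on the over-the-barrier picture invoked above, the short Python sketch below evaluates the commonly quoted classical over-the-barrier estimate of the critical electron-capture distance, R_c ≈ sqrt(2q)/W in atomic units. The Al work function value (~4.1 eV) and the formula itself are textbook assumptions used for illustration, not quantities taken from this work.

```python
import numpy as np

# Classical over-the-barrier critical capture distance above a metal surface
# (standard approximation, not the paper's own calculation).
HARTREE_EV = 27.211      # 1 a.u. of energy in eV
BOHR_NM = 0.0529177      # 1 a.u. of length in nm

W = 4.1 / HARTREE_EV     # assumed Al work function, converted to atomic units
for q in (17, 20, 23, 25, 27):
    r_c_au = np.sqrt(2 * q) / W          # R_c ~ sqrt(2q)/W in atomic units
    print(f"Xe{q}+: R_c ~ {r_c_au:5.1f} a.u. = {r_c_au * BOHR_NM:.2f} nm")
```

The nanometer-scale distances this yields are consistent with the picture of the ion's potential energy being released within a very small region at and above the surface.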

Apr. 10, 2024
  • Vol. 44 Issue 7 0702001 (2024)
  • Wenzhong He, Jiaxuan Liu, Xiongwei Yang, Yi Wei, Kaihui Wang, Wen Zhou, and Jianjun Yu

Objective
With the continuous advancement of wireless communication and information technology, mobile data transmission volume has nearly doubled each year. Simultaneously, the proliferation of access devices and the widespread adoption of emerging technologies such as the Internet of Things (IoT), high-definition live streaming, virtual reality (VR), and augmented reality (AR) have intensified the pressing demand for high-speed communication. Nevertheless, meeting the substantial data transmission requirements remains a formidable challenge given the current communication frequencies and bandwidth limitations. The currently utilized sub-6 GHz frequency band has become relatively congested, while the frequency range spanning from 6 GHz to 300 GHz in the millimeter wave spectrum remains largely untapped, offering an exceptionally abundant spectrum resource. Furthermore, in comparison with the lower microwave frequency bands currently in commercial use, the absolute bandwidth available at millimeter wave frequencies is significantly larger. In recent years, transmission systems combining radar sensing with communication have garnered increasing attention. To mitigate the strain on limited spectrum resources and reduce power consumption, radar and wireless communication emerge as paramount and pivotal applications within the domain of radio frequency (RF) technology. However, as technology continues to evolve, radar and communication are converging towards integrated design, whereas they were initially developed and designed independently, each catering to distinct functions and frequency bands.

Methods
In this study, we experimentally demonstrated a photonics-aided integrated radar and communication system. On the transmission side, the integrated signal was generated by encoding a quadrature phase shift keying (QPSK) signal onto a linear frequency-modulated (LFM) signal in the baseband, with the primary objective of eliminating the need for digital-to-analog conversion (DAC) in the intermediate frequency (IF) band. Subsequently, the joint radar communication (JRC) signal was modulated onto an optical carrier and mixed with the output of another external cavity laser (ECL) to generate the millimeter wave LFM-QPSK signal. The adoption of QPSK encoding ensured a constant envelope for the JRC signal, a crucial aspect of long-distance radar sensing. On the receiving side, a W-band horn antenna (HA) captured a portion of the JRC signal for communication. This signal was then down-converted to an IF band by using a W-band mixer. Following de-chirping and a series of digital signal processing (DSP) steps, the QPSK signal was recovered. For radar sensing purposes, the echo signal was initially down-converted to the baseband and subsequently processed through a matched filter. Because the resulting millimeter wave JRC signal preserves the cross-correlation characteristics of the original LFM signal, precise radar synchronization was obtained through pulse compression. Consequently, this system could achieve both high-resolution radar sensing and high-speed communication functions.

Results and Discussions
We introduce a W-band integrated communication and sensing system, and its schematic diagram and algorithmic process are depicted in Fig. 2. This system successfully achieves robust communication and sensing capabilities through offline processing at the radar and communication receivers. As shown in Fig.
4, employing the de-chirping operation at the communication receiver allows us to successfully extract high-quality communication sequence signals from the integrated waveform. Subsequent offline DSP algorithms enable us to achieve communication with an error rate significantly lower than the hard-decision threshold. As shown in Figs. 5(a), 5(b), and 5(c), we conduct experiments in different scenarios at distances of 2, 10, and 50 m, respectively. When the power input into the photodiode (PD) exceeds -1 dBm, each component of the integrated signal achieves high-quality communication below the hard-decision threshold. Additionally, when an extra frequency offset error component is introduced, the integrated signal maintains high communication quality, as demonstrated in Fig. 5(d), proving the system's robustness. On the radar sensing side, we employ pulse compression techniques to detect single and dual targets with a radar accuracy of approximately 2.0 cm. Figure 7 displays the pulse compression output results for a single target at different distances, while Fig. 8 shows the results for dual targets at varying distances. In short, clear target detection is achieved at the radar end. These experimental results underscore the effectiveness of the proposed W-band integrated communication and sensing system.

Conclusions
In this study, we have proposed and demonstrated a photonics-aided system for joint communication and radar sensing. The baseband signal is generated by encoding a QPSK signal onto an LFM signal. The autocorrelation properties of the LFM signal enable signal demodulation by de-chirping, while pulse compression is utilized for radar detection. Experimental results indicate that, through signal-sharing techniques, we can achieve a distance resolution of 2.0 cm and high-quality transmission at speeds of up to 20 Gbit/s in the 91 GHz frequency band, with transmission distances of up to 50 m. Furthermore, this system allows for flexible signal type adjustments as needed, making it a promising candidate for future millimeter-wave communication applications.
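To make the de-chirping and pulse-compression steps concrete, the following NumPy sketch builds a QPSK-phase-coded baseband LFM waveform, recovers the QPSK symbols by multiplying with the conjugate chirp, and locates a delayed echo with a matched filter. All parameters (sample rate, bandwidth, symbols per chirp, target delay) are arbitrary assumptions for illustration and are not the experimental values.

```python
import numpy as np

fs = 2e9          # sample rate (assumed)
T = 10e-6         # LFM pulse duration (assumed)
B = 1e9           # LFM bandwidth (assumed)
t = np.arange(int(fs * T)) / fs
k = B / T                                       # chirp rate
lfm = np.exp(1j * np.pi * k * t**2)             # baseband LFM chirp

rng = np.random.default_rng(0)
bits = rng.integers(0, 4, size=t.size // 100)           # one QPSK symbol per 100 samples
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))      # constant-envelope QPSK phases
jrc = lfm * np.repeat(qpsk, 100)                        # joint radar-communication waveform

# Communication receiver: de-chirp (multiply by the conjugate LFM) to expose the QPSK phases.
dechirped = jrc * np.conj(lfm)
symbols = dechirped.reshape(-1, 100).mean(axis=1)
recovered = (np.round((np.angle(symbols) - np.pi / 4) / (np.pi / 2)) % 4).astype(int)
print("symbol errors:", np.count_nonzero(recovered != bits))

# Radar receiver: matched filter (pulse compression) on a delayed echo.
delay = 3333                                            # echo delay in samples (assumed)
echo = np.concatenate([np.zeros(delay), jrc])[: t.size]
compressed = np.abs(np.correlate(echo, jrc, mode="full"))
print("estimated delay (samples):", compressed.argmax() - (t.size - 1))
```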

    Apr. 10, 2024
  • Vol. 44 Issue 7 0706001 (2024)
  • Xiaoxue Gong, Wenling Xiao, Qihan Zhang, Tiantian Zhang, Xing Yin, and Lei Guo

Objective
Maximizing the transmission capacity of individual wavelength channels is necessary to meet the increasing capacity and distance requirements of metro optical networks. Orthogonal frequency division multiplexing (OFDM) technology can tolerate a certain amount of chromatic dispersion when signals are loaded onto each subcarrier, thus maximizing the transmission capacity within the limited bandwidth of optical fiber transmission. In addition, intensity modulation-direct detection (IM-DD) is currently the most widely used method in metro optical network access layers. However, it is severely affected by fiber chromatic dispersion and cannot meet the needs of long-distance transmission in other layers of metro optical networks. Therefore, the IM-DD OFDM system combining the two technologies has received increasing attention. However, as the capacity and distance requirements of next-generation metro optical networks increase further, the dispersion problem will exceed the tolerable limit of OFDM, and the impact of nonlinear effects will become more obvious, causing a serious decline in system performance. Digital back-propagation (DBP) and optical phase conjugation (OPC) technologies are commonly used to compensate for chromatic dispersion and nonlinear effects simultaneously. However, DBP requires solving the inverse nonlinear Schrödinger equation of the fiber channel, which has a high computational cost. When OPC is used and the two fiber sections have the same length, the even-order chromatic dispersion and the pulse broadening caused by nonlinear effects accumulated in the first section of fiber will, in theory, be completely compensated in the second section of fiber. However, traditional OPC schemes based on single-pump degenerate four-wave mixing (DFWM) introduce a signal wavelength shift at the phase conjugator, which changes the group velocity dispersion parameters in the second section of the fiber link. As a result, the OPC needs to be slightly shifted from the midpoint of the fiber link to achieve complete signal impairment compensation. There is also a polarization sensitivity problem that reduces the efficiency of four-wave mixing (FWM), thus affecting the compensation performance of the OPC wave in the system.

Methods
We propose a wavelength-shift-free OPC compensation scheme based on orthogonal-polarization-pumped non-degenerate four-wave mixing (NFWM) for IM-DD OFDM optical communication systems. It simultaneously compensates for chromatic dispersion and suppresses the impact of nonlinear effects. First, we theoretically derive the principle of generating an OPC wave using orthogonal-polarization-pumped NFWM in a highly nonlinear fiber (HNLF). Based on this principle, we design a wavelength-shift-free OPC implementation method to obtain an OPC wave with the same wavelength as the original signal in the orthogonal polarization state. Then, the factors that affect the power of the generated OPC wave are analyzed in detail. Finally, according to the optimized parameter settings, a simulation verification is performed.

Results and Discussions
The pump optical power, the nonlinear coefficient, and the length of the HNLF play a key role in the performance of IM-DD OFDM systems based on orthogonal-polarization-pumped NFWM for generating the OPC wave. First, the impact of pump optical power is analyzed. Fig. 3 shows how the bit error rate (BER) varies with the signal optical power injected into the OPC at different pump optical power values.
It can be seen that a larger pump power causes a sudden increase in the BER as the signal optical power continues to increase. The main reason is that the increase in pump power leads to large amplified spontaneous emission noise within the bandwidth of the generated OPC wave. This noise cannot be filtered out by an optical filter and degrades the compensation effectiveness. Next, the impact of the nonlinear coefficient and length of the HNLF on the system performance is analyzed. As shown in Fig. 4, as the nonlinear coefficient increases, the BER is lower when the HNLF is shorter, whereas the performance degrades as the length of the HNLF increases. Finally, we compare the performance of the traditional OPC scheme without calculating the midpoint shift value, the traditional OPC scheme with the midpoint shift value calculated, and our wavelength-shift-free OPC scheme. The BER curves versus the received optical power (ROP) are shown in Fig. 5. It can be seen that our proposed system can reach the 7% hard-decision forward error correction (HD-FEC) threshold at a rate of 114.375 Gbit/s through a standard single-mode fiber link with a length of 240 km, and the constellation points are relatively clear with few noise points.

Conclusions
We theoretically analyze and verify the feasibility and effectiveness of wavelength-shift-free OPC compensation for IM-DD OFDM optical communication systems based on orthogonal-polarization-pumped NFWM. To achieve better system performance, we study the various parameters that affect it. A performance comparison between the proposed scheme and the traditional OPC scheme shows that the system based on wavelength-shift-free OPC transmission reaches the 7% HD-FEC BER threshold at an ROP of -10 dBm, while the system based on the traditional OPC scheme cannot reach the decision threshold even after the midpoint shift value is calculated under this transmission condition. Our scheme can provide a theoretical basis for the design of high-speed long-distance IM-DD OFDM optical communication systems.
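The core idea that an ideal mid-span phase conjugator undoes accumulated even-order dispersion can be illustrated in a few lines of NumPy. The sketch below is purely pedagogical: it models linear dispersion of a single Gaussian pulse with an ideal, wavelength-shift-free conjugation at the midpoint (no nonlinearity, no OFDM), using an assumed GVD value and span length.

```python
import numpy as np

beta2 = -21.7e-27        # s^2/m, typical SMF GVD near 1550 nm (assumed)
L = 120e3                # one span length in m (assumed; two spans = 240 km total)

N, dt = 4096, 1e-12                           # time grid
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(N, dt)         # angular frequency grid
pulse = np.exp(-t**2 / (2 * (10e-12) ** 2))   # 10 ps Gaussian test pulse

def propagate(field, length):
    """Linear dispersive propagation: multiply by exp(i*beta2/2*w^2*L) in frequency."""
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * beta2 / 2 * w**2 * length))

def rms_width(field):
    p = np.abs(field) ** 2
    t0 = np.sum(t * p) / np.sum(p)
    return np.sqrt(np.sum((t - t0) ** 2 * p) / np.sum(p))

after_span1 = propagate(pulse, L)
after_opc = np.conj(after_span1)              # ideal wavelength-shift-free phase conjugation
after_span2 = propagate(after_opc, L)

print(f"input width      : {rms_width(pulse)*1e12:.2f} ps")
print(f"after span 1     : {rms_width(after_span1)*1e12:.2f} ps")
print(f"after OPC+span 2 : {rms_width(after_span2)*1e12:.2f} ps")
```

Because the dispersion phase exp(i*beta2/2*w^2*L) is even in frequency, conjugating the field at the midpoint reverses it exactly in the second span, which is why the pulse width is restored at the output.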

    Apr. 10, 2024
  • Vol. 44 Issue 7 0706002 (2024)
  • Yong Chen, Zhimin Yao, Huanlin Liu, Junpeng Liao, Li Xu, and Yanqing Feng

Objective
The cardiovascular health status of the human body can be reflected through pulse waves. Important physiological parameters such as heart rate, blood pressure, and the degree of vascular sclerosis can be obtained through the analysis of these waves. The sensor predominantly used for pulse measurement is the photoelectric sensor, which is capable of detecting pulses at various measurement positions and is thus extensively used in wearable sports equipment for heart rate detection. However, when pulse waves are measured with photoelectric sensors, there are often various noise interferences such as motion artifacts, power-line interference, and respiratory effects. Moreover, this measurement method is primarily invasive, which can make people uncomfortable. Therefore, it is necessary to select appropriate sensors to avoid discomfort to the human body during the measurement process and to denoise the collected signals.

Methods
We designed a pulse wave signal acquisition platform based on fiber Bragg grating (FBG) sensors. The platform was composed of FBG sensors embedded in nylon wristbands. Initially, the FBG wristband was secured at the radial artery of the left hand to gather pulse wave signals for demodulation. The collected pulse wave signals were subject to baseline drift; hence, empirical mode decomposition (EMD) combined with cubic spline interpolation was used for detrending prior to denoising. Subsequently, the amplitude of the Gaussian white noise added in complementary ensemble EMD (CEEMD) was optimized using the particle swarm optimization (PSO) algorithm. The CEEMD algorithm decomposed the pulse wave signal into a series of intrinsic mode function (IMF) components. An improved wavelet threshold function was then applied to process these IMF components. The correlation coefficient between each IMF component and the original pulse wave signal was calculated, and this coefficient was used to determine the effectiveness of each component. Finally, all effective signals were reconstructed to obtain a smooth pulse wave signal.

Results and Discussions
To validate the performance of the proposed method, simulation experiments are conducted using three comparative algorithms. The denoising performance is evaluated using the signal-to-noise ratio (SNR) and root-mean-square error (RMSE). Gaussian white noise with an SNR ranging from 5 to 25 dB is added to the simulation signal. The denoising performance is also verified on actually collected pulse wave signals. The simulation results (Table 1 and Table 2) show that even when 5 dB noise is added, the SNR after denoising can still reach 15.785 dB, and the RMSE can be reduced to 1.251. When 25 dB noise is added, the SNR after denoising is 31.959 dB, and the RMSE is 0.215. Even when the SNR is low, compared with other methods, the algorithm proposed in this study performs better on these two evaluation indicators and has better denoising performance. The results of determining the amplitude of the Gaussian white noise (Fig. 4) intuitively show that when the amplitude of the Gaussian white noise added in CEEMD is 0.35, the average mutual information of the IMF components is the lowest, indicating that the denoising effect is the best. The actual experimental results are shown in Fig. 9. The signal obtained after denoising by the proposed algorithm is smoother; the amplitude is not distorted, and spikes and high-frequency noise in the signal are effectively removed.
This is because the PSO algorithm optimizes the amplitude of the white noise added in CEEMD, overcoming problems such as modal aliasing, endpoint effects, and new harmonic components introduced by inappropriate Gaussian white noise in the CEEMD decomposition process. Using correlation coefficients to select valid and invalid signals successfully removes most invalid signals (Table 5). In general, the proposed algorithm removes noise from the signals better than the other algorithms.

Conclusions
We propose a method for collecting pulse wave signals using FBG sensors. Considering the various noise interferences in the pulse wave signal, a joint PSO-CEEMD-IWT denoising algorithm is proposed. Different amplitudes of white noise are added to both the simulation signal and the actual signal. We determine 0.35 as the optimal amplitude of the white noise added in CEEMD, which further suppresses the modal aliasing phenomenon compared with an amplitude selected based on experience. The average mutual information obtained by the proposed method is lower than that obtained by selecting the white noise amplitude according to experience. The results show that the SNR, RMSE, and other indicators obtained by the proposed algorithm are the best; there is no waveform or amplitude distortion, and the denoised signal is smoother, which shows that the proposed pulse wave denoising method performs better. The denoised signal restores the pulse wave more faithfully, which is of great significance for later combination with feature extraction and the objectification of pulse diagnosis. We also provide a feasible way to obtain high-quality pulse waves.
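The correlation-coefficient selection of IMF components described above can be sketched as follows. This is our own simplified illustration, not the paper's PSO-CEEMD-IWT code: it assumes the IMFs are already available (a CEEMD implementation would normally supply them) and applies a plain soft threshold, rather than the improved wavelet threshold function, to the low-correlation components. The correlation threshold of 0.3 and the toy stand-in "IMFs" are assumptions.

```python
import numpy as np

def denoise_from_imfs(signal, imfs, corr_thresh=0.3):
    """Keep IMFs well correlated with the raw signal; soft-threshold the rest; reconstruct."""
    kept = []
    for imf in imfs:
        r = np.corrcoef(signal, imf)[0, 1]          # correlation with the raw signal
        if abs(r) >= corr_thresh:                    # "effective" component: keep as-is
            kept.append(imf)
        else:                                        # mostly noise: universal soft threshold
            lam = np.median(np.abs(imf)) / 0.6745 * np.sqrt(2 * np.log(imf.size))
            kept.append(np.sign(imf) * np.maximum(np.abs(imf) - lam, 0.0))
    return np.sum(kept, axis=0)

# Toy usage: a pulse-like component plus broadband noise standing in for real IMFs.
t = np.linspace(0, 4, 2000)
clean = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
noise = 0.2 * np.random.default_rng(1).standard_normal(t.size)
signal = clean + noise
denoised = denoise_from_imfs(signal, [clean, noise])
print("residual RMS:", np.sqrt(np.mean((denoised - clean) ** 2)))
```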

    Apr. 10, 2024
  • Vol. 44 Issue 7 0707001 (2024)
  • Shichang Ju, Junjie Cai, and Wenlin Gong

Objective
The properties of the measurement matrix have a great influence on the image reconstruction quality of single-pixel compressive imaging, and optimizing the measurement matrix is a core and crucial technology for single-pixel imaging. However, current optimization methods for measurement matrices often face the problems of local optimization and limited applicability. Additionally, existing analytical theories and methods based on the measurement matrix often fail to explain or predict the image reconstruction quality in many scenarios, and the quantitative relationship among measurement matrix characteristics, target properties, and image reconstruction results is unclear. For example, the reconstruction results vary obviously among different kinds of Hadamard encoding measurement matrices. Therefore, after optical imaging systems are combined with compressive sensing theory, constructing a characteristic function that can predict image reconstruction quality has become an urgent issue for single-pixel compressive imaging. We propose a characteristic function of high-quality image reconstruction for single-pixel compressive imaging to predict the imaging quality of targets with different sparsity, which is helpful for the optimal design of measurement matrices in single-pixel imaging systems.

Methods
Under the same sampling rate, the image reconstruction quality differs significantly among various kinds of Hadamard encoding measurement matrices, which cannot be explained by existing compressive sensing theories. By combining compressive sensing theory with the characteristic parameters described in Ref. [23], the Gram matrix is obtained from the measurement matrix, and the relationship between the Gram matrix and the system's point spread function is then clarified. Next, according to the point spread function and compressive sensing theory, four characteristic parameters are proposed, including the peak value of the strongest sidelobe, the overlapped sidelobe peak value, the spatial distance, and the spectral cosine similarity. Based on these parameters, an image reconstruction characteristic function F(η) for high-quality single-pixel compressive imaging is constructed. Meanwhile, by calculating the F(η) values of the random Hadamard encoding matrix at different sampling rates η and conducting data fitting, the relationship between the target's sparsity and the characteristic function is established. Furthermore, by changing the target's sparsity, the sampling rate, and the type of encoding measurement matrix, the validity of the proposed characteristic function is verified by numerical simulations and experiments.

Results and Discussions
To demonstrate the validity of the proposed characteristic function, we conduct both numerical simulations and experiments based on the scheme in Fig. 1. Firstly, when the sampling rate is fixed at η=0.6, the sparsity thresholds for the Natural, CC, RD, Random, and MP Hadamard encoding matrices are obtained, and random grayscale point targets can be stably reconstructed at their respective sparsity thresholds Sε [Fig. 7(a)]. However, the sparsity threshold Sε for the Random Hadamard encoding matrix is much larger than that of the other four Hadamard encoding matrices. Moreover, for S>Sε, the Natural, CC, RD, and MP Hadamard encoding matrices cannot recover the image of the slit-shaped target [Figs. 7(b) and 7(c)]. Secondly, according to Fig.
6, for Sε=0.25000, the corresponding sampling rates η for the five kinds of Hadamard encoding matrices above are 0.89100, 0.88600, 0.86600, 0.72800, and 0.89100, respectively. Numerical simulations and experimental results demonstrate that random grayscale point targets can be perfectly reconstructed by all five kinds of Hadamard encoding matrices when the target's sparsity is S=0.25000 (Fig. 8). Additionally, when the sampling rate is η=0.728, only the random-sequence Hadamard encoding matrix can accurately restore the radial target with sparsity S=0.25000. Finally, the universality of the proposed characteristic function is further verified with Bernoulli random encoding matrices, Gaussian random encoding matrices, and Gaussian orthogonal encoding matrices in different representation bases (Tables 2 and 3, and Fig. 9). Meanwhile, Fig. 9 demonstrates that the relationship described by Equation (8) is valid for other common random encoding matrices, which means that the characteristic function can be employed as the objective function when optimizing measurement matrices for single-pixel compressive imaging systems.

Conclusions
Combined with compressive sensing theory, four characteristic parameters based on the point spread function are proposed, including the peak value of the strongest sidelobe, the overlapped sidelobe peak value, the spatial distance, and the spectral cosine similarity. A high-quality image reconstruction characteristic function for single-pixel compressive imaging is constructed, and its validity is verified by numerical simulations and experiments. Both numerical simulation and experimental results demonstrate that the proposed characteristic function can not only explain the differences in single-pixel compressive imaging quality for Hadamard coding matrices with different sorting methods but also predict the image reconstruction results of a given measurement matrix. Additionally, the relationship between the proposed characteristic function and the target sparsity required for high-quality image reconstruction is established. The characteristic function can serve as a criterion in the optimization of measurement matrices for single-pixel imaging.
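The route from a measurement matrix to the Gram matrix and a sidelobe-type characteristic parameter can be sketched as follows. This is our reading of the general idea, not the paper's exact definitions of the four parameters or of F(η); the matrix size, sampling rate, and the scalar summary printed at the end are assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

n = 64                                   # number of pixels (assumed)
eta = 0.6                                # sampling rate (assumed)
m = int(eta * n)

H = hadamard(n)
rng = np.random.default_rng(0)
rows = rng.choice(n, size=m, replace=False)     # "random" ordering of Hadamard rows
A = H[rows].astype(float)                       # partial Hadamard measurement matrix

G = A.T @ A / m                          # Gram matrix, related to the system PSF
psf = np.abs(G)
mainlobe = np.diag(psf).copy()
np.fill_diagonal(psf, 0.0)
strongest_sidelobe = psf.max(axis=1)     # peak of the strongest sidelobe per pixel

# One simple scalar summary: the worst-case sidelobe-to-mainlobe ratio, i.e. one of
# the ingredients a characteristic function F(eta) could combine.
print("worst sidelobe/mainlobe ratio:", float((strongest_sidelobe / mainlobe).max()))
```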

    Apr. 10, 2024
  • Vol. 44 Issue 7 0711001 (2024)
  • Ying Wang, Yubo Ni, Zhaozong Meng, Nan Gao, Tong Guo, Zeqing Yang, Guofeng Zhang, Wei Yin, Hongwei Zhao, and Zonghua Zhang

Objective
Fringe projection profilometry is widely employed to reconstruct the three-dimensional (3D) shape of an object surface. However, when this method is used to measure objects with colored reflective surfaces, the image captured by the camera contains oversaturated pixels due to ambient lighting and reflections from the projected fringes, which makes the surface of the reflective area impossible to measure. This problem is mainly due to the unevenly varying reflectivity of the surface, which is affected by both the roughness and the surface color. To eliminate the interference of the object's surface color and accomplish 3D shape measurement of colored highly reflective surfaces with varying reflectivity, we propose a method that adaptively generates complementary-color sinusoidal fringes. Exploiting the different absorption of colors by the surface to be measured, lighting of the complementary color is projected onto the highly reflective area to reduce the surface reflectivity of that region and suppress overexposure.

Methods
We put forward a method to measure the 3D shape of colored objects with high reflectivity based on adaptively encoded complementary-color fringes. Firstly, the highly reflective region of the object to be measured is located. The image of the object surface is captured by the camera while the projector projects the strongest white light, and the coordinates of the oversaturated pixels are extracted by an inverse projection technique. The location of the highly reflective region in the coordinate system of the projected image is obtained via the matching relationship between the projector and the camera. Then, the optimal color for projection onto the highly reflective region is calculated from the color image of the object surface captured by the camera. The projection color obtained in the previous step is employed to generate an image that is projected onto the highly reflective region of the measured surface. The saturation value of the adopted projection color is adjusted according to the magnitude of the adjacent light intensity values at either end of the encoded color boundary until the adjacent light intensity values are less than 20. Finally, sinusoidal fringes are encoded on the V component of the HSV color space, and adaptive complementary-color sinusoidal fringe patterns are generated and projected onto the object surface to be measured. The complete 3D shape of the measured surface is recovered by solving the unwrapped phase.

Results and Discussions
The proposed method employs adaptively encoded complementary-color fringes. It reduces the reflectivity of the highly reflective region on the surface, overcomes the loss of unwrapped phase that occurs with traditional fringe projection profilometry, and finally obtains the complete 3D shape of the yellow ceramic cup (Fig. 5). Additionally, the phase recovery results for the yellow ceramic cup obtained with traditional gray fringes and with the proposed complementary-color-coded fringes are compared and analyzed under different exposure times. The results show that when the exposure time is greater than 40 ms, the phase recovery completeness of region D is maintained at 100% (Fig. 6) by applying the proposed method.
Thus, the complete 3D shape of the surface of a colored highly reflective object is measured by projecting only one set of adaptively encoded complementary-color sinusoidal fringe patterns. Meanwhile, the mean error of the proposed method is 0.5281 mm, smaller than that of the traditional multiple-exposure method. In conclusion, this method is not only more efficient than the traditional multiple-exposure method in the measurement process but also improves the measurement accuracy.

Conclusions
To address the challenges in measuring the 3D shape of colored highly reflective objects, we propose a novel fringe projection profilometry method based on adaptive color encoding. The proposed method encodes and projects fringe structured light complementary to the measured surface color into the highly reflective region in the HSV color space based on the theory of photometric complementarity. As a result, it reduces the surface reflectivity of the highly reflective region and achieves 3D shape measurement of colored highly reflective objects. The experimental results show that this method reduces the number of projected images during the measurement compared with traditional multiple-exposure methods. Only one set of adaptively encoded complementary-color sinusoidal fringe patterns needs to be projected to obtain the complete 3D shape of the surface of a colored highly reflective object. The proposed method shows clear advantages in measurement efficiency and accuracy.
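A minimal sketch of the color-encoding step is given below: it picks the complementary hue of an assumed surface color in HSV and encodes a sinusoidal fringe on the V channel. The surface color, image size, fringe count, and saturation value are all assumed for illustration; the adaptive saturation adjustment and the masking of the highly reflective region used in the actual method are omitted.

```python
import numpy as np
import colorsys

surface_rgb = (0.9, 0.8, 0.2)                      # e.g. a yellow ceramic surface (assumed)
h, s, v = colorsys.rgb_to_hsv(*surface_rgb)
h_comp = (h + 0.5) % 1.0                           # complementary hue (180 deg shift)

H, W, periods = 480, 640, 16                       # fringe image size / fringe count (assumed)
x = np.arange(W)
v_fringe = 0.5 + 0.5 * np.cos(2 * np.pi * periods * x / W)   # sinusoid on V in [0, 1]

# Build the HSV fringe image: constant complementary hue, fixed saturation,
# sinusoidal value channel; convert pixel by pixel to RGB for projection.
sat = 0.8                                          # saturation, lowered if boundaries clip (assumed)
hsv = np.zeros((H, W, 3))
hsv[..., 0], hsv[..., 1], hsv[..., 2] = h_comp, sat, v_fringe[None, :]
rgb = np.apply_along_axis(lambda p: colorsys.hsv_to_rgb(*p), 2, hsv)
print("fringe image shape:", rgb.shape, "| complementary hue:", round(h_comp * 360), "deg")
```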

    Apr. 10, 2024
  • Vol. 44 Issue 7 0712001 (2024)
  • Xiangyu Zhang, Ailing Tian, Zhiqiang Liu, Hongjun Wang, Bingcai Liu, and Xueliang Zhu

Objective
Precision optical components are widely employed in various optical systems, and the surface shape quality of optical components directly affects the performance of optical devices. Therefore, surface shape detection of optical components is of great significance. Interferometry is widely recognized as the most effective method for surface shape detection, among which phase-shifting interferometry has higher detection accuracy. However, during the continuous collection of multiple interferograms with phase differences, it is constrained by the performance of the phase-shifting components and easily affected by objective factors such as mechanical vibration and air disturbance in the environment, which decreases the detection accuracy. Therefore, it is not suitable for on-site production testing. In recent years, researchers have proposed methods that combine carrier interferometry with Fourier analysis technology to achieve phase extraction from a single interferogram. However, these generally still have shortcomings such as large edge errors in the tilt direction, fringe stacking, and low recovery accuracy. To solve the problem of low accuracy in the phase extraction of single interferograms in the above phase-solving methods, we propose a new single-interferogram phase extraction method based on light intensity iteration. Meanwhile, simulations and experimental research are conducted, and the stability of the algorithm is analyzed.

Methods
We adopt a combination of simulation and experimental methods, analyze the principle of the light intensity iteration method, and employ MATLAB to write the algorithm programs and conduct simulation verification. The feasibility, stability, and noise resistance of the algorithm are explored via simulations to ensure the algorithm performance. By conducting 100 sets of simulations, the final phase residuals are compared, and the convergence conditions suitable for solving single interference fringe patterns and the solution interval with the best measurement performance are obtained. To assess the innovation and optimization ability of the algorithm, we conduct a comparison with the Fourier transform method. Finally, multiple experiments are carried out using a ZYGO-Verifire PE phase-shifting interferometer to measure optical components. Multiple sets of experiments are conducted in an experimental environment with a temperature of 23 ℃ and an air humidity of 75.3%. Meanwhile, a single interference fringe pattern is collected and the phase is solved using the proposed algorithm. The results are compared, and the effectiveness of the algorithm is evaluated by the residual PV and RMS values to achieve phase extraction from the single interference fringe pattern.

Results and Discussions
Our algorithm improves detection accuracy while remaining stable. The Bernsen algorithm is adopted to binarize the original interferogram and further obtain a stepped predicted phase (Fig. 3), which provides initial information for the subsequent light intensity iterations. Using binarization to predict the phase provides a new approach for iterative methods. The feasibility and anti-noise ability of this method are demonstrated by comparing it with the Fourier transform method (Fig. 4). Compared with the Fourier transform method, the proposed method has higher solving accuracy and faster solving speed. Meanwhile, its anti-noise ability is not significantly different from that of the Fourier method; both show good noise resistance.
By conducting hundreds of simulation experiments, convergence conditions that do not affect computational efficiency and avoid excessive iterations are obtained. The study of the effect of the fringe number on the accuracy of the algorithm shows that the algorithm residual generally first decreases and then increases as the number of fringes rises (Fig. 8). Data comparison shows that the algorithm has the highest solution accuracy when processing a single interference fringe pattern with 4 to 5 fringes.

Conclusions
We propose a phase solution method based on light intensity iteration. Firstly, the original interferogram is binarized, and the initial phase is obtained by phase unwrapping. Then, the background light and modulated light are preliminarily estimated from the interference intensity expression by the least-squares method. The measured phase is calculated using a variant of the interference intensity expression and compared with the initial phase as a convergence check. If the accuracy requirement is not met, the initial phase is replaced with the newly calculated phase, the background light and modulated light are updated, and the phase solution process is repeated. Through light intensity iteration, the phase is extracted from a single interferogram. Meanwhile, the solution accuracy, noise resistance, and algorithm stability are simulated and analyzed. Experimental measurements are conducted on a 100 mm planar element, and the results show that the phase distribution obtained by the proposed method is consistent with the phase obtained by the four-step phase-shifting algorithm of the ZYGO-Verifire PE phase-shifting interferometer. Compared with the interferometer results, the residual PV and RMS values obtained by the light intensity iteration method are 2.49 nm and 0.35 nm, respectively. This indicates that the proposed method, featuring high stability and efficiency, can extract the phase distribution from a single fringe pattern and can meet the testing needs of the production site environment.
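A rough one-dimensional illustration of an intensity-iteration loop of this kind is given below. It is our own simplified sketch under stated assumptions (a known fringe model I = A + B cos φ, a crude linear initial phase, global least-squares for A and B, and the arccos branch sign taken from the current estimate), not the paper's algorithm or its Bernsen-based initialization.

```python
import numpy as np

x = np.linspace(0, 1, 2000)
phi_true = 2 * np.pi * 4.5 * x + 0.3 * np.sin(2 * np.pi * x)   # ~4-5 fringes + mild aberration
I = 120.0 + 100.0 * np.cos(phi_true) + np.random.default_rng(0).normal(0, 1.0, x.size)

phi = 2 * np.pi * 4.5 * x              # crude initial phase (e.g. from binarized fringes)
for _ in range(50):
    M = np.column_stack([np.ones_like(x), np.cos(phi)])       # regressors [1, cos(phi)]
    A_est, B_est = np.linalg.lstsq(M, I, rcond=None)[0]       # least-squares background/modulation
    c = np.clip((I - A_est) / B_est, -1.0, 1.0)
    wrapped = np.angle(np.exp(1j * phi))                       # current wrapped phase
    phi_new = np.unwrap(np.sign(wrapped) * np.arccos(c))      # branch-corrected phase update
    delta, phi = np.max(np.abs(phi_new - phi)), phi_new
    if delta < 1e-4:                                           # convergence check
        break

err = (phi - phi_true) - np.mean(phi - phi_true)               # remove piston before comparison
print("residual RMS phase error (rad):", float(np.std(err)))
```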

    Apr. 10, 2024
  • Vol. 44 Issue 7 0712002 (2024)
  • Youpeng Su, Jianhua Chang, Tianyi Lu, Zhiyuan Cui, Qian Tu, and Yunhan Zhu

Objective
Ultrafast pulse fiber lasers are extensively employed in fields such as fiber-optic communication, medicine, and precision material processing due to their compact structure and high beam quality. Mode-locking is an effective technique for achieving ultrashort pulses. Actively mode-locked lasers introduce active modulation devices into the laser cavity and use external modulation signals to change the optical signal characteristics, achieving mode-locked pulse output. They feature flexibility, controllability, and stable output pulses. In recent years, graphene has been extensively studied due to its excellent electro-optical properties. Research has shown that an external electric field can alter the Fermi level of graphene to achieve light absorption modulation. Therefore, graphene-based electro-optic modulators have the potential to realize actively mode-locked lasers. We present the construction of an efficient and high-speed graphene all-fiber mode-locking device, which achieves high-speed adjustment of graphene's optical properties with low modulation power consumption and high modulation efficiency.

Methods
The device is composed of graphene, single-mode optical fiber, polydimethylsiloxane (PDMS), and a silver (Ag) film, forming a graphene capacitive device (GCD) structure. Glass is selected as the substrate, and magnetron sputtering is used to deposit 50 nm of silver on the glass as the bottom electrode; silver has good conductivity, which helps reduce the device resistance. The insulation layer is a spin-coated 200 nm PDMS layer, and the thin insulation layer can effectively reduce the capacitance of the device. Meanwhile, hydrofluoric acid (HF) with a concentration of 20% is used to etch standard single-mode optical fibers down to 15 µm over an etched length of 5 mm, and the etched single-mode fiber is transferred onto the PDMS. The device is placed in a UV ozone cleaning machine (multi-frequency, CCI UV250-MC) for 10 minutes to improve the hydrophilicity of the PDMS insulation layer. A graphene dispersion with a concentration of 0.1 mg/mL is selected (Nanjing Xianfeng Nanomaterial Technology Co., Ltd., XFZ20 dispersion). The graphene solution is dropped onto the optical fiber and dried, and a silver electrode layer is then inkjet-printed on the device using a microelectronic printer (Power Supply Technology Co., Ltd., MP1100). Finally, the prepared device is connected to the circuit board using silver wire. The GCD is spliced between the isolator and the polarization controller, and an optical spectrum analyzer, a digital oscilloscope, a power meter, and an autocorrelator are used to record the mode-locked pulse signal, including the spectrum, pulse width, repetition rate, and output power.

Results and Discussions
The GCD is connected to the fiber laser system, and a pump power of 80 mW is employed for active mode-locking experiments. At a pump power of 80 mW, as the AC signal amplitude increases from 0 V to 5 V, the average output power decreases from 1.328 mW to 1.130 mW. After calculation, the insertion loss of the device increases from 1.54 dB to 2.46 dB (Fig. 4). Subsequently, a periodic AC signal (12.2 MHz) consistent with the resonant frequency of the laser cavity is applied to the graphene device. Under low voltage amplitude, the control of the graphene Fermi level is limited, resulting in a limited dynamic range of the absorption of the graphene device.
Therefore, unstable mode-locked pulse signals are observed. When the voltage amplitude increases to 5 V, the most stable mode-locked pulse signal is observed, achieving active mode locking of the laser. The narrowest pulse width of the mode-locked signal is 298 ps (Fig. 5). Meanwhile, when the modulation frequency is increased to twice the resonant frequency of the laser cavity (24.4 MHz), the optical signal inside the cavity undergoes frequency-doubled oscillation under the control of the graphene device, leading to harmonic mode-locked operation. The mode-locked pulse signal is slightly unstable, possibly because the response speed of graphene is insufficient to support the carrier transit time at higher modulation rates, resulting in changes in the modulation depth of the device. This also explains the wider pulse width of 315 ps corresponding to the frequency of 24.4 MHz, which realizes active repetition frequency control in mode-locked lasers.

Conclusions
We introduce an actively mode-locked fiber laser based on a graphene all-fiber structure. It combines laterally etched single-mode optical fibers with a graphene capacitor structure and utilizes the evanescent-wave coupling of the fiber to interact with the graphene, achieving efficient modulation. Meanwhile, by utilizing the inherently high carrier mobility of graphene, high-speed adjustment of optical performance can be achieved with low modulation power consumption. The experimental results show that under a modulation signal of ±5 V, the fiber laser obtains controllable repetition-rate mode-locked pulses at a fixed pump power of 80 mW, with frequencies of 12.2 MHz and 24.4 MHz, respectively. The corresponding mode-locked pulse widths are 298 ps and 315 ps, respectively, and the laser center wavelength is 1558 nm. Meanwhile, by changing the amplitude of the AC signal (0-5 V), the average output power of the laser can be adjusted within the range from 1.328 mW to 1.130 mW. The research results provide a reference for achieving low-power, integrated actively mode-locked lasers and have practical significance for developing efficient and integrated actively mode-locked laser systems.

    Apr. 10, 2024
  • Vol. 44 Issue 7 0714001 (2024)
  • Jianming Chen, Dingjian Li, Xiangjin Zeng, Zhenbo Ren, Jianglei Di, and Yuwen Qin

Objective
RGB and thermal infrared (RGBT) tracking technology fully leverages the complementary advantages of different optical modalities, providing effective solutions for target tracking challenges in complex environments. However, the performance of many tracking algorithms is constrained by the neglect of information exchange between modalities. Simultaneously, because the tracking template remains fixed, existing tracking methods based on Siamese networks face limitations in adapting to variations in target appearance, resulting in tracking drift. Therefore, enhancing the performance of target trackers in complex environments remains challenging.

Methods
The proposed algorithm adopts the Siamese network tracker as its foundational framework and introduces a feature interaction module to enhance inter-modal information exchange by reconstructing the information proportions of different modalities. Based on the anchor-free concept, a prediction network is directly constructed to perform classification and regression of the target bounding box at each position in the search region. To address the mismatch between the target and the template during tracking with a Siamese network tracker, we propose a template update strategy that dynamically updates the tracking template using the predicted results from the previous frame.

Results and Discussions
Qualitative and quantitative experiments are carried out on SiamCTU and advanced RGBT target tracking models, and ablation experiments are analyzed. Meanwhile, comparative experiments are conducted by evaluating the proposed tracker against state-of-the-art trackers on three benchmark datasets (GTOT, RGBT234, and LasHeR) to assess its tracking performance. Figs. 6, 7, and 9 respectively display the quantitative comparison results between SiamCTU and advanced RGBT tracking algorithms on the three benchmark datasets. Compared with advanced RGBT target tracking algorithms, the experimental results on the three benchmark datasets demonstrate the outstanding tracking performance of SiamCTU, fully exhibiting the effectiveness of the proposed method. Specifically, on the GTOT and LasHeR datasets, the proposed tracking algorithm secures top rankings in both PR and SR. Fig. 8 and Table 1 respectively present the experimental results based on challenge attributes for the tracking algorithm on the GTOT and RGBT234 datasets. The experimental results show that SiamCTU exhibits excellent tracking performance under various challenging attributes, suggesting that the proposed tracker is effective in handling complex target tracking scenarios. To provide a more intuitive demonstration of the tracking performance, we visualize the tracking results in Fig. 10. In the LightOcc sequence [Fig. 10(a)], the proposed tracking algorithm, using the template update strategy, maintains continuous and stable tracking of the target even under challenges such as occlusion and low illumination. For scenarios involving significant scale variations [Fig. 10(b)], the proposed tracker outperforms the comparative trackers, demonstrating the advantage of constructing a prediction network based on the anchor-free concept. The visual results in Figs. 10(c) and 10(d) reveal that the proposed tracker can leverage the complementary advantages of the RGB and T modalities, reducing interference from similar objects.
Meanwhile, the comparative tracking efficiency analysis on the GTOT dataset (Table 2) indicates that SiamCTU significantly improves tracking accuracy with minimal tracking speed loss. Furthermore, the proposed tracker exhibits both higher speed and higher precision than the advanced MDNet-based tracker. In further ablation experiments (Table 3), the performance of the proposed tracker surpasses that of the baseline tracker, which underscores the substantial contributions of the modules designed in the algorithm; together they enhance the tracker's ability to handle complex tracking scenarios. Specifically, when the feature interaction module is removed, the overall performance of SiamCTU decreases by 3.1% on the more complex RGBT234 dataset. Additionally, the template update parameter is varied to study its influence on tracking performance, and the experimental results (Table 4) indicate that, with an appropriate value of λ as the update parameter, the feature-level template update method can significantly enhance the tracker's performance.

Conclusions
To address target tracking challenges in complex environments, we propose a cross-modal optical information interaction method for RGBT target tracking. The tracking model adopts the Siamese network as its foundational framework and incorporates a feature interaction module. This module enhances inter-modal information exchange by reconstructing the information proportions of different optical modalities, mitigating the effect of complex backgrounds on tracking performance. Subsequently, by handling the relationship between the tracker's initial template and the online template, we introduce a dynamic template updating strategy. This strategy dynamically updates the tracking template using the predicted results, capturing the real-time status of the target and improving the algorithm's robustness. Evaluation results on three benchmark datasets, GTOT, RGBT234, and LasHeR, demonstrate that the proposed method surpasses current advanced RGBT target tracking methods in terms of tracking accuracy. Additionally, it meets real-time tracking requirements and holds potential for broad applications in optical information detection, perception, and recognition of targets in complex environments.
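The feature-level template update controlled by λ can be pictured with the minimal sketch below. The linear blending rule, the feature shapes, and the value λ = 0.1 are assumptions for illustration; the paper's exact update rule may differ.

```python
import numpy as np

def update_template(initial_feat, predicted_feat, lam=0.1):
    """Feature-level template update: keep the initial template dominant while
    injecting the appearance predicted from the previous frame (assumed linear rule)."""
    return (1.0 - lam) * initial_feat + lam * predicted_feat

# Toy usage with 256-channel, 6x6 template features (typical Siamese-tracker shape, assumed).
rng = np.random.default_rng(0)
z0 = rng.standard_normal((256, 6, 6))       # initial template features
z_pred = rng.standard_normal((256, 6, 6))   # features cropped from the previous prediction
z_t = update_template(z0, z_pred, lam=0.1)
print("updated template shape:", z_t.shape)
```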

    Apr. 10, 2024
  • Vol. 44 Issue 7 0715001 (2024)
  • Yantao Xu, Haitao Guo, Xusheng Xiao, Man Li, and Mengmeng Yan

Objective
With the continuous development of infrared optics, the demand for infrared laser transmission in fields such as national defense and security, biomedicine, and advanced manufacturing is becoming increasingly urgent, and infrared energy transmission fibers are therefore receiving increasing attention. Chalcogenide glass, as an excellent infrared material, features a wide transmission range, stable physicochemical properties, and easy fiber drawing, which makes it an ideal material for infrared energy transmission fibers. However, the high optical loss of domestically produced chalcogenide glass fibers currently limits their widespread application. The optical loss of chalcogenide glass fibers mainly originates from the absorption loss of C, H, O, and other impurities, the scattering loss caused by heterogeneous particle impurities and striae, and the scattering loss caused by interface defects between the core and cladding. To suppress the absorption and scattering losses in chalcogenide glasses and obtain ultra-low-loss fibers, gas (chlorine gas)-gas (glass vapor) and solid (aluminum)-liquid (glass melt) chemical reactions are employed to reduce the absorption loss of the fibers. A three-dimensional laser microscopic imaging system is established and used to detect micron- and submicron-sized defects inside the glass and fiber, and the preparation process is optimized accordingly to reduce the scattering loss of the fibers. Laser energy transmission experiments with a fiber laser (wavelength of 2.0 μm) and a dual-wavelength optical parametric oscillator (OPO) laser (wavelengths of 3.8 μm and 4.7 μm) are also carried out.

Methods
High-purity S and As elements are used to prepare the rod (As40S60) and tube (As39S61) glasses. S distilled at 200 ℃ and As sublimed at 350 ℃ are sealed in an ampoule and then melted at 750 ℃ for 12 h to obtain the preform glasses. Hydrogen impurities are then eliminated with high-purity Cl2. Cl2 is introduced into the molten glass at a flow rate of 5 mL/min for 300-600 s. The glass is melted again to allow a reaction between the Cl2 and hydrogen ions. The melt is then distilled under a dynamic vacuum to eliminate the gaseous byproducts of the reaction with Cl2. The third step is to eliminate oxygen impurities with elemental aluminum. Al foils with a mass fraction of 0.3% are introduced into the glass, which is melted at 600 ℃. Oxygen impurities react with the Al foils to form Al2O3, which remains on the surface of the foils, thus yielding high-purity glasses. The optical fiber is prepared by the rod-in-tube method. The core and cladding diameters are 200 μm/250 μm for the multimode fiber and 9 μm/140 μm for the single-mode fiber, respectively. The single-mode fiber maintains single-mode transmission in the 3-5 μm band. The fiber is drawn at about 320 ℃ in a nitrogen-protected environment. The optical fiber loss is measured by the cut-back technique, and the scattering intensity of the chalcogenide glasses and fibers is examined with a highly sensitive InGaAs detector placed perpendicular to the light path (Fig. 4).

Results and Discussions
The Cl2 addition times are 300, 480, and 600 s, and the corresponding samples are denoted C1, C2, and C3, respectively. The absorption spectra of the C1, C2, and C3 samples show that with increasing Cl2, the absorption intensity at 4.1 μm decreases significantly while the absorption intensity at 7.6 μm rises gradually (Fig. 5).
Hydrogen impurities are effectively removed when Cl2 is employed to purify the chalcogenide glasses, which reduces the H—S absorption at 4.1 μm. However, more oxygen impurities are also introduced into the glass owing to the hydrophilicity of Cl2, which enhances the As—O impurity absorption at 7.6 μm. For further elimination of the oxygen impurities, aluminum is introduced into the C3 glass, and the sample is denoted C3A. The absorption intensity at 7.6 μm decreases significantly, and the mass fraction of oxygen impurities is reduced from 1.55% to 0.22% (Fig. 6). There is a linear relationship between the mass fraction of oxygen and the absorption coefficient at 7.6 μm in chalcogenide glasses (Fig. 7). The striae of the glass are compared for three samples quenched from temperatures of 400, 450, and 500 ℃, and the results show that the sample quenched at 450 ℃ has the best uniformity (Fig. 8). The scattering intensities of these three samples also confirm this conclusion. The gray values of the scattering image for the sample quenched at 450 ℃ are more concentrated in the low-gray-level region, which means that the background scattering intensity at 450 ℃ is the lowest (Fig. 9). The fiber attenuation at 4.778 μm is 0.150 dB/m and 0.087 dB/m for the C3 and C3A samples, respectively (Fig. 11). A laser output power of 6.10 W is obtained from the single-mode fiber when the input power is 12.30 W at a wavelength of 2.0 μm; the transmission efficiency is about 50%. An output power of 6.12 W is obtained from the multimode fiber when the input power is 10.20 W at the 3.8 μm and 4.7 μm wavelengths; the transmission efficiency is about 59% (Fig. 13).

Conclusions
The purification technique of chalcogenide glasses is studied. Cl2 is introduced into the chalcogenide glasses to eliminate hydrogen impurities, and the absorption caused by hydrogen impurities decreases with the Cl2 input volume. However, the As—O absorption intensity at 7.6 μm rises gradually, and the absorption coefficient is linearly proportional to the mass fraction of oxygen. The mass fraction of oxygen impurities in the glass is reduced from 1.55% to 0.22% by introducing the reducing agent aluminum. A detection system is set up to examine the defects in the glass using the scattering technique. The glass quenched at 450 ℃ has the fewest defects. A glass fiber with a loss of 0.087 dB/m at 4.778 μm is prepared. An output power of 6.10 W is obtained when the input power is 12.30 W at the 2.0 μm wavelength for the single-mode fiber, and the transmission efficiency is about 50%. Meanwhile, the transmission efficiency is about 59% for the multimode fiber at the 3.8 μm and 4.7 μm wavelengths. Laser damage to the fiber end face, mainly caused by position deviation generated by thermal expansion, restricts the transmission power of the optical fibers. The transmission power of the optical fibers is expected to be further improved by adding a fiber cooling system and reducing energy penetration.
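The cut-back loss measurement and the quoted transmission efficiencies reduce to simple ratios, as the short sketch below shows. The cut-back powers and lengths are hypothetical values chosen only to reproduce a loss of roughly 0.087 dB/m; the efficiency line uses the powers quoted in the text.

```python
import numpy as np

def cutback_loss_db_per_m(p_long_mw, p_short_mw, length_long_m, length_short_m):
    """Cut-back attenuation: 10*log10(P_short/P_long) / (L_long - L_short)."""
    return 10.0 * np.log10(p_short_mw / p_long_mw) / (length_long_m - length_short_m)

# Hypothetical cut-back powers/lengths giving roughly the 0.087 dB/m quoted at 4.778 um.
print("loss:", round(cutback_loss_db_per_m(81.9, 100.0, 12.0, 2.0), 3), "dB/m")

# Transmission efficiency for the 2.0 um single-mode experiment (powers from the text).
print("efficiency:", round(6.10 / 12.30 * 100, 1), "%")
```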

    Apr. 10, 2024
  • Vol. 44 Issue 7 0716001 (2024)
  • Fangfang Ruan, Fangying Tang, Jinhong Wang, Lü Yanfei, Jiawei Li, Xinxin Wang, Yuhui Yan, Liangbi Su, and Lihe Zheng

    The temperature decrease rate in bulk Nd∶YAG is 34 ℃/mm along the radial direction from the central axis of Nd∶YAG to the thermal sink copper. In the case of gradient Nd∶YAG, the temperature decrease rate is around 14 ℃/mm.ObjectiveNd∶YAG with a uniform dopant of Nd3+ can generate a gradient temperature distribution along laser propagation under pumping by high-power semiconductor laser diodes (LDs), which may cause a thermal lens effect and thus reduce laser output power and beam quality. Regulating the gradient doping of Nd3+ in Nd∶YAG has therefore attracted great attention for improving the efficiency and beam quality. The traditional regulation method is to fabricate gradient-doped Nd∶YAG by a unique dual-crucible technology based on the Czochralski method. With the development of room-temperature bonding technology, it is now flexible to obtain designed gradient Nd3+ dopants with specific sample thicknesses in a monolithic structure. We propose a numerical simulation method by establishing heat source equations. The temperature distribution in Nd∶YAG with uniform and gradient dopants of Nd3+ under kilowatt pump power is reported accordingly. We hope that this basic strategy can help design new gradient-doped Nd∶YAG monolithic gain media and clarify the relationship between the temperature distribution and Nd∶YAG with specific dopants along laser propagation.MethodsNd∶YAG is employed for numerical simulation of the temperature distribution along laser propagation under high pump power. The Nd∶YAG aperture is 10 mm×10 mm, cut along the [100] crystallographic axis. In the case of bulk Nd∶YAG with a uniform dopant of Nd3+, the absorption coefficient is set as 5.78 cm-1 with a bulk length of 8 mm to ensure over 99% absorption of the pump light after single-pass propagation. In the case of gradient Nd∶YAG, each segment has 1 mm thickness and a different absorption coefficient. Meanwhile, a quarter geometric model is built to compare the temperature distribution along the central axis of bulk Nd∶YAG and gradient Nd∶YAG along laser propagation. The initial pump power is 1000 W and the pump pulse width is 46 μs, with a repetition frequency of 1 kHz. Flat-top pump light is employed for the temperature distribution calculation and heat source expression.Results and DiscussionsUnder a pump energy of 46 mJ at 1 kHz, the temperature along the central axis of bulk Nd∶YAG decreases from 185 to 26 ℃ along the laser propagation direction. The temperature is reduced to 106, 51, and 29 ℃ at the positions of 2, 4, and 6 mm in bulk Nd∶YAG, respectively. This indicates that the temperature close to the pump light is the highest in bulk Nd∶YAG. By adjusting the absorption coefficient to 1.5, 2.1, 3.3, and 9.7 cm-1 for each 1-mm-thick segment in gradient Nd∶YAG, a nearly constant temperature distribution around 86.5 ℃ is obtained. The maximum temperature is 88.5 ℃, and the difference between the maximum and minimum values is 7.5 ℃. Additionally, by properly designing the sample thickness and absorption coefficient of the gradient Nd∶YAG, the total thickness can be shortened to 4 mm, which is beneficial for ultrashort pulse generation in a microcavity.ConclusionsA numerical simulation method based on established heat source equations is proposed for evaluating the temperature distribution in bulk Nd∶YAG and gradient Nd∶YAG. The temperature distribution in gradient Nd∶YAG is nearly constant at around 86.5 ℃ under a pump energy of 46 mJ at a repetition rate of 1 kHz.
This confirms that the design of monolithic gain media such as gradient Nd∶YAG can help in understanding the temperature distribution along the central axis of Nd∶YAG in the laser propagation direction.
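The effect of the graded doping can be seen from a single-pass Beer-Lambert estimate of how much pump power each 1 mm segment absorbs, using the absorption coefficients quoted above. The sketch below compares this with the uniformly doped rod (5.78 cm-1), under the simplifying assumption that the local heat load is proportional to the locally absorbed pump power; heat conduction and any multi-pass pumping are ignored here, so this is only an illustration of the trend, not the paper's finite-element result.

```python
import numpy as np

def absorbed_per_segment(alphas_cm, seg_len_cm):
    """Fraction of the incident pump absorbed in each segment (single-pass
    Beer-Lambert), assuming the local heat load scales with this fraction."""
    p, out = 1.0, []
    for a in alphas_cm:
        absorbed = p * (1.0 - np.exp(-a * seg_len_cm))
        out.append(absorbed)
        p -= absorbed
    return np.array(out), p

graded, res_g = absorbed_per_segment([1.5, 2.1, 3.3, 9.7], 0.1)  # coefficients from the abstract
uniform, res_u = absorbed_per_segment([5.78] * 4, 0.1)           # first 4 mm of the uniform rod
print("graded rod :", np.round(graded, 3), "residual", round(res_g, 3))
print("uniform rod:", np.round(uniform, 3), "residual", round(res_u, 3))
```

The uniform rod deposits most of the pump power in the first segment, whereas the graded coefficients shift the deposition toward the rear segments, which is the qualitative mechanism behind the flattened axial temperature profile.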

    Apr. 10, 2024
  • Vol. 44 Issue 7 0716002 (2024)
  • Xing Han, Lun Jiang, Yanwei Li, and Junchi Li

    ObjectiveTo achieve the design of high-precision deep ultraviolet lithography projection lenses, we propose a method for opto-mechanical thermal integration analysis and optimization of deep ultraviolet lithography projection lenses. This method can analyze the influence of factors such as gravity, mechanical support structure, and temperature variations on the image quality of the optical system during the design phase. A novel support mechanism combining axial multi-point and circumferential three-point adhesive supports is designed to meet the requirements of ultra-high-precision positioning of the optical elements. Meanwhile, sensitivity analysis is conducted on individual optical elements using the sensitivity analysis method to optimize the image quality in opto-mechanical thermal integration analysis conditions, which provides insights and directions for improving the image quality of the optical system.MethodsInitially, an innovative support mechanism combining axial multi-point and circumferential three-point adhesive supports is employed to achieve ultra-high precision positioning requirements for a 212.51 mm aperture optical element. Subsequently, the thermal-mechanical coupling analysis of the novel support structure is conducted using the finite element analysis method. The obtained results are adopted in a developed Fringe Zernike polynomial fitting program to compute the surface peak valley (PV) and root mean square (RMS) of the optical element and thus validate the rationality of the opto-mechanical structure. Furthermore, the SigFit software serves as the opto-mechanical interface software, enabling the analysis of individual optical element sensitivity and the influence of overall optical element surface deformations on the wavefront aberration RMS value and calibration of F-tan θ distortion within the opto-mechanical thermal integration analysis framework. Finally, localized optimization is performed on elements with high sensitivity to reduce their sensitivity and ultimately optimize the image quality of the entire optical system.ConclusionsIn thermal-mechanical coupling conditions (reference temperature of 22.5 ℃, ±2.5 ℃, gravitational force), the maximum surface profile RMS value of the optical elements is verified to be ≤9.86 nm, which satisfies the stringent ultra-high precision positioning requirements. In opto-mechanical thermal integration analysis conditions (reference temperature of 22.5 ℃, ±2 ℃ limit operating temperature, gravitational force), the optimized wavefront aberration RMS value of the optical system is determined to be 10.50 nm, with a corresponding F-tan θ distortion calibration of 6.00 nm. Compared to pre-optimization results, the wavefront aberration RMS demonstrates a remarkable improvement of 46.98%, while the corresponding F-tan θ distortion shows an impressive enhancement of 77.69%, successfully meeting the design specifications.
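The post-processing step described above (fitting finite-element node displacements with Fringe Zernike polynomials and reporting the surface PV and RMS) can be prototyped in a few lines. The sketch below is a minimal illustration that assumes the node data are (x, y, sag-error) triplets on a unit-normalized aperture and uses only the first six Fringe Zernike terms; the actual fitting program, term count, and term-removal convention of the paper are not specified in the abstract.

```python
import numpy as np

def fringe_zernike_basis(x, y):
    """First few Fringe Zernike terms on a unit-radius aperture
    (piston, tilts, defocus, primary astigmatisms) -- illustration only."""
    r2 = x**2 + y**2
    return np.column_stack([
        np.ones_like(x),   # Z1 piston
        x,                 # Z2 tilt x
        y,                 # Z3 tilt y
        2*r2 - 1,          # Z4 defocus
        x**2 - y**2,       # Z5 astigmatism 0/90
        2*x*y,             # Z6 astigmatism 45
    ])

def fit_surface(x, y, dz):
    """Least-squares Zernike fit of FEA surface displacements dz;
    returns coefficients plus PV and RMS of the fitted surface error."""
    A = fringe_zernike_basis(x, y)
    coef, *_ = np.linalg.lstsq(A, dz, rcond=None)
    fit = A @ coef
    pv = fit.max() - fit.min()
    rms = np.sqrt(np.mean((fit - fit.mean())**2))
    return coef, pv, rms

# Synthetic example: random nodes with a 10 nm defocus-like deformation.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 2000))
mask = x**2 + y**2 <= 1.0
x, y = x[mask], y[mask]
dz = 10e-9 * (2*(x**2 + y**2) - 1)
coef, pv, rms = fit_surface(x, y, dz)
print(f"PV = {pv*1e9:.2f} nm, RMS = {rms*1e9:.2f} nm")
```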

    Apr. 10, 2024
  • Vol. 44 Issue 7 0722001 (2024)
  • Yufeng Tang, Shan Mao, Yichen Song, Tao Lai, Peiqi Yuan, Xiaowei Ding, and Jianlin Zhao

    ObjectiveCompared with the traditional visible light imaging technology, infrared spectral thermal imaging technology utilizes the thermal radiation emitted by objects to obtain images of target objects, with unique advantages in target detection and tracking. Particularly in the long-wave infrared (LWIR) bands, it demonstrates superior transmittance, increased propagation distance, and enhanced detection performance. Consequently, infrared cameras have significant applications in high-end commerce, monitoring, and other fields. However, conventional thermal imaging cameras are limited by a fixed focal length, which enables observation only within a specific field of view or area and hampers search and observation capabilities. To address this limitation, the infrared zoom thermal imaging camera is developed. By continuously adjusting the field of view and magnification relationship via a continuous zoom optical system, seamless range adjustment is achieved, with stability and image clarity maintained. However, the majority of existing LWIR zoom systems incorporate diffractive surfaces, which results in complex design requirements, elevated processing and assembly demands, and increased system costs. Furthermore, the design of certain LWIR zoom systems overlooks the influence of ambient temperature on image quality, subsequently compromising practicality. Thus, it is imperative to devise a low-cost, compact, and uncooled long-wave infrared continuous zoom optical system that preserves excellent image quality across a wide range of temperature variations and exhibits strong practicality. We aim to make the design outcomes contribute to advancements in military weapon targeting, handheld thermal imaging cameras, unmanned vehicles, and related fields.MethodsTo meet the requirements of the specific application environment, we have determined the appropriate initial structure for the design. The mechanical positive group compensation method is chosen as the compensation technique for the system. Additionally, the introduction of chalcogenide glass helps control chromatic aberration and minimize thermal defocus within the system. Meanwhile, the temperature compensation group employs the smallest aperture lens in the system to address temperature variations and maintain image quality. We incorporate the electromechanical active athermalization method, allowing the temperature compensation lens group to be adjusted and ensuring excellent imaging quality across a wide temperature range. Additionally, we utilize Zemax OpticStudio software to optimize the design to help control the system size and improve overall image quality. By adopting this iterative process, we design an athermalized LWIR continuous zoom optical system. The designed system takes into account the practicality of implementation, cost-effectiveness, and compactness while delivering excellent image quality and addressing thermal variations. This design has significant potential for applications in handheld thermal imaging cameras, unmanned vehicles, and other related areas.Results and DiscussionsAfter implementing the Zemax OpticStudio software for optimization, a continuous zoom optical system for LWIR consisting of seven lenses is designed. The materials chosen for the lenses are ZnSe and ZnS for the first and second lenses, IRG24 for the fourth lens, and Ge for the third and fifth to seventh lenses (Table 2).
In this optical system, five even-order aspherical surfaces are employed, and their feasibility for machining is analyzed (Table 3 and Fig. 6), with the remaining surfaces being standard spherical surfaces. The evaluation of the system's imaging quality produces the following results. The modulation transfer function (MTF) exceeds 0.32 at all focal lengths, which is close to the diffraction limit (Fig. 3). The distortion is found to be less than 1.8% at the short focus and 0.4% at the intermediate and long focuses (Fig. 4). Furthermore, the enclosed energy within a 25 μm×25 μm pixel exceeds 82% at the short focus, 70% at the intermediate focus, and 77% at the long focus (Fig. 5). The out-of-focus amount of the image plane of the optical system at different temperatures is within the depth of focus of the system (Table 4). Additionally, tolerance analysis demonstrates that the system is easily machinable and has a high degree of realizability (Tables 5 and 6). Meanwhile, the cam curve of the lens displays a smooth trend without any inflection point (Fig. 7).ConclusionsFor the LWIR 320 pixel×320 pixel infrared detector, a continuously zoomable, athermalized, uncooled LWIR optical system is designed by mechanical positive group compensation and electromechanical active compensation. The system achieves MTF values close to the diffraction limit at all focal lengths, indicating excellent image sharpness. It has a compact structure, minimal aberrations, long working distance, and high overall imaging quality. The design strategy focuses on cost reduction by avoiding diffractive surfaces and using only spherical and even-order aspherical surfaces while maintaining system performance. This approach helps minimize the system's size and weight, simplifies its complexity, and ensures smooth motion curves for both the zoom and compensation groups. The cam mechanism chosen for this design is relatively straightforward to process. Given these features and advantages, the system holds application significance in various fields such as searching, tracking, and detection, and can be effectively utilized in scenarios where high-quality infrared imaging is crucial.
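For reference, even-order aspherical surfaces such as those mentioned above are conventionally described by the standard even-asphere sag equation (a conic base term plus even-power polynomial terms). The sketch below evaluates that equation; the radius, conic constant, and polynomial coefficients are made-up illustrative numbers, not the surfaces of this design.

```python
import numpy as np

def even_asphere_sag(r, c, k, coeffs):
    """Standard even-order asphere sag:
    z(r) = c r^2 / (1 + sqrt(1 - (1+k) c^2 r^2)) + a4 r^4 + a6 r^6 + ...
    c: curvature (1/radius), k: conic constant, coeffs: [a4, a6, ...]."""
    base = c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    poly = sum(a * r**(2 * (i + 2)) for i, a in enumerate(coeffs))
    return base + poly

# Illustrative (not from the paper): R = 60 mm, k = -1, small 4th/6th-order
# terms, evaluated across a 20 mm semi-aperture (units of mm).
r = np.linspace(0.0, 20.0, 5)
z = even_asphere_sag(r, c=1/60.0, k=-1.0, coeffs=[1e-6, -2e-10])
print(np.round(z, 4))
```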

    Apr. 10, 2024
  • Vol. 44 Issue 7 0722002 (2024)
  • Linsen Duan, Hongbo Xie, Jun Ma, and Lei Yang

    ObjectiveThe Risley prism scanning system is a useful supplement to traditional rotating frame and mirror scanning systems. It features a compact structure, low optical loss, excellent dynamic performance, and a large scanning field of view, and has broad application prospects in lidars, laser communication, and laser guidance. In the practical applications of this system, it is important to select the scanning trajectory reasonably, which directly affects the scanning efficiency of the system and the acquisition probability of the target. When the parameters and relative positions of the Risley prisms are determined, the rotation velocity ratio of the Risley prisms can be varied and controlled to obtain scanning trajectories of different shapes. We aim to study the relationship between the velocity ratio and the number of scanning points and petals, then summarize the internal rules relating the velocity ratio and scanning trajectory, and evaluate the scanning time and coverage rate of the scanning trajectory under different velocity ratios. Therefore, our study has guiding significance for selecting the scanning trajectory that meets the scanning efficiency requirements.MethodsFirstly, the forward problem of the Risley prism is solved by the non-axial ray tracing algorithm, and the scanning trajectories under different velocity ratios can be obtained. Secondly, the number of scanning points is calculated according to the rotation velocity of the Risley prism and the sampling interval, and the number of scanning petals is calculated according to the number of minimum points of the distance curve between the scanning points and the coordinate origin. Then, the velocity ratio is classified according to its absolute value and fractional part, and the formula for calculating the number of scanning petals from the velocity ratio is established. The scanning trajectory rules of the 2-element Risley prism are analyzed, and the scanning time and coverage rate under different velocity ratios are evaluated. Finally, the scanning trajectory of the 3-element Risley prism is regarded as the superposition and cancellation of the scanning trajectories of the 2-element Risley prism, and the scanning time and coverage rate can be evaluated according to the scanning trajectory rules of the 2-element Risley prism. Additionally, the condition for the 3-element Risley prism to obtain a regularly symmetric scanning trajectory without a large scanning blind zone is proposed by analyzing the velocity ratio.Results and DiscussionsThe scanning trajectory of the 2-element Risley prism has the following rules (Table 1 and Fig. 4). When M is positive, the scanning trajectory is inner petal, and the trajectory is outer petal under negative M. When M is an integer, the scanning time under different velocity ratios is the same, and when M is a non-integer, the scanning time under each type of velocity ratio is the same if the number of decimal places is the same. When the numbers of decimal places differ, a larger number of decimal places leads to a longer scanning time. Therefore, the number of decimal places should not be too large. For each type of velocity ratio, when M is of the same sign, a larger |M| yields a larger coverage rate. The scanning trajectory of the 3-element Risley prism has the following rules (Table 4 and Fig. 10). Only when the numbers of scanning petals of the 2-element Risley prisms with velocity ratios M1 and M2 are in a multiple relationship (1-2 times), and the numbers of scanning points are also in a multiple relationship (1-2 times), is the scanning trajectory of the 3-element Risley prism regularly symmetric without a large scanning blind zone. When M1 and M2 are both positive, the scanning trajectory is inner petal. When M1 and M2 are both negative or of different signs, the scanning trajectory is outer petal. Additionally, the scanning time and coverage rate of the 3-element Risley prism can be evaluated according to the scanning trajectory rules of the 2-element Risley prism.ConclusionsAs the scanning trajectory of the Risley prism determines the scanning efficiency of the system and the acquisition probability of the target, it is important to study the method of selecting the scanning trajectory by analyzing the velocity ratio. Based on the non-axial ray tracing algorithm, the forward problem of the Risley prism scanning system is solved. Then the petal-shaped scanning trajectories under different velocity ratios are obtained, and the numbers of scanning points and scanning petals are calculated, which are then adopted to summarize the rules between the scanning trajectory and velocity ratio. The scanning time and coverage rate of the scanning trajectory under different velocity ratios are evaluated. Meanwhile, the condition for the 3-element Risley prism to obtain a regularly symmetric scanning trajectory without a large scanning blind zone is proposed. The obtained rules and conclusions can be employed to reasonably determine the velocity ratio in the practical applications of the Risley prism scanning system to select the scanning trajectory that meets the scanning efficiency requirements. However, the scanning trajectory of the Risley prism is sensitive to the velocity ratio, and there will be deviations between the actual and set velocity ratios in the rotation control. Therefore, the influence of such deviations on the scanning trajectory can be further explored.
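The petal-shaped trajectories and the petal-counting rule described above can be reproduced qualitatively with a first-order (thin-prism) model in which each prism deflects the beam by a fixed angle and the second prism rotates at M times the speed of the first. This simplified model, sketched below, is a stand-in for the non-axial ray tracing actually used in the paper; the deviation angles and velocity ratios are arbitrary illustration values.

```python
import numpy as np

def risley_trajectory(M, delta1=1.0, delta2=1.0, n_rev=1, n_pts=2000):
    """First-order two-prism Risley scan pattern for velocity ratio M.
    Each prism contributes a deviation vector of magnitude delta rotating
    with that prism; the pointing is their vector sum (paraxial approximation)."""
    t = np.linspace(0.0, 2*np.pi*n_rev, n_pts)       # rotation angle of prism 1
    x = delta1*np.cos(t) + delta2*np.cos(M*t)
    y = delta1*np.sin(t) + delta2*np.sin(M*t)
    return x, y

def count_petals(x, y):
    """Count petals as the number of local minima of the radius-vs-time curve,
    matching the criterion described in the abstract."""
    r = np.hypot(x, y)
    return int(np.sum((r[1:-1] < r[:-2]) & (r[1:-1] < r[2:])))

for M in (-3, -2, 2, 3):
    x, y = risley_trajectory(M)
    print(f"M = {M:+d}: {count_petals(x, y)} petals")
```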

    Apr. 10, 2024
  • Vol. 44 Issue 7 0722003 (2024)
  • Shasha Liao, and Junxian Wu

    ObjectiveFlat-top filters have been widely employed as channel selectors in wavelength division multiplexing systems due to their unique flat-top response characteristics, which can reduce the crosstalk among wavelengths and improve the rapidity and accuracy of channel optical detection. A large number of integrated schemes have been proposed and demonstrated in recent years, and most of them are based on silicon-on-insulator (SOI) platforms due to their capability for integration with electronics. However, these schemes have some disadvantages, with the schemes based on photonic crystals, waveguide gratings, and cascaded microring resonators having a small fabrication tolerance. Meanwhile, schemes based on multistage cascaded Mach-Zehnder interferometers (MZIs) have large footprints. Schemes based on microring resonator (MRR)-assisted MZIs are proposed to achieve flat-top passbands and small footprints, but in most previous schemes, an external phase shift of π or π/2 should be applied on the MRR or the long arm of the MZI, which is difficult to achieve in practical fabrication due to variations of the effective refractive index and fabrication errors. Additionally, some performance indexes such as the shape factor and ripple factor are not analyzed in these schemes. Therefore, we theoretically analyze and experimentally verify a flat-top filter with a high shape factor and low-complexity fabrication processing. Our scheme is based on the SOI platform and consists of a racetrack MRR (RMRR) and an asymmetric MZI. In our scheme, no external phase shift is needed. In addition, we analyze all key indicators of the filtering performance of a flat-top filter, especially the indicators that evaluate the filter shape, including the shape factor and the ripple factor. Our scheme features a high shape factor, low-complexity fabrication processing, small size, light weight, and low power consumption. It can not only be widely adopted in high-speed optical network communication but also be designed as a part of the wavelength multiplexer by multi-stage cascading.MethodsOur flat-top filter consists of an asymmetric MZI coupler and an RMRR, with the MZI consisting of a pair of 2×2 multimode interference (MMI) couplers. The input signal is divided into two light beams by the first MMI, which transmit along the upper and lower arms of the MZI. The light beam in the upper arm is coupled into the RMRR to form an all-pass configuration, and then the output light beam interferes with the light beam in the lower arm at the second MMI. A rectangular spectrum is ultimately generated. Due to the difficulty in realizing a phase shift of π or π/2 in practical fabrication, we omit it and optimize the performance of our filter by adjusting other structural parameters such as the gap between the RMRR and the short arm of the MZI or the length of the coupling waveguide. A micro-heater is fabricated on the RMRR to investigate the effect of the phase shift introduced by the RMRR on the performance of our flat-top filter.Results and DiscussionsThe 3 dB bandwidth of our filter is 1.94 nm. The ripple factor and the sidelobe suppression ratio are about 2.40 dB and 7.45 dB respectively. The insertion loss and free spectral range (FSR) are about 1.82 dB and 3.94 nm respectively (Fig. 6). It is not meaningful to measure the 10 dB and 15 dB bandwidths when the sidelobe suppression ratio is less than 10 dB.
However, the shape factor is a crucial performance indicator of the flat-top filter, and the sidelobe suppression ratio can be significantly improved by controlling the micro-heater fabricated on the coupling area of RMRR. Therefore, we still calculate the shape factor by the same method in the simulation. The widths of the passband are 2.02 nm and 2.06 nm when the passband power declines by 10 dB and 15 dB respectively. As a result, the shape factor is 0.96 (1.94/2.02) and 0.94 (1.94/2.06). Additionally, we also measure the output spectra while tuning the voltage applied on the RMRR. The central wavelength of our filter gradually experiences redshift, and the filtering performance periodically varies in the trend of degradation, improvement, and degradation, which is mainly because of the periodical variation of the phase shift introduced by RMRR with the increasing temperature. It indicates that tuning the phase shift introduced by RMRR can effectively control the output spectra of the filter.ConclusionsWe propose and demonstrate a flat-top filter with a high shape factor and low-complexity fabrication processing, and analyze all key indicators of the filtering performance of the flat-top filter, especially the indicators that evaluate the filter shape including the shape factor and the ripple factor. A filter with 3 dB bandwidths of 1.94 nm is realized. The corresponding shape factor 1 and shape factor 2 are 0.96 and 0.94 respectively. The ripple factor, the sidelobe suppression ratio, and the insertion loss are about 2.40 dB, 7.45 dB, and 1.82 dB respectively. Our scheme does not have energy loss throughout the transmission process as the spectra of the two output ports are complementary. Furthermore, we investigate the influence of the phase shift introduced by RMRR on the performance of our flat-top filter and verify that the performance of our filter will vary periodically by adjusting the voltage applied to the RMRR. Our scheme is characterized by a high shape factor, low-complexity fabrication processing, small size, light weight, and low power consumption. Additionally, it can not only be widely utilized in high-speed optical network communication but also be designed as a part of the wavelength multiplexer by multi-stage cascading.
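The shape factors quoted above follow a simple definition: the 3 dB bandwidth divided by the 10 dB or 15 dB bandwidth, so that values closer to 1 indicate steeper, more rectangular passband edges. A minimal sketch of that computation on a synthetic near-rectangular passband is given below; it assumes a single passband inside the wavelength window, and the spectrum is illustrative, not the measured one.

```python
import numpy as np

def bandwidth_nm(wl_nm, trans_db, drop_db):
    """Width of the (single) passband where transmission stays within drop_db of its peak."""
    mask = trans_db >= trans_db.max() - drop_db
    return wl_nm[mask].max() - wl_nm[mask].min()

def shape_factors(wl_nm, trans_db):
    """Shape factors as defined in the paper: 3 dB bandwidth divided by the
    10 dB and 15 dB bandwidths."""
    bw3 = bandwidth_nm(wl_nm, trans_db, 3.0)
    return (bw3 / bandwidth_nm(wl_nm, trans_db, 10.0),
            bw3 / bandwidth_nm(wl_nm, trans_db, 15.0))

# Synthetic near-rectangular passband (super-Gaussian), for illustration only.
wl = np.linspace(1549.0, 1551.0, 4001)
trans_db = 10*np.log10(np.exp(-((wl - 1550.0) / 1.0)**8) + 1e-6)
sf1, sf2 = shape_factors(wl, trans_db)
print(f"shape factor 1 = {sf1:.3f}, shape factor 2 = {sf2:.3f}")
```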

    Apr. 10, 2024
  • Vol. 44 Issue 7 0723001 (2024)
  • Le Tang, Liangping Xia, Man Zhang, Weiguo Zhang, Hao Sun, Chunyan Wang, Suihu Dang, and Chunlei Du

    ObjectiveIn the field of optics, the miniaturization and integration of optical systems and optical chips are inevitable trends. Micro lenses, as core devices, are widely used in optical imaging, homogenizing lighting, and optical communication. The accuracy of the surface shape determines the optical properties of micro lenses, making the detection of surface shape errors crucial. During fabrication, the nonlinear effect of photoresist often leads to the appearance of convex or concave annular errors on the micro lens surface. These annular errors significantly impact the optical performance of micro lenses, necessitating the development of a method to quickly detect them. Compared with the traditional profiler, Hartmann wavefront detection, and interferometry methods, this method ensures a simpler test light path, easier operation, and improved test efficiency.MethodsThe study focused on the impact of surface shape errors on the distribution of light fields, based on the structure model of the banded error. The position of the boundary (R1) of the light spot formed by different banded errors was calculated following the principles of geometrical optics. Additionally, a method was proposed to determine the surface shape error of the band by analyzing the ratio of light intensity inside and outside the boundary. Through simulations of far-field light spots under different error models, the relationship between the ratio of light intensity inside and outside the boundary and the error value of the band was established. To validate the findings, micro lens arrays with various banded errors were fabricated using micro-nano machining technology. A test light path was then constructed to measure the spot energy distribution under different banded errors. The measured results were basically consistent with the simulated values.Results and DiscussionsBased on the 3D model structure of the girdle error, the peak to valley (PV) value of the girdle surface error of the micro lens obtained through optical software simulation and experimental testing, is found to be consistent with the interferometer test results. This confirms the validity of the theory of the girdle error, which involves dividing the region by the boundary line (R1) and determining the girdle error using the light intensity ratio inside and outside the region.ConclusionsWe examine the relationship between the PV value of the girdle surface error of the micro lens and the far-field spot. We present the principle of quickly determining the girdle error using the far-field spot and establish a structure model for the girdle error of the micro lens. The energy distribution of the micro lens spot is simulated under different error models, and the relationship between the light intensity ratio in specific regions and the girdle error value is determined. Furthermore, micro lens models with different banded error structures are fabricated using micro-nano machining technology. A test light path consistent with the simulation is constructed, demonstrating the feasibility of analyzing the far-field spot of the micro lens to obtain the girdle surface error. This method can guide the compensation of error values in the micro lens machining process, improve machining accuracy, and facilitate the screening of finished products.
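The decision criterion described above, comparing the integrated intensity inside and outside the boundary radius R1 of the far-field spot, reduces to a simple image-processing step once the spot image and R1 are available. The sketch below is a minimal illustration assuming the far-field spot is a 2-D intensity array with a known pixel scale and that R1 has already been obtained from the geometrical-optics analysis; the synthetic spot stands in for a measured one.

```python
import numpy as np

def intensity_ratio_inside_outside(image, r1_px, center=None):
    """Ratio of integrated intensity inside vs. outside a circle of radius r1_px
    (pixels) centred on the spot; this ratio is then mapped to the annular
    (girdle) error value via a pre-computed calibration curve."""
    ny, nx = image.shape
    cy, cx = center if center is not None else ((ny - 1) / 2, (nx - 1) / 2)
    yy, xx = np.mgrid[0:ny, 0:nx]
    rr = np.hypot(yy - cy, xx - cx)
    return image[rr <= r1_px].sum() / image[rr > r1_px].sum()

# Synthetic example: a Gaussian spot plus a weak outer ring standing in for
# the energy redistributed by an annular surface error.
ny = nx = 256
yy, xx = np.mgrid[0:ny, 0:nx]
rr = np.hypot(yy - 128, xx - 128)
spot = np.exp(-(rr / 20.0)**2) + 0.05*np.exp(-((rr - 60.0) / 5.0)**2)
print(f"inside/outside ratio at R1 = 40 px: {intensity_ratio_inside_outside(spot, 40):.2f}")
```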

    Apr. 10, 2024
  • Vol. 44 Issue 7 0723002 (2024)
  • Le Chen, and Mingyang Chen

    ObjectiveThe special wavelength position of terahertz waves makes them the link between microphotonics and macroscopic electronics. However, the terahertz wave transmission in free space is easily affected by the water vapor absorption in the air. The optical fiber structure is proposed to transmit terahertz waves and realize effective transmission control. Among them, the hollow-core THz fiber based on the anti-resonant principle can limit the wave transmission in the air fiber core, which greatly reduces the influence of material absorption. The optical fiber coupler is the key device for beam splitting and transmission tailoring. Due to the existence of absorption loss in THz fiber, the insertion losses of THz fiber couplers are usually large and affect their utilization. Although the transmission loss of THz fibers with hollow-core structures is low, the design of THz couplers with broadband beam splitting is generally difficult.MethodsWe propose a hollow-core anti-resonant fiber coupler based on the three-core symmetry structure. By employing its structure symmetry, broadband beam splitting can be realized, and the transmission loss of the coupler can be reduced via the hollow-core structure. The mode and coupling characteristics of the coupler are analyzed by the finite element method, and the relationship between the coupling length and the fiber structure parameters is verified. The mode loss characteristics of the coupler are analyzed, and then the beam splitting structure with low loss and wide bandwidth is obtained.Results and DiscussionsWe design a hollow-core three-core anti-resonant fiber. At the frequency of 1 THz, the coupling length increases with the rising core distance (Fig. 4). As can be seen from the relation curve between D1 and binding loss, since the even mode field expands more to the intermediate core, the increase in D1 leads to the growing mode field, and the improvement in the binding ability of the core reduces the binding loss, while the odd mode is less affected by D1 [Fig. 5(a)]. With the increasing D1, the total loss of even mode obviously shows a downward trend, while the change of odd mode is not obvious. Compared with the mode loss of single-core fiber, the total odd-mode loss of the three-core structure is always smaller than that of the single-core structure in the shown interval. Therefore, using the three-core structure can actually obtain smaller mode losses. This is also consistent with the theory that the larger core size leads to lower mode losses [Fig. 5(c)]. The coupling length decreases slowly with the rising dr when the separation hole spacing d is small. As dr spacing increases, the coupling length decreases linearly (Fig. 6). Since the odd-mode field extends more to the intermediate core, the dr increase is easier to increase its mode field, thereby reducing the mode binding loss, while the even-mode field is relatively independent of each core, and thus the change of dr has little effect on it [Fig. 7(a)]. With the increasing dr, the absorption loss shows an obvious downward trend, and the total mode loss is mainly determined by the absorption loss [Fig. 7(b)]. With the rising spacing dr, the total loss of odd mode shows an obvious downward trend, while that of even mode mainly decreases under small dr values, and the further increase in dr value has little influence on it [Fig. 7(c)]. 
The relationship of the output optical power, under the influence of mode transmission loss, with the transmission distance and coupling length is shown in Eq. (6) and Fig. 8. Over the whole bandwidth range, the polarization-related loss is lower than 0.2 dB. The two polarization curves indicate that the coupler is not sensitive to polarization; the insertion loss is less than 3.5 dB in the range of 0.82-1.34 THz, and the bandwidth can reach 0.52 THz. This coupler is thus found to feature wide-bandwidth, low-loss transmission (Fig. 9). Transmission loss has little effect on the insertion loss of the device (Fig. 10). For the comparable two-core coupler, the insertion loss is less than 3.45 dB in the range of 0.9-1.1 THz, and the bandwidth is 0.2 THz. Within the working bandwidth, its two output ports cannot always output the same power; the output power difference between the two ports is less than 0.1 dB only near 1 THz and is large elsewhere. Compared with the three-core coupler, the working bandwidth of the two-core coupler is therefore relatively narrow (Fig. 12).ConclusionsWe design an anti-resonant air-core three-core terahertz fiber coupler with cycloolefin copolymer as the base material. The mode analysis and calculation of the new terahertz waveguide structure are carried out with COMSOL Multiphysics simulation software, and the mode field distribution among the waveguides and the mode coupling characteristics between the fiber cores are analyzed. Finite element analysis and the full-vector beam propagation method are employed to analyze the structural parameters, coupling length and loss effects, bandwidth, and other characteristics. The results show that the coupling length increases with the rising core spacing and decreases with the growing hole spacing, and the mode transmission loss has little effect on the insertion loss of the device. Due to the symmetry, the three-core structure can realize uniform 1×2 beam splitting; the working bandwidth reaches 0.52 THz, and the insertion loss is less than 3.5 dB, with the polarization-related loss less than 0.2 dB.
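The coupling lengths discussed above follow from the standard supermode picture: with even- and odd-supermode effective indices n_e and n_o from the mode solver, the coupling length is Lc = λ/[2|n_e − n_o|], and the power coupled to the neighboring core after a propagation length z is sin²(πz/2Lc). The sketch below uses made-up effective indices purely for illustration; the symmetric three-core case follows an analogous supermode analysis, so this is the generic two-core relation rather than the paper's exact expressions.

```python
import numpy as np

C0 = 299792458.0  # speed of light, m/s

def coupling_length(freq_thz, n_even, n_odd):
    """Directional-coupler coupling length from supermode effective indices."""
    wavelength = C0 / (freq_thz * 1e12)          # m
    return wavelength / (2.0 * abs(n_even - n_odd))

def coupled_power_fraction(z, lc):
    """Fraction of power transferred to the adjacent core after length z."""
    return np.sin(np.pi * z / (2.0 * lc))**2

# Illustrative supermode indices (not values from the paper) at 1 THz:
lc = coupling_length(1.0, n_even=1.0008, n_odd=0.9998)
print(f"coupling length ~ {lc*100:.1f} cm")
print(f"power coupled after one coupling length: {coupled_power_fraction(lc, lc):.2f}")
```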

    Apr. 10, 2024
  • Vol. 44 Issue 7 0723003 (2024)
  • Pengxing Guo, Zhengrong You, Weigang Hou, and Lei Guo

    ObjectiveThe optical neural network (ONN) based on the Mach-Zehnder interferometer (MZI) has widespread applications in recognition tasks due to its high speed, easy integration, scalability, and insensitivity to external environments. However, errors resulting from manufacturing defects in photonic devices accumulate as the ONN scale increases, consequently diminishing recognition accuracy. To address the decreased accuracy caused by MZI phase errors and beam splitter errors in the MZI-based ONN (MZI-ONN), we introduce a progressive training scheme to reconfigure the phase shift of the MZI feedforward ONN.MethodsDue to the cascaded arrangement of MZIs in MZI-ONN (Fig. 1), the progressive training scheme gradually determines the phase of each column within a certain number of iterations. Based on determining the phase, the phase error and beam splitter error carried by the MZI are considered. After starting the iteration again, the phase value of the undetermined phase shifter is utilized to offset the phase error and beam splitter error carried by the fixed MZI. This training process is repeated until the last column of the grid, and the phase values obtained by progressive training can counteract the inaccuracies caused by imperfect photonic devices, thereby improving the recognition accuracy of MZI-ONN. Importantly, this progressive training scheme reduces inaccuracies caused by optical components without altering the topology of MZI-ONN.Results and DiscussionsWe employ the Neuroptica Python simulation platform to construct a cascaded MZI-ONN and validate the efficacy of the proposed training scheme. The error range of the MZI phase shifter is set between 0.05 and 0.10, with a fixed beam splitter error value of 0.10. Results demonstrate that the proposed progressive training scheme based on the Iris dataset enhances the recognition accuracy of a three-layer 4×4 MZI-ONN from 32.50% to 96.65% (Fig. 5). During the application in the MNIST dataset, the accuracy of three-layer ONNs with grid scales of 4×4, 6×6, 8×8, and 16×16 is elevated by 2.00%, 22.33%, 37.00%, and 36.25%, respectively (Fig. 7), significantly improving the error-resistance performance of the ONN. To substantiate the advantages of the proposed method, we compare the proposed progressive training optimization scheme with traditional genetic algorithm (GA) training, the error correction scheme using a redundant rectangular grid (RRM), and a hardware optimization scheme. Notably, compared with the RRM-based error correction scheme and hardware optimization scheme, the proposed scheme exhibits the capability to conserve more MZI units and detectors. Furthermore, while the traditional GA training scheme enhances the recognition accuracy of the Iris dataset with four features and the MNIST dataset with eight features by 23.10% and 32.40%, respectively, the proposed scheme achieves improvements of 64.15% and 37.00% under the same scale (Table 2). In a comprehensive evaluation, this scheme enhances the recognition accuracy of the ONN without augmenting hardware costs and demonstrates superior error-resistance performance.ConclusionsWe introduce a progressive training scheme designed to alleviate recognition errors in MZI-ONN. The scheme improves the recognition accuracy of the ONN without modifying the topology grid structure and parameters, thus causing no additional hardware costs. To validate the effectiveness of this scheme, we conduct simulations by adopting the Neuroptica Python simulation platform as a proof of concept. 
The error parameters of photon devices are pre-trained, and the MZI-ONN phase is fixed based on the number of iterations. Subsequent phases are then utilized to compensate for errors introduced by the fixed phase. Simulation analyses are performed on ONNs of scales 4×4, 6×6, 8×8, and 16×16, which demonstrates that the proposed progressive scheme can enhance the recognition accuracy of MZI-ONN by up to 64.15% with an average increase of 39.93%, improving the error-resistant performance of MZI-ONN.
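The error model referred to above, in which each MZI realizes a 2×2 unitary built from two phase shifters and two beam splitters and fabrication deviations perturb the phases and splitting ratios, can be written down compactly. The sketch below shows one imperfect MZI; the parameterization is a common textbook convention and the error magnitudes follow the ranges quoted in the abstract (0.05-0.10 rad phase error, 0.10 splitter error), but it is an illustration rather than the paper's exact model.

```python
import numpy as np

def beam_splitter(split_error=0.0):
    """2x2 coupler matrix; nominal 50:50, perturbed by split_error (rad)."""
    t = np.pi/4 + split_error
    return np.array([[np.cos(t), 1j*np.sin(t)],
                     [1j*np.sin(t), np.cos(t)]])

def mzi(theta, phi, theta_err=0.0, phi_err=0.0, bs_err1=0.0, bs_err2=0.0):
    """Transfer matrix of one MZI: outer phase phi and internal phase theta,
    sandwiched between two (possibly imperfect) 50:50 splitters."""
    outer = np.diag([np.exp(1j*(phi + phi_err)), 1.0])
    inner = np.diag([np.exp(1j*(theta + theta_err)), 1.0])
    return beam_splitter(bs_err2) @ inner @ beam_splitter(bs_err1) @ outer

ideal = mzi(theta=1.2, phi=0.4)
faulty = mzi(theta=1.2, phi=0.4, theta_err=0.08, phi_err=0.05, bs_err1=0.1, bs_err2=0.1)
# Overlap-style measure of how far the fabricated MZI strays from the ideal one;
# progressive training re-tunes the not-yet-fixed phases to compensate such deviations.
overlap = np.abs(np.trace(ideal.conj().T @ faulty)) / 2
print(f"unitary overlap with ideal MZI: {overlap:.4f}")
```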

    Apr. 10, 2024
  • Vol. 44 Issue 7 0720001 (2024)
  • Hui Yu, Xinhui Ding, Dawei Li, Qiong Zhou, Lü Fengnian, and Xingqiang Lu

    ObjectiveIn the realm of beam shaping, diffractive optical elements (DOEs) can manipulate the laser intensity distribution by altering the laser phase through microstructures. Beam shaping algorithms play a significant role in the design of diffractive optical elements. Specifically, the most representative is the Gerchberg-Saxton (GS) algorithm proposed by R.W. Gerchberg and W.O. Saxton. It involves an iterative process of performing a Fourier transform between the input plane and output plane, while simultaneously imposing known constraints on both planes. To enhance the convergence of the algorithm, an input-output algorithm and the phase-mixture algorithm have been developed based on the GS algorithm. In recent years, there have been advancements in mixed-region amplitude freedom algorithms, particularly those demonstrating superior convergence in the signal region, as well as offset mixed-region amplitude-freedom algorithms. Global optimization algorithms, such as the feedback GSGA (Gerchberg-Saxton genetic algorithm) and the last place elimination GSGA, have also gained traction. These algorithms are derived from the GS algorithm. However, the phase complexity of these designs is high, presenting significant challenges to the physical processing of DOEs. Furthermore, as the number of limiting conditions increases, the computational time required also escalates, especially for the feedback GSGA and last place elimination GSGA.MethodsTo optimize time efficiency and reduce phase complexity, we find that in conventional laser applications, the use of the Hankel transform is more effective than the Fourier transform in numerical calculations when both the incident beam and target beam exhibit circular symmetry. We introduce a beam shaping algorithm, the pQDHT-GS algorithm, for a circularly symmetric beam system based on the GS algorithm. The implementation process is based on the characteristic that the Hankel transform is solely related to the radial coordinate. This is achieved by iteratively alternating between the input plane and output plane to perform the Hankel transform. We employ a quadratic surface-type phase as the initial phase, select a Gaussian beam with a full width at half maximum of 2 mm as the input light source, set the number of iterations to 500, and use sample numbers of 512×512 (where the sample number in the pQDHT-GS algorithm is 1×256). We then apply this algorithm and the traditional GS algorithm to shape the incident beam into a circular Airy (CA) beam, Bessel beam, and Laguerre-Gaussian (LG) beam respectively. We compare the root mean square error and energy utilization efficiency of these two algorithms. Subsequently, we set the LG beam as the target beam, adjust the number of iterations to 500 and 1000, and the sample numbers to 512×512 and 1024×1024, respectively, to further analyze the computational performance of both algorithms. Furthermore, we evaluate the shaping performance of the pQDHT-GS algorithm by using the CA beam as the target beam in our experimental system.Results and DiscussionsIn terms of the phase distribution of DOEs, the phase calculated by the GS algorithm exhibits rotational symmetry, and it contains some high-frequency components. In contrast, the phase of DOEs computed by the pQDHT-GS algorithm displays circular symmetry, simplifying the DOEs structure and reducing processing complexity. When examining the intensity distribution at the focal plane (output plane), both algorithms produce satisfactory intensity profiles.
However, compared to the results obtained from the GS algorithm, the shaping beam output calculated by the pQDHT-GS algorithm is smoother (Fig. 3). In identical conditions, the pQDHT-GS algorithm achieves rapid convergence within fewer iterations and saves computational time (approximately 100 times) (Table 2). Furthermore, experimental results indicate that the shaping beam intensity distribution aligns closely with the target beam intensity distribution, with consistent light intensity curve trends. The shaping beam exhibits noticeable burrs, with a root mean square error of 0.545 and an energy utilization efficiency of 78.07%. After being filtered, these burrs are substantially reduced, and the light intensity curve distribution becomes smoother, exhibiting a root mean square error of 0.491 and an energy utilization efficiency of 78.14% (Fig. 6).ConclusionsIn this study, we introduce a beam-shaping algorithm based on the pQDHT proposed for the circularly symmetric beam system. This approach achieves the circular symmetry of the DOEs structure by substituting the Fourier transform in the conventional GS algorithm with the Hankel transform. We select the CA beam, the Bessel beam, and the LG beam as target beams. A numerical simulation method is employed to juxtapose and assess the performance of both the GS algorithm and the pQDHT-GS algorithm in terms of shaping outcomes. Our findings indicate that, in comparison to the GS algorithm, the pQDHT-GS algorithm converges rapidly with fewer iterations. Moreover, it refines the intensity of the output-shaped beam, ensuring that the DOEs phase exhibit a circular symmetry distribution and thereby simplifying processing. Given that the pQDHT-GS algorithm requires significantly fewer sampling points than the GS algorithm, it significantly reduces matrix operation time, leading to nearly two orders of magnitude reduced computational time. Conclusive experiments on the CA beam validate the efficacy of this method. In conclusion, the pQDHT-GS algorithm exhibits rapid and precise capabilities in circular symmetric beam shaping, holding significant implications for the design and processing of DOEs. Its potential applications extend to various areas of beam shaping, including the choice of initial phase values through the integration of global search algorithms, deep learning, neural networks, and other intelligent algorithms. Furthermore, its utility is evident in designing phase plates within large aperture lasers. Future research will further explore this area.
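The iterative structure of the pQDHT-GS algorithm is the familiar GS loop with the 2-D Fourier transform replaced by a radial (Hankel) transform pair. Below is a minimal sketch of that loop; `hankel_transform` and `inverse_hankel_transform` are placeholders for the quasi-discrete Hankel transform pair used in the paper (not implemented here), and the amplitude constraints are applied exactly as in the standard GS algorithm. This is a structural sketch, not the authors' implementation.

```python
import numpy as np

def gs_radial(input_amp, target_amp, hankel_transform, inverse_hankel_transform,
              n_iter=500, initial_phase=None):
    """GS-type phase retrieval on a 1-D radial grid (pQDHT-GS-style sketch).

    input_amp, target_amp : radial amplitude profiles of the incident and
                            target beams sampled on matched radial grids.
    hankel_transform / inverse_hankel_transform : placeholder callables for the
                            quasi-discrete Hankel transform pair (assumed).
    Returns the DOE phase on the input plane.
    """
    phase = np.zeros_like(input_amp) if initial_phase is None else initial_phase
    for _ in range(n_iter):
        field_in = input_amp * np.exp(1j * phase)
        field_out = hankel_transform(field_in)
        # impose the target amplitude on the output plane, keep the phase
        field_out = target_amp * np.exp(1j * np.angle(field_out))
        field_back = inverse_hankel_transform(field_out)
        # impose the known input amplitude, keep the retrieved phase
        phase = np.angle(field_back)
    return phase
```

Because only a 1×N radial vector is transformed at each step instead of an N×N array, the per-iteration cost drops sharply, which is the source of the roughly two-orders-of-magnitude runtime saving reported above.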

    Apr. 10, 2024
  • Vol. 44 Issue 7 0726001 (2024)
  • Xue Zou, Junhao Fan, Binbin Luo, Fumin Zhou, Decao Wu, Zufan Zhang, and Mingfu Zhao

    ObjectiveCardiovascular disease (CVD) is the most important cause of human death, of which hypertension is the most common chronic disease in people's life and is one of the most important risk factors for CVD. With the socio-economic development and accelerating population aging and urbanization, hypertension is on the rise. According to research, the presymptoms of hypertension are not obvious, and a considerable portion of patients do not have any uncomfortable clinical symptom such as dizziness, headache, and shortness of breath. When blood pressure is elevated for a long time and exceeds the normal range, this may result in serious complications and even threaten life safety. Therefore, accurate blood pressure monitoring is crucial for early diagnosis and intervention treatment. However, compared with the single point in time blood pressure detection of traditional cuff-type electronic blood pressure monitors, continuous dynamic monitoring can more truly reflect the real-time changes in blood pressure and dynamic trends, providing more comprehensive and accurate data. The human pulse signal contains a large amount of physiological and pathological information related to the cardiovascular system, and continuous blood pressure monitoring can be realized by accurately extracting the characteristic parameters and building a blood pressure prediction model. Currently, the main method of pulse signal detection is the PPG method, whose major drawbacks are high power consumption, sensitivity to ambient light and pressure perturbation, and susceptibility of electronic components to electromagnetic wave interference. As a result, it is impossible to measure blood pressure simultaneously in special environments such as MRI and CT. Thus, we propose a fiber-optic blood pressure sensor with continuous accurate measurement and without spatial alignment based on the microstructural setup of a reflective microfiber coupler, which is achieved by combining dual-channel pulse wave acquisition and machine-learning model prediction. This electromagnetic interference-resistant, wearable, and continuous blood pressure monitoring system will play an important role in human CVD prevention in the future.MethodsFirst, two single-mode fibers twisted around each other are drawn into a microfiber coupler using the flame fusion taper method, and the reflective coupler is formed by cutting flat at the section of the waist region area, which has a diameter of 5 μm and a length of 10 mm. The device is encapsulated between an epoxy resin substrate and two layers of PDMS circular films, where the substrate is a through-hole structure, the upper PDMS layer is a circular film with a diameter of 15 mm and a thickness of 100 μm, and the lower PDMS is a raised spherical structure with a diameter of 10 mm and a height of 1.5 mm. Particularly, this structure can improve the detection sensitivity and reduce the sensitivity of the sensing area to the spatial location. Then, a dual-channel pulse wave detection system is set up to obtain the brachial artery transit time (BPTT), the radial artery transit time (RPTT), and the transit time difference between the radial artery and brachial artery (DBRPTT). 
Finally, the support vector regression algorithm is utilized to build a blood pressure prediction model to realize continuous and accurate blood pressure detection.Results and DiscussionsThe mechanical simulation results of the packaging structure show that it can sense micro-pressure from multiple directions, reducing its dependence on the detection position (Figs. 2-3). In the static pressure experiment, the detection sensitivity is -0.682 kPa-1 in the range of 500-1000 Pa. The sensor responds immediately at the moment of loading and unloading pressure, with response times of 35 ms and 46 ms, respectively. Additionally, the durability and repeatability of the sensor are also tested. After 2500 cycles of periodic pressure with a frequency of 5 Hz and an amplitude of 1 N, the sensor still shows good response and excellent repeatability. After about 5000 cycles, the response amplitude drops by about 5% from the beginning. Since the pulse measurement time is short (about five seconds), this has little impact on subsequent blood pressure prediction. When the sensor is placed at different positions in the radial artery area, it can effectively detect high-fidelity pulse signals, indicating that there are no strict alignment requirements between the sensor and the artery (Fig. 5). By employing the dual-channel sensing system, the pulse waveforms at the radial artery and brachial artery are collected simultaneously. Three PTT characteristic parameters (BPTT, RPTT, and DBRPTT) (Fig. 6) are extracted from these sample data to build a blood pressure prediction model. The correlation diagram and Bland-Altman diagram reveal that both the true and the predicted values are negatively correlated with the K value. The correlation coefficient R values of systolic blood pressure (SBP) and diastolic blood pressure (DBP) are 0.96 and 0.95 respectively, which indicates that there is a good positive correlation between the reference and predicted values. The mean difference value and SD value of SBP are 0.08 mmHg and 1.13 mmHg respectively, and the mean difference value and SD value of DBP are -0.35 mmHg and 1.25 mmHg respectively (Fig. 11). These indicators are both lower than the AAMI standard [(5±8) mmHg]. The performance comparison between the sensor and other blood pressure sensors shows that the sensor features an extremely compact structure, high sensitivity, sound stability, long service life, and resistance to electromagnetic interference. Finally, a volunteer is randomly selected to collect 14 sets of data from 8:00 to 21:00 in one day to verify the feasibility of the sensor. The results demonstrate that the blood pressure trend follows the normal“two peaks and one trough”pattern. Another volunteer receives continuous monitoring during a mixed exercise of squatting and jogging. As the exercise time increases, both SBP and DBP rise but remain stable after about ten minutes (Fig. 12). This shows that the proposed blood pressure monitoring system can continuously and effectively monitor blood pressure.ConclusionsWe develop a reflective optical microfiber coupler sensor chip (R-OMCSC) for cardiovascular health assessment through accurate and continuous blood pressure monitoring. The R-OMCSC exhibits high sensitivity and detects pulse waves without spatial alignment, which allows for perceiving weak physiological signals. Embedding the sensor into a sports wristband, we construct a dual-channel pulse wave detection system, obtain the BPTT, RPTT, and DBRPTT values, and build an SVR prediction model.
Experimental results show that the system can achieve continuous blood pressure monitoring. In the future, we will keep improving the integration of the photoelectric signal processing system with the proposed dual-channel R-OMCSC pulse wave sensor, and a large amount of data will be collected for more accurate analysis. The proposed non-invasive BP detection system features high accuracy and continuous monitoring and will have the opportunity to be employed for clinical applications and thus help patients with CVD prevention.
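The prediction step described above, an SVR model mapping the three transit-time features to blood pressure, can be prototyped directly with scikit-learn. The sketch below assumes each sample is a (BPTT, RPTT, DBRPTT) triplet with cuff-measured reference pressures as labels; the feature values, labels, and hyperparameters are placeholders for illustration, not the paper's data or settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder training data: rows of (BPTT, RPTT, DBRPTT) in ms and
# reference systolic pressures in mmHg (synthetic, for illustration only).
X = np.array([[182.0, 175.0, 7.0],
              [190.0, 181.0, 9.0],
              [170.0, 164.0, 6.0],
              [201.0, 190.0, 11.0]])
y_sbp = np.array([118.0, 112.0, 126.0, 105.0])

# One SVR per target (SBP shown); DBP would use a second, identical pipeline.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X, y_sbp)

new_sample = np.array([[185.0, 177.0, 8.0]])
print(f"predicted SBP: {model.predict(new_sample)[0]:.1f} mmHg")
```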

    Apr. 10, 2024
  • Vol. 44 Issue 7 0728001 (2024)
  • Kangning Ji, Xinyu Hu, Linsen Xiong, Haibo Wang, and Zhimei Qi

    ObjectiveMini-unmanned aerial vehicle (Mini-UAV) is widely employed in scientific research and entertainment due to its small size, low cost, easy operation, and high flexibility. However, the“abuse”of mini-UAVs has caused great hidden dangers to public security and personal privacy. Therefore, radio, radar, image recognition, and other detection methods have been proposed to meet the urgent need for mini-UAV detection and surveillance. However, since mini-UAVs have low altitudes, low speeds, and a small reflective cross-sectional area, it is difficult for radars to detect them quickly under the interference of the complex background. Additionally, radio detection is prone to false alarms due to severe electromagnetic interference at low altitudes. Although CNN-based image recognition has a high detection accuracy, the ability to accurately distinguish between birds and mini-UAVs is affected by image resolution, which needs to be improved. Meanwhile, the above methods have complex equipment, high detection costs, and poor real-time performance. In contrast, the mini-UAV can be quickly detected in noisy low-altitude environments by acoustic detection, which features sound real-time performance, simple equipment, and low cost. However, the current acoustic sensors adopted for acoustic detection have low sensitivity and do not recognize the sound source direction. Therefore, we fabricate a fiber-optic acoustic sensor with a resonant MEMS wheel-shaped diaphragm to detect acoustic signals with high sensitivity and high signal-to-noise ratio (SNR) near the resonance peak. The sensor has an“8”shaped directional response, which allows for the identification of the sound source direction. Finally, a new method is provided for mini-UAV detection.MethodsTo improve the sensitivity of optical fiber acoustic sensors and reduce the damping effect caused by the enclosed back cavity of the circular diaphragm, we adopt a wheel-shaped diaphragm with an open acoustic back cavity as the acoustic sensing diaphragm. The wheel-shaped diaphragm consists of a central diaphragm connected to four symmetrically distributed connecting arms on an outer base ring. Firstly, the geometric structure of the wheel-shaped diaphragm is modeled by acoustic vibration theory. According to the characteristics of the mini-UAV's radiated noise spectrum, the diaphragm eigenfrequency is set near the mini-UAV noise fingerprint frequency, and the geometric parameters of the wheel-shaped diaphragm at this frequency are calculated. The acoustic characteristics are simulated and verified via finite element analysis software. Then, the wheel-shaped diaphragm is fabricated using MEMS processing technology. Meanwhile, to optimize the sensor performance, we sputter a metal on the diaphragm surface to improve the optical reflectivity of the diaphragm. Finally, the fiber optic acoustic sensor of the silicon-based MEMS wheel diaphragm is assembled by mechanical micro-assembly. In addition, the cavity length of its static Fabry-Pérot (FP) interference cavity is adjusted to make the sensor work at the quadrature point, which ensures high sensitivity without signal distortion.Results and DiscussionsA fiber optic acoustic sensor is fabricated using the designed silicon-based MEMS wheel-shaped diaphragm (Fig. 5). The FP static cavity length is measured using interferometric spectroscopy. The experiment shows that when the laser wavelength is 1550 nm, the FP static cavity length is 144.457 μm, which meets the quadrature point (Fig. 6). 
An acoustic testing system is built to characterize the performance of the wheel-shaped diaphragm fiber-optic acoustic sensor (Fig. 7). The sensor has a resonance peak at 7.279 kHz and a relatively flat response in the frequency range of 2-6 kHz below the resonant frequency (Fig. 8). At normal incidence of 7 kHz sound, the sound pressure sensitivity is 1.8 V/Pa, the SNR is 71 dB, and the minimum detectable sound pressure is 99 μPa/Hz0.5 (Fig. 9). In outdoor mini-UAV detection experiments, mini-UAV noise can be accurately detected within a range of 65 m, with a detection capability about three times that of commercial ECM (Fig. 13).ConclusionsTo detect the radiation noise of mini-UAVs, we design and fabricate a fiber-optic acoustic sensor with a silicon-based MEMS wheel-shaped diaphragm. The wheel-shaped diaphragm consists of a central vibrating membrane and four symmetrically distributed joint arms, and it has high sensitivity near the resonance frequency and the ability to detect mini-UAV at a distance. The sensor has a resonance peak at 7.279 kHz. At the normal incidence of 7 kHz sound, the sound pressure sensitivity is 1.8 V/Pa, the SNR is 71 dB, and the minimum detectable sound pressure is 99 μPa/Hz0.5. Additionally, it has an“8”shaped directional pattern, which indicates its ability to identify the sound source direction. It can accurately identify the noise of mini-UAVs within a range of 65 m, and the detection ability is about three times that of commercial ECM. This indicates its advantages and potential in applications such as mini-UAV detection in some special situations.
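The quadrature-point adjustment mentioned above follows from the two-beam (low-finesse) F-P interference law, I ∝ 1 + V·cos(4πL/λ): the intensity-to-displacement sensitivity peaks where the cosine argument sits at an odd multiple of π/2 and vanishes at the interference extrema. The sketch below evaluates that slope versus static cavity length; the visibility and the scanned cavity-length range are illustrative assumptions, not fitted to this sensor.

```python
import numpy as np

def fp_intensity(cavity_len_um, wavelength_um=1.550, visibility=0.8):
    """Two-beam (low-finesse) F-P readout intensity, normalized."""
    return 0.5 * (1.0 + visibility * np.cos(4*np.pi*cavity_len_um / wavelength_um))

def displacement_sensitivity(cavity_len_um, wavelength_um=1.550, visibility=0.8):
    """|dI/dL|: largest at the quadrature points, zero at interference extrema."""
    return np.abs(0.5 * visibility * (4*np.pi/wavelength_um)
                  * np.sin(4*np.pi*cavity_len_um / wavelength_um))

# Scan cavity lengths near 144.4 um (illustrative) and report where the
# readout slope, i.e., the operating-point sensitivity, peaks.
L = np.linspace(144.30, 144.50, 4001)
s = displacement_sensitivity(L)
print(f"best operating cavity length in this scan: {L[np.argmax(s)]:.4f} um")
```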

    Apr. 10, 2024
  • Vol. 44 Issue 7 0728002 (2024)
  • Xiongxing Zhang, Zhe Sun, Xueqing Zhao, Zihao Gao, Xiaojun Feng, Wen Pan, and Haibin Chen

    ObjectiveA shock wave is a compression wave whose front propagates as a discontinuity surface in an elastic medium. Its typical feature is the abrupt, discontinuous change of the state parameters of the medium across this surface, such as pressure, density, and temperature. As the study of shock waves has progressed, shock wave technology has been found to have great civilian value, so the measurement of shock wave signals has become increasingly important. The formation and propagation of shock waves are accompanied by overpressure and rapid changes in pressure, which places demanding requirements on the response speed and reliability of the corresponding pressure sensors. Traditional electrical shock wave pressure sensors are limited by electromagnetic interference, restricted temperature tolerance, insufficient rise time, and other issues, which restrict their application. Fiber-optic Fabry-Perot (F-P) pressure sensors, as an important branch of fiber-optic sensors, provide new possibilities for dynamic pressure measurement of shock waves due to their advantages of fast response speed, high sensitivity, small size, and strong resistance to electromagnetic interference. To achieve the dynamic pressure measurement of shock waves, a thin-film fiber-optic F-P pressure sensor with a fiber-tip coating is studied.MethodsThe basic structure of the thin-film fiber-optic F-P sensor studied in this paper mainly consists of two gold films with different thicknesses, a layer of parylene film serving as the F-P cavity, and a single-mode optical fiber for optical field coupling. When shock wave pressure is applied to the end surface of the sensor, the parylene film deforms, changing the F-P cavity length. This change in length then affects the interference of the light reflected by the two gold films on the front and back surfaces of the F-P cavity. Before the sensor is fabricated, the optical and mechanical aspects of the sensor are simulated using finite element simulation software, the performance of the sensor under different parameters is calculated by combining theoretical formulas, and the parameters of the sensor are determined. After the sensor is fabricated, the static and dynamic pressure measurement systems are designed and constructed, and the experimental results are analyzed.Results and DiscussionsIn the pressure range of 0-60 MPa, a static pressure measurement experiment is conducted on the thin-film fiber-optic F-P pressure sensor using a bench-top oil pressure pump. The reflected spectrum of the sensor is obtained and processed to calculate the F-P cavity length under different pressures. The reflectance spectrum curves of wavelength versus light intensity under different pressures (Fig. 12) show that, with increasing pressure, the overall reflectance spectrum of the sensor drifts to the left. Based on the wave valley values at different pressures, the cavity length corresponding to each pressure is calculated (Fig. 13), yielding a wavelength sensitivity and a cavity length sensitivity of 0.0809 nm/MPa and 0.3200 nm/MPa, respectively, which are consistent with the simulation results. In the dynamic pressure measurement experiments, the sensor successfully captures a shock wave signal with a peak pressure of 7.47 MPa and a rise time of 75 ns (Fig. 15).ConclusionsFor measuring shock wave signals, we propose a thin-film fiber-optic F-P pressure sensor. The effective structure of the sensor is a three-layer stack of gold film, polymer film, and gold film. The applied pressure shifts the peak positions of the sensor's reflected spectrum, and tracking this spectral shift realizes the measurement of the pressure signal. In the pressure measurement range of 0-60 MPa, the wavelength sensitivity is 0.0809 nm/MPa, and the cavity length sensitivity is 0.3200 nm/MPa. Within the dynamic pressure measurement range, the sensor can measure dynamic signals with a pressure rise time of 75 ns and a pressure rise amplitude of 7.41 MPa. The experimental results show that the sensor has a large pressure measurement range and high sensitivity, together with small size, light weight, and immunity to electromagnetic interference. Therefore, the sensor has broad application prospects in the field of shock wave pressure measurement.
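
    The cavity-length demodulation described above tracks the wavelengths of adjacent valleys in the reflected spectrum. A minimal sketch of the standard two-valley Fabry-Perot relation is given below; the refractive index and valley wavelengths are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of two-wavelength Fabry-Perot cavity-length demodulation.
# Assumptions (not from the paper): the parylene cavity has refractive index
# n ~ 1.64 and two adjacent reflection-spectrum valleys near 1540/1560 nm.

def fp_cavity_length(lambda1_nm: float, lambda2_nm: float, n: float = 1.64) -> float:
    """Cavity length (nm) from two adjacent reflection-spectrum valleys.

    For adjacent valleys lambda1 < lambda2 of a low-finesse F-P cavity,
    L = lambda1 * lambda2 / (2 * n * (lambda2 - lambda1)).
    """
    if lambda2_nm <= lambda1_nm:
        raise ValueError("lambda2 must be the longer adjacent valley wavelength")
    return lambda1_nm * lambda2_nm / (2.0 * n * (lambda2_nm - lambda1_nm))

if __name__ == "__main__":
    L0 = fp_cavity_length(1540.0, 1560.0)   # unloaded cavity (illustrative valleys)
    L1 = fp_cavity_length(1539.5, 1559.6)   # valleys under applied pressure (compressed cavity)
    print(f"cavity length: {L0:.1f} nm -> {L1:.1f} nm, change {L1 - L0:+.1f} nm")
```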

    Apr. 10, 2024
  • Vol. 44 Issue 7 0728003 (2024)
  • Yize Liu, Junfeng Jiang, Kun Liu, Shuang Wang, Yixuan Wang, Xin Chen, and Tiegen Liu

    ObjectiveTrace gas detection holds practical significance in human health, industrial production safety, national defense, and other key fields. Optical fiber whispering gallery mode (WGM) sensors can achieve high-sensitivity, high-resolution measurements due to their strong light-matter interaction. However, the silica commonly used in WGM sensors is not sensitive to gases, which limits their application in gas sensing. As a two-dimensional material, graphene oxide (GO) not only has good physical properties such as high mechanical strength and flexibility, but also features a large surface-to-volume ratio, efficient surface adsorption, a low noise level, and stable chemical properties. Based on optical WGM excitation, the GO film is coated on the inner wall of a hollow microsphere cavity to achieve gas sensing. The adsorption of gas molecules on the GO affects the effective refractive index of the overall microcavity structure and is reflected in the WGM shift. It is worth noting that the unique hollow structure of the microbubble is a natural fluid channel, which is very suitable for gas transport, so it is unnecessary to design a separate fluid channel or external gas chamber.MethodsThe investigation is based on WGM sensor theory. The refractive index change induced by gas molecular adsorption is analyzed. The sensors are fabricated by pressurized melt stretching and injection of GO dispersion. First, the performance of the GO-coated WGM gas sensor is investigated, and the changes in WGM resonance wavelength are observed by injecting gases with different concentrations into the sensor. Next, the gas sensing performance below 40×10⁻⁶ is investigated in detail, and the sensitivity and resolution of the sensor are obtained. Finally, the real-time response to 10×10⁻⁶-40×10⁻⁶ NH3 is demonstrated to show the good recoverability, response time, and recovery time.Results and DiscussionsThe designed GO-coated microbubble sensor exhibits the desired gas sensing performance. Fig. 4 shows the WGM spectrum of the structure at different gas concentrations. The resonance wavelength is red-shifted as the gas concentration increases, and this trend gradually slows down. The optical quality factor Q is 3.7×10⁵. Specifically, for low concentrations from 0 to 40×10⁻⁶, the sensitivity is 0.73×10⁶ pm with a fitting coefficient of 0.9994 (Fig. 5). According to the standard deviation of the center-wavelength fluctuations, the detection resolution of the gas sensor is better than 1.9×10⁻⁶. The temperature response is shown in Fig. 6 and is 10.88 pm/℃. Finally, the time response of the gas sensor at low concentrations is shown in Fig. 7. At a concentration of 20×10⁻⁶, the response time and recovery time are 294 s and 329 s, respectively.ConclusionsWe design a gas sensor based on a GO-coated microbubble. Gas molecule adsorption affects the refractive index of the GO and correspondingly changes the overall effective refractive index of the microcavity sensor. Gas sensing can be achieved by monitoring the WGM shifts via a power meter. The sensors are fabricated by pressurized melt stretching and injection of GO dispersion. The sensitivity is 0.73×10⁶ pm for gas concentrations below 40×10⁻⁶. According to the wavelength drift standard deviation of the overall system, the resolution is 1.9×10⁻⁶. At a gas concentration of 20×10⁻⁶, the response time and recovery time of the sensor are 294 s and 329 s, respectively. 
Meanwhile, the hollow sensor structure does not need additional gas channels or gas chamber packaging structures during gas sensing, thus providing convenience for practical applications.
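
    The readout described above converts a WGM resonance shift into a concentration and derives a detection resolution from the wavelength-fluctuation statistics. The sketch below illustrates this conversion; the sensitivity value, fluctuation standard deviation, and the 3-sigma criterion are illustrative assumptions chosen to be of the same order as the reported figures, not the paper's exact convention.

```python
# Minimal sketch of a WGM gas-sensor readout (illustrative, not the paper's code).
# Assumption: at low concentration the shift is linear, delta_lambda [pm] = S * c,
# with c the gas volume fraction in units of 1e-6.

S_PM_PER_PPM = 0.73      # assumed sensitivity, pm per 1e-6 (illustrative)
SIGMA_LAMBDA_PM = 0.45   # assumed std. dev. of center-wavelength fluctuation, pm

def concentration_from_shift(delta_lambda_pm: float) -> float:
    """Estimated gas volume fraction (in units of 1e-6) from a WGM shift."""
    return delta_lambda_pm / S_PM_PER_PPM

def detection_resolution(sigma_pm: float = SIGMA_LAMBDA_PM, k: float = 3.0) -> float:
    """k-sigma detection resolution in units of 1e-6 volume fraction."""
    return k * sigma_pm / S_PM_PER_PPM

if __name__ == "__main__":
    print(f"shift of 14.6 pm -> ~{concentration_from_shift(14.6):.1f} x 1e-6")
    print(f"3-sigma resolution ~ {detection_resolution():.1f} x 1e-6")
```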

    Apr. 10, 2024
  • Vol. 44 Issue 7 0728004 (2024)
  • Yuzhi Shi, Chengxing Lai, Weicheng Yi, Haiyang Huang, Chao Feng, Tao He, Aiqun Liu, Weicheng Qiu, Zhanshan Wang, and Xinbin Cheng

    SignificanceMomentum is an important backbone of wave fields such as electromagnetic waves, matter waves, sound waves, and fluid waves. As the carriers of electromagnetic waves, photons possess both linear and angular momenta, which can interact with matter and generate optical forces. The technique of "optical tweezers", which uses optical forces to manipulate micro/nano-objects, was established by Arthur Ashkin between the 1970s and 1980s. Optical tweezers have unparalleled advantages in capturing and manipulating microscopic particles and provide new tools for research fields such as biomedicine, physics, and chemistry. In 1997, Steven Chu, Claude Cohen-Tannoudji, and William D. Phillips won the Nobel Prize in Physics for employing optical forces to achieve atomic cooling. Later, in 2018, Ashkin won half of the Nobel Prize in Physics for his pioneering contributions to optical tweezers and their implementation in biomedical applications.Optical forces adopted in optical manipulation mainly include two popular types: radiation pressure and the optical gradient force. Radiation pressure is the force along the direction of the Poynting vector due to light scattering and absorption, and it has important applications in atomic cooling, optical sorting, and particle propulsion. The optical gradient force is generated by the inhomogeneous intensity or phase distribution of the light field and has great potential in numerous physical and biomedical applications.The optical lateral force (OLF) is an extraordinary force that is perpendicular to the light propagation direction and independent of the intensity or phase gradient of the light field. It is related to the intrinsic and structural properties of light and matter. The strategy to realize an OLF is to break the system symmetry so that photons carry transverse momenta and consequently exert transverse optical forces on particles. Since the OLF was first proposed by Nori's and Chan's groups on the same day in 2014, various methods and mechanisms have been reported to configure and explore the OLF, such as utilizing the spin and orbital momenta of light, the coupling of light and particle chirality, spin-orbit interaction (SOI), and surface plasmon polariton (SPP).In the past ten years, the understanding of transverse momenta and the relevant light-matter interaction has reached a new stage. Transverse momentum, whether linear or angular, is closely related to the OLF since the optical force is a consequence of momentum exchange or translation between light and matter. For example, transverse spin momentum, also known as the Belinfante spin momentum (BSM), or transverse spin angular momentum (SAM), can generate a spin-correlated OLF. Additionally, the imaginary Poynting momentum (IPM) can also induce an OLF. The chirality of particles can couple with light and generate transverse energy flux and force near the surface. The conversion of spin angular momentum to orbital angular momentum, the so-called SOI, provides a new way to generate the OLF.Investigations of such extraordinary transverse light momenta and the OLF deepen the understanding of light-matter interactions and have tremendous applications in bidirectional enantioselective separation, meta-robots, spintronics, and quantum physics.ProgressWe review the current theoretical and experimental research progress on the OLF, including different mechanisms, experimental methods, and potential applications. 
We first introduce some fundamentals of transverse momenta and representative mechanisms for generating the OLF from both theoretical and experimental perspectives, including the BSM, chirality, SOI, IPM, and some other effects such as heat, electricity, bubbles, and topology. Meanwhile, we review some representative applications based on the OLF, such as meta-robots, particle sorting, and some other biomedical and chemical applications. Finally, we summarize this research direction and provide our vision of new physical mechanisms and more applications that may emerge in the future.Conclusions and ProspectsMomentum and force are two fundamental quantities in electromagnetics. With the innovation and burgeoning development of optical theories of transverse light momenta, mechanisms of the OLF are also advancing. The optical force has also become an essential platform and effective tool for testing and validating numerous optical phenomena, including transverse momenta.The traditional optical gradient force and radiation pressure have been widely studied in the past four decades, and their technical limitations in some applications have been well understood. Some peculiar optical forces discovered in recent years, such as the optical pulling force and the OLF, are playing increasingly important roles in high-precision optical manipulation. The OLF provides new possibilities for nanometer-precision sorting, enantioselective separation, and minuscule momentum probing. Additionally, the unprecedented advantages of metasurfaces in electromagnetic wave guidance and steering also present more possibilities for manipulating particles. Especially in recent years, with the rapid development of nanofabrication technology, a type of "meta-robot" driven by the OLF has emerged. Although it has not been implemented in practice, its interesting properties and the new degree of freedom in optical manipulation are expected to find many biomedical applications in the future, such as cargo transporting, biotherapy, and local probing. We can also envision various biological applications of the OLF, such as bilaterally sorting and binding tiny bioparticles, cargo transporting using metavehicles, stretching and folding DNA and protein molecules in line-shaped beams, enantioselective separation, and highly sensitive sensing based on helical dichroism. Therefore, we can conclude that with the development of modern optics and photonics, the two interrelated quantities of momentum and force will be explored more deeply and find wide applications in materials science, biophysics, quantum science, spintronics, optical manipulation, and sensing.

    Apr. 25, 2024
  • Vol. 44 Issue 7 0700001 (2024)
  • Yunfei Bai, Haiyan Luo, Zhiwei Li, Yi Ding, and Wei Xiong

    ObjectiveRaman spectroscopy is a non-destructive analytical technique based on the inelastic scattering of light by matter. The fluorescence background excited along with the Raman signal directly affects the characterization of the sample's Raman properties. The common approaches to reducing the fluorescence background are primarily implemented through hardware and software methods. Hardware methods mainly involve techniques such as shifted excitation Raman difference spectroscopy, time-resolved Raman spectroscopy, and deep ultraviolet Raman spectroscopy. Although these methods exhibit effective outcomes, they often entail complex instrument setups and high costs. Software methods, on the other hand, refer to utilizing signal processing techniques to subtract the fluorescence background from Raman spectra. Raman spectra are typically discontinuous, with sharp peaks, in contrast to the continuous and smooth trends often present in fluorescence spectra. Given the difference in their spectral characteristics, employing baseline correction algorithms to eliminate fluorescence interference during Raman spectroscopy analysis helps ensure the reliability and accuracy of Raman spectroscopy data. Common methods for mitigating the fluorescence background include polynomial fitting, discrete wavelet transform, morphological algorithms, variational mode decomposition, least squares methods, and neural networks. However, due to the presence of Raman peaks, these methods typically result in varying degrees of baseline rise in the fitting outcomes. In the present study, we report an adaptive iteratively reweighted penalized least squares (airPLS) method based on Raman peak truncation. By identifying the positions of Raman peaks, truncating the corresponding regions, and employing the airPLS algorithm for baseline fitting, the method reduces the rise in the fitted baseline caused by abrupt changes in intensity within the Raman peak regions, making the fitted baseline approach the true baseline more closely. We hope that this improved method will further enhance the accuracy of Raman spectroscopy baseline fitting.MethodsBaseline fitting is conducted with airPLS based on Raman peak truncation. Initially, the spectral signal is denoised with a Savitzky-Golay filter. Subsequently, we employ a peak-finding algorithm to identify Raman peaks within the denoised spectrum and use the first derivative of the spectrum to determine the left and right boundaries for each Raman peak. Following this, we truncate the Raman peaks within these defined boundaries to obtain the original baseline. An airPLS fitting is performed on this original baseline to derive a new baseline. At this stage, we compute the difference between the new baseline and the original baseline and truncate the regions where the absolute difference exceeds a threshold. We iterate this process by subjecting the truncated signal to successive airPLS fittings until the absolute difference between the baselines from two consecutive fittings is less than the threshold, concluding the iteration. The resulting fitted baseline is output. Here, the threshold represents the average of the absolute differences between the baselines from two consecutive fittings. Subtracting the fitted baseline from the original Raman spectrum yields the corrected Raman spectrum.Results and DiscussionsThe airPLS method based on Raman peak truncation demonstrates outstanding performance in baseline fitting. 
Simulated Raman spectra and measured Raman spectra from lipstick are utilized to validate the proposed baseline fitting method. Comparative analyses are conducted against commonly used baseline fitting methods, including polynomial fitting, discrete wavelet transform, variational mode decomposition, and airPLS. As depicted in Figs. 6(a), 7(a), and 7(b), although the algorithm proposed in this article achieves a baseline fit closer to the theoretical baseline in regions with weak Raman peaks or without Raman peaks, the improvement compared to the aforementioned algorithms is not especially pronounced. This indicates that the fitting capabilities of these algorithms are similar in spectra exhibiting gradual trends. However, near the Raman peak regions, as shown in Figs. 6(b), 6(c), and 7(c), these methods experience baseline elevation due to abrupt changes in spectral peak intensity. In contrast, the proposed baseline fitting method, which incorporates Raman peak truncation, minimizes the influence of Raman peaks on the fitting results, producing the closest fit to the theoretical values. Table 3 presents the root mean square errors (RMSE) between the fitted baseline and the theoretical baseline for our method and the aforementioned commonly used methods, evaluating the performance of these methods in spectra exhibiting both single and complex trends. Comparative analysis indicates that under various signal-to-noise ratios of spectra, our method yields the lowest RMSE, showcasing its superior performance. The baseline fitting results of the lipstick Raman spectrum shown in Fig. 8 are consistent with the simulated analysis outcomes. In the region devoid of Raman peaks (800-1000 cm⁻¹), the baseline fitting capabilities of the algorithms are similar. However, within the region with numerous Raman peaks (1100-1650 cm⁻¹), the baseline fits of all algorithms are affected to varying degrees by the spectral peaks, resulting in baseline rises. Our baseline fitting method, employing spectral peak truncation and an iterative approach, significantly mitigates the influence of spectral peaks on the fitting outcomes. This method maximizes the preservation of Raman peak intensities and stands out as the optimal choice among the various methods evaluated.ConclusionsWe introduce the Raman peak-truncated airPLS baseline fitting method. The utilization of Raman peak truncation mitigates the influence of abrupt changes in Raman peak intensity on baseline fitting. The method not only inherits the effective baseline fitting performance of airPLS in peak-free regions but also resolves the issue of baseline elevation caused by Raman peak intensity, thereby enhancing the accuracy of baseline fitting. Comparative experiments conducted on simulated spectra demonstrate the superior baseline fitting performance of Raman peak-truncated airPLS. Under different spectral signal-to-noise ratios, the RMSE for Type 1 spectra fitted by our method is below 0.0042, and for Type 2 spectra it is below 0.0052, the lowest among the compared methods. In experiments fitting Raman spectra from lipstick samples, airPLS based on Raman peak truncation outperforms several commonly used algorithms. It accurately restores Raman peak intensities without distorting the corrected spectra, effectively removing fluorescence baseline trends and meeting the requirements of Raman spectroscopy data processing.
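
    A minimal sketch of the iteration described in Methods is given below, assuming SciPy is available. The airpls routine is a compact stand-in for the published airPLS algorithm, "truncation" is emulated by linear interpolation across detected peak regions (the paper defines the regions from the first derivative), and all parameter values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from scipy.signal import savgol_filter, find_peaks

def airpls(y, lam=1e5, n_iter=15):
    """Compact adaptive iteratively reweighted penalized least squares baseline."""
    m = y.size
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(m - 2, m))  # 2nd differences
    H = lam * (D.T @ D)
    w = np.ones(m)
    z = y.copy()
    for i in range(1, n_iter + 1):
        W = sparse.diags(w)
        z = spsolve(sparse.csc_matrix(W + H), w * y)
        d = y - z
        neg = d[d < 0]
        if neg.size == 0 or np.abs(neg).sum() < 1e-3 * np.abs(y).sum():
            break
        w = np.where(d >= 0, 0.0, np.exp(i * np.abs(d) / np.abs(neg).sum()))
        w[0] = w[-1] = np.exp(i * np.abs(neg).max() / np.abs(neg).sum())
    return z

def peak_truncated_baseline(y, window=11, poly=3, prominence_frac=0.05, n_outer=20):
    """Baseline fit in the spirit of the Raman peak-truncated airPLS described above."""
    ys = savgol_filter(y, window, poly)                        # 1) Savitzky-Golay denoising
    _, props = find_peaks(ys, prominence=prominence_frac * np.ptp(ys))
    trunc = ys.copy()
    for left, right in zip(props["left_bases"], props["right_bases"]):
        xs = np.arange(left, right + 1)                        # interpolate across each peak
        trunc[left:right + 1] = np.interp(xs, [left, right], [ys[left], ys[right]])
    base = airpls(trunc)                                       # 2) initial airPLS fit
    for _ in range(n_outer):                                   # 3) iterate truncation
        diff = base - trunc
        thr = np.mean(np.abs(diff))
        trunc = np.where(np.abs(diff) > thr, base, trunc)      # truncate large-difference regions
        new_base = airpls(trunc)
        done = np.mean(np.abs(new_base - base)) < thr
        base = new_base
        if done:
            break
    return base
```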

    Apr. 10, 2024
  • Vol. 44 Issue 7 0730001 (2024)
  • Yuanhao Cai, Xiuhua Fu, Zhaowen Lin, Ben Wang, Zhuobin Huang, Yonggang Pan, Suotao Dong, and Guangyuan Fu

    ObjectiveWith the rapid development of ultraviolet optics and ultraviolet technology, the ultraviolet monochromator, as an important tool in the development of ultraviolet technology, provides strong support for related technological innovation. As an important optical component of the ultraviolet monochromator, the ultraviolet filter strongly affects the measurement accuracy of the monochromator. Its optical performance mainly includes transmittance, cut-off depth, and steepness of the transition zone, among which the influence of steepness is particularly important. In recent years, in-depth research has been carried out in China and abroad on the preparation of high-performance ultraviolet filter films, most of which focuses on center transmittance, cut-off depth, and cut-off band width. Although the optical performance has been improved to a certain extent, some stray light in the transition zone is still not effectively filtered out. To this end, based on the utilization requirements of the ultraviolet monochromator, a deep ultraviolet high-steepness filter film is developed to filter out stray-light interference and improve the measurement accuracy of the monochromator.MethodsBy analyzing material properties and studying thin-film design theory, Al2O3 and AlF3 are selected as the high and low refractive index materials respectively, and a filter film with wide cut-off in the vacuum ultraviolet and high transmission from the deep ultraviolet to the visible is designed on a fused silica substrate by the double-sided split design method. During thin-film preparation, the control variable method is adopted to optimize the deposition process and analyze the thin-film stress, and the optimal deposition process parameters are selected, which solves the problem of thin-film cracking caused by excessive stress in the high-steepness filter film. Additionally, the monitoring error of the film thickness is analyzed by inversion through repeated experiments, and the proportion coefficient of the film thickness is corrected to realize accurate monitoring of the film thickness and improve the steepness of the transition zone.Results and DiscussionsDue to the large number of layers in high-steepness filter films, the accumulated stress is large, which seriously affects the mechanical properties of the film and causes film cracking. To solve this problem, we discuss the influence of different deposition process parameters on the film quality (Table 2) and demonstrate the surface shape changes of the substrate before and after coating in Table 3. The power (curvature) of the coated surface increases after coating, and the residual stress of the film manifests as tensile stress on the fused silica substrate; it is calculated by the simplified Stoney formula (Fig. 12). Increasing the ion-assisted deposition energy effectively reduces the thin-film stress. Additionally, observations of the cracking degree of the film under different ion source parameters (Fig. 13) show that when the ion source parameters are increased to 200 V/2 A, the crack fringes of the film disappear completely, and the film properties are significantly improved. After thin-film deposition, the measured spectral curve (Fig. 14) deviates greatly from the theoretical design spectrum. 
The inversion analysis reveals that this deviation is mainly due to the quenching effect during thin-film deposition, which makes layers 33-54 thinner than designed; the error-inversion spectral curve is shown in Fig. 15. After the layer thicknesses are corrected through repeated experiments, the measured spectral curve is in good agreement with the design spectral curve (Fig. 16).ConclusionsBased on the theory of thin-film structure design, a reasonable selection of thin-film structure parameters is realized, and the layer thicknesses are appropriately adjusted and optimized, which effectively reduces the film preparation difficulty. By employing the control variable method, the influence of ion-assisted deposition energy on the thin-film quality is discussed in detail. When the ion source parameters increase from 0 V/0 A to 200 V/2 A, the film stress decreases from 219 MPa to 178 MPa, and the problem of film cracking is effectively solved. After the thin-film deposition is completed, the transmission spectrum curve is analyzed by inversion, and the proportion coefficient of the film thickness is adjusted to achieve accurate film thickness control. Finally, the deep ultraviolet high-steepness filter film has a transmittance of 3.05% at 227.7 nm, a transmittance of 89.91% at 231.3 nm, a steepness of 3.6 nm, and average transmittances of 97.67% and 0.61% at 232-400 nm and 115-228 nm respectively, which meets the needs of the ultraviolet monochromator.
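
    The residual-stress evaluation mentioned above relies on the simplified Stoney formula, which converts the substrate curvature change measured before and after coating into a film stress. The sketch below shows this relation; the substrate parameters and radii of curvature are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of residual film stress from the simplified Stoney formula.
# Assumed illustrative values (not from the paper): fused-silica substrate with
# Young's modulus ~72 GPa, Poisson ratio ~0.17, thickness 1 mm, film thickness 2 um.

def stoney_stress(E_s, nu_s, t_s, t_f, R_before, R_after):
    """Film stress in Pa: sigma = E_s*t_s^2 / (6*(1-nu_s)*t_f) * (1/R_after - 1/R_before).

    Lengths in meters, modulus in Pa; a positive result corresponds to tensile stress.
    """
    return E_s * t_s**2 / (6.0 * (1.0 - nu_s) * t_f) * (1.0 / R_after - 1.0 / R_before)

if __name__ == "__main__":
    sigma = stoney_stress(E_s=72e9, nu_s=0.17, t_s=1e-3, t_f=2e-6,
                          R_before=1e9,    # essentially flat substrate before coating
                          R_after=35.0)    # 35 m radius of curvature after coating (assumed)
    print(f"residual film stress ~ {sigma / 1e6:.0f} MPa")
```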

    Apr. 10, 2024
  • Vol. 44 Issue 7 0731001 (2024)
  • Jiacheng Chen, Wei Ma, Hongyu Zhu, Yusheng Zhou, Yaohui Zhan, and Xiaofeng Li

    ObjectiveSelf-adaptive thermal control devices have become a research focus due to their adaptive characteristics. However, on one hand, the special spectral requirements lead to a complex and time-consuming design process, and on the other hand, the device performance needs to be optimized for special application scenarios. To this end, we propose a deep generative network model to perform complex optimization tasks. Unlike traditional approaches relying on dataset updates, our model integrates a generative neural network with the transfer matrix method (TMM), which generates the expected multi-layer structure and automatically optimizes the material type and the thickness of each layer using the gradient information provided by TMM.MethodsFirstly, a neural network for global optimization is devised for the intricate structural design of photonic devices. The optimization network consists of a residual generative network and an electromagnetic solver, TMM. The residual generative network outputs the refractive index and thickness of the material in each layer. The TMM solver is employed to derive the spectrum of the generated structure and compute the loss function for backpropagation-based parameter updates until the network converges. Secondly, the material categories are constrained, and the material optimization space is limited to a finite number of material properties in a specified material library. We adopt a reparameterization technique to relax the refractive index to a continuous value and restrict it to a specified position on the continuous interval as the network updates. A hyperparameter is adopted to regulate the sharpness of the softmax function, thereby limiting the contribution of the various materials in the material library to a given layer. The influence of different loss functions and hyperparameters on network optimization is studied, the loss function is customized, and the best hyperparameters are selected to ensure that the network meets the requirements. Finally, the deep neural network model is utilized to optimize an adaptive thermal control device based on the phase change material vanadium dioxide. Structures with 10 and 60 film layers are optimized, and their spectra and field distributions at high and low temperatures are studied to assess the performance.Results and DiscussionsThe proposed global optimization network model eliminates the need for a dataset and can simultaneously optimize the design of material types and thicknesses. We employ 12 materials from the material library to automatically design and optimize multi-layer film devices for adaptive thermal control on a 500 nm Ag substrate. Firstly, a 10-layer adaptive thermal control device is optimized, and the film structure is shown in Table 1. The solar absorption ratio of this device is 0.19, and the difference in high- and low-temperature emissivity is 0.79. For thin films in the high-temperature state, the electric field intensity decreases monotonically along the incident direction. Due to the top-down absorption of the thin film in this state, almost no interference between the incident and reflected waves can be observed. For thin films in the low-temperature state, the entire film system becomes semi-transparent, and strong interference between the incident and reflected waves can be observed. Increasing the number of film layers to 60 improves the device performance, leading to a solar absorption ratio of 0.17 and an emissivity difference of 0.82 (Fig. 6). 
When the number of film layers is 10, the traditional neural network's loss value continuously decreases and stops decreasing after 20 optimization iterations, falling into a local optimum as the gradient vanishes. Meanwhile, the global optimization network exhibits spikes in the loss value attributable to varying initial points in each optimization run, which helps the structures escape local optima. As the number of film layers increases to 60, the global optimization network yields more instances where the results diverge from local minima. This characteristic enables the network to effectively explore global optimal solutions and mitigates the risk of the network converging to local optima (Fig. 7).ConclusionsWe develop a global optimization network framework for designing optoelectronic devices with complex multi-layer film structures. The network solves the material classification problem by adopting probability matrices, and residual modules in the network are also leveraged to make optimization easier. As a validation and demonstration of the network's optimization capabilities, we adopt this method to design an adaptive thermal control device based on vanadium dioxide. This structure can automatically turn radiative cooling on and off according to the environmental temperature without any additional energy input. Meanwhile, it yields excellent performance, with a high-temperature solar absorption ratio below 0.2, a high-temperature emissivity greater than 0.9, and an emissivity difference greater than 0.8. Compared with traditional optimization algorithms, the neural network searches for the optimal solution with more degrees of freedom and at higher speed, underscoring the practicality of this method in complex design tasks. The results suggest the versatility of this method in designing various optoelectronic systems and highlight the potential extension of this approach to 3D photonic structures using trained neural networks, which offers possibilities for more intricate photonic device design and effective material design in diverse fields.
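
    The material-selection step described in Methods relaxes the discrete "which material in this layer" choice to a temperature-controlled softmax over a material library so that gradients can flow. The sketch below illustrates this relaxation; the library values, logits, and sharpness schedule are illustrative assumptions (in NumPy), not the authors' network code.

```python
import numpy as np

# Assumed illustrative 4-material library of (n, k) pairs at one wavelength.
MATERIAL_LIBRARY = np.array([
    [1.46, 0.00],   # SiO2-like (assumed)
    [2.35, 0.00],   # TiO2-like (assumed)
    [0.05, 3.90],   # Ag-like (assumed)
    [2.90, 0.40],   # VO2-like, metallic phase (assumed)
])

def effective_nk_from_logits(logits: np.ndarray, tau: float) -> np.ndarray:
    """Effective (n, k) of one layer as a softmax-weighted mix of library entries.

    logits : trainable scores, one per library material.
    tau    : sharpness hyperparameter; tau -> 0 approaches a hard one-hot choice.
    """
    w = np.exp(logits / tau)
    w = w / w.sum()
    return w @ MATERIAL_LIBRARY

if __name__ == "__main__":
    logits = np.array([0.2, 1.7, -0.5, 0.9])
    for tau in (1.0, 0.3, 0.05):            # annealing the sharpness during training
        n_eff, k_eff = effective_nk_from_logits(logits, tau)
        print(f"tau={tau:4.2f}: n_eff={n_eff:.3f}, k_eff={k_eff:.3f}")
```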

    Apr. 10, 2024
  • Vol. 44 Issue 7 0731002 (2024)
  • Qi Chen, Zhibao Qin, Xiaoyu Cai, Shijie Li, Zijun Wang, Junsheng Shi, and Yonghang Tai

    ObjectiveReconstructing soft tissue structures based on the endoscope position in robotic surgery simulators plays an important role in robotic surgery simulator training. Traditional soft tissue structure reconstruction is mainly achieved through surface reconstruction algorithms using medical imaging data sets such as computed tomography and magnetic resonance imaging. These methods fail to reconstruct the color information of soft tissue models and are not suitable for complex surgical scenes. Therefore, we proposed a method based on neural radiance fields, combined it with classic volume rendering to segment robotic surgery simulator scenes from videos with deformable soft tissue captured by a monocular stereoscopic endoscope, and performed three-dimensional reconstruction to restore the biological soft tissue structures. By using the segmented arbitrary scene model (SASM) for segmentation modeling of time-varying and time-invariant objects in videos, specific dynamic occlusions in surgical scenes can be removed.MethodsInspired by recent advances in neural radiance fields, we first constructed a self-supervision-based framework that extracted multi-view images from monocular stereoscopic endoscopic videos and used the underlying 3D information in the images to construct geometric constraints of objects, so as to accurately reconstruct soft tissue structures. Then, the SASM was used to segment and decouple the dynamic surgical instruments, static abdominal scenes, and deformable soft tissue structures under the endoscope. In addition, this framework used a simple multilayer perceptron (MLP) neural network to represent moving surgical instruments and deformed soft tissue structures in dynamic neural radiance fields and proposed a skew entropy loss to correctly predict surgical instruments, cavity scenes, and soft tissue structures in surgical scenes.Results and DiscussionsWe employ MLPs to represent robotic surgery simulator scenes in the neural radiance field to accommodate the inherent geometric complexity and deformable soft tissue. Furthermore, we establish a hybrid framework of the neural radiance field and SASM for efficient characterization and segmentation of endoscopic surgical scenes in an endoscopic robotic surgery simulator. To address the dynamic nature of scenes and facilitate accurate scene separation, we propose a self-supervised approach incorporating a novel loss function. For validation, we perform a comprehensive quantitative and qualitative evaluation of a data set captured using a stereoendoscope, including simulated robotic surgery scenes from different angles and distances. The results show that our method performs well in synthesizing realistic robotic surgery simulator scenes compared with existing methods, with an average improvement of 12.5% in peak signal-to-noise ratio (PSNR) and an average improvement of 8.43% in structural similarity (Table 1). It shows excellent results and performance in simulating scenes and achieving high-fidelity reconstruction of biological soft tissue structures, color, textures, and other details. Furthermore, our method shows significant efficacy in scene segmentation, enhancing overall scene understanding and accuracy.ConclusionsWe propose a novel NeRF-based framework for self-supervised 3D dynamic surgical scene decoupling and biological soft tissue reconstruction from arbitrary multi-viewpoint monocular stereoscopic endoscopic videos. 
Our method decouples dynamic surgical instrument occlusions and deformable soft tissue structures, recovers a static abdominal volume background representation, and enables high-quality novel view synthesis. The key parts of our framework are the SASM and the neural radiance field. The segmentation module of the SASM decomposes the surgical scene into dynamic, static, and deformable regions. A spatiotemporal hybrid representation is then designed to facilitate efficient modeling of the decomposed neural radiance fields. Our method achieves excellent performance in various simulation scenes of robotic surgery data, such as large-scale moving surgical instruments and 3D reconstruction of deformable soft tissue structures. We believe that our method can facilitate robotic surgery simulator scene understanding and hope that the emerging NeRF-based 3D reconstruction technology can provide inspiration for robotic surgery simulator scene understanding and empower various downstream clinically oriented tasks.
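
    The "classic volume rendering" used to composite a radiance field into pixel colors follows the standard NeRF quadrature. The sketch below shows that generic discrete formula with random sample values; it illustrates the principle only and is not the authors' implementation.

```python
import numpy as np

def volume_render(sigmas: np.ndarray, colors: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Standard NeRF-style volume rendering along one ray.

    sigmas : (N,)   volume densities at the N samples
    colors : (N, 3) RGB radiance at the samples
    deltas : (N,)   distances between adjacent samples
    Returns C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with T_i = exp(-sum_{j<i} sigma_j * delta_j).
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))   # accumulated transmittance T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 64
    pixel = volume_render(rng.uniform(0, 2, N),
                          rng.uniform(0, 1, (N, 3)),
                          np.full(N, 1.0 / N))
    print("composited pixel color:", np.round(pixel, 3))
```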

    Apr. 10, 2024
  • Vol. 44 Issue 7 0733001 (2024)
  • Jin Zhao, Chenglong Wang, and Hong Yu

    ObjectiveSmall angle X-ray scattering (SAXS) is a powerful tool for measuring structural features on the order of 1-100 nm. Due to its high measurement accuracy and strong penetrability, SAXS has attracted much attention for characterizing the complex three-dimensional (3D) structural information of periodic nanostructures in integrated circuits (ICs) and has been successfully applied to high aspect ratio (HAR) structures, such as 3D-NAND and DRAM. SAXS for IC inline metrology is mostly based on compact X-ray sources. Limited by the brightness of compact X-ray sources, SAXS measurement requires a long exposure time to improve the signal-to-noise ratio (SNR) of SAXS signals. Owing to the integration effect of the long exposure time, numerous cosmic rays are inevitably introduced into the SAXS measurement pattern. As a typical kind of noise that is not correlated with SAXS signals, cosmic rays appear in SAXS patterns randomly and cause signal distortion, which has a negative effect on nanostructure information extraction. However, because they do not make full use of the signal's periodicity information, existing cosmic ray rejection algorithms cannot accurately identify and remove the cosmic rays that actually influence the SAXS signals in the measurement pattern. A new cosmic ray rejection method is needed for SAXS measurement patterns of periodic nanostructures, which will help improve the SNR of SAXS patterns and the performance of nanostructure information extraction.MethodsWe propose a cosmic ray rejection method for SAXS measurement patterns of periodic nanostructures. First, a pattern sequence including many short-exposure SAXS measurement patterns of periodic nanostructure samples is generated under the same measurement conditions. Then, the coordinates of the periodic scattering signals are calculated by taking the periodic information of the nanostructure as a physical prior, and cosmic rays existing in the effective signal area of each diffraction order in each scattering pattern are identified. After removing the abnormal frames influenced by cosmic rays from the pattern sequence, the SAXS measurement pattern after cosmic ray rejection is obtained by summing the remaining frames of the pattern sequence. A pattern sequence including 500 short-exposure SAXS measurement patterns of periodic nanostructure samples is used to evaluate the performance of the proposed method. The precision, miss rate, and false alarm rate of the cosmic ray detection results for the pattern sequence are calculated. Meanwhile, two existing cosmic ray rejection methods, Laplacian edge detection and multi-frame median pixel rejection, are selected as comparison methods, and cosmic rays are removed from the SAXS measurement pattern sequence using the two comparison methods and the proposed method. The mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) of the pattern sequence before and after cosmic ray rejection by the three methods are calculated. Since the influence of cosmic rays and Poisson noise on the SAXS measurement pattern is related to the exposure time, the competitive relationship between the two kinds of noise and the cosmic ray rejection performance of the proposed method at different exposure times are analyzed. 
This is realized by calculating the PSNR of the pattern sequence before and after denoising by the three methods as a function of the number of frames included in the sequence.Results and DiscussionsAccording to the confusion matrix calculated from the cosmic ray detection results of the pattern sequence including 500 short-exposure SAXS measurement patterns (Fig. 4), the precision, miss rate, and false alarm rate are 87.67%, 4.93%, and 5.18%, respectively. Compared with the two comparison methods, the pattern sequence denoised by this method has the best cosmic ray rejection effect, and the MSE, PSNR, and SSIM of the method are all optimal (Fig. 5 and Table 1). In particular, the PSNR of the pattern sequence increases by 5.55 dB after removing cosmic rays by this method. When the number of frames in the pattern sequence is small (equivalent to a short exposure time), Poisson noise dominates and the PSNR of the pattern sequence is so low that a longer exposure is needed. When the number of frames increases to about 200, cosmic rays seriously restrict the upper limit of the PSNR of the scattering pattern. However, the PSNR of the pattern sequence after denoising by this method still increases with the rising number of frames, and the growth rate of the PSNR is significantly higher than that of the comparison methods (Fig. 6).ConclusionsWe propose a method for cosmic ray rejection in SAXS measurement patterns of periodic nanostructures. A pattern sequence including 500 short-exposure SAXS measurement patterns of periodic nanostructure samples is simulated, and cosmic rays are removed from it by this method. According to the cosmic ray detection results, the miss rate and false alarm rate are both only about 5%, which proves that the proposed method has good detection performance for cosmic rays in single-frame scattering patterns. Meanwhile, the PSNR of the pattern sequence increases by 5.55 dB after removing cosmic rays by the proposed method. The PSNR gain greatly improves the reliability and accuracy of extracting the periodic nanostructure information. By analyzing the competitive relationship between Poisson noise and cosmic rays and evaluating the cosmic ray rejection performance of the proposed method at different exposure times, we find that this method can break the upper limit of PSNR caused by cosmic rays and continuously improve the PSNR of the scattering pattern. This proves that the method can obtain an excellent PSNR gain under long-exposure integration conditions. In principle, the proposed method provides a reliable cosmic ray rejection scheme for SAXS measurement patterns of periodic nanostructures, effectively improving the detection SNR of SAXS patterns. The method features a simple principle and fast operation and thus is of practical significance for improving the inline metrology performance of SAXS.
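
    A hedged sketch of the multi-frame rejection logic described above is given below: given the known locations of the periodic diffraction orders, frames whose signal regions contain outlier pixels (relative to a robust per-pixel statistic across the stack) are dropped before summation. The outlier criterion (median + k·MAD) and all parameter values are assumptions made for illustration, not the paper's exact detection rule.

```python
import numpy as np

def reject_cosmic_ray_frames(stack: np.ndarray, signal_mask: np.ndarray, k: float = 8.0):
    """Sum a short-exposure SAXS frame stack after dropping cosmic-ray frames.

    stack       : (F, H, W) short-exposure patterns taken under identical conditions
    signal_mask : (H, W) boolean mask of the effective signal area of the
                  diffraction orders (derived from the nanostructure periodicity)
    k           : outlier threshold in robust sigma units (assumed value)
    """
    sig = stack[:, signal_mask]                        # (F, P) pixels inside signal areas
    med = np.median(sig, axis=0)                       # per-pixel median across frames
    mad = np.median(np.abs(sig - med), axis=0) + 1e-12
    outliers = np.abs(sig - med) > k * 1.4826 * mad    # cosmic-ray hits in signal area
    keep = ~outliers.any(axis=1)                       # drop any frame containing a hit
    return stack[keep].sum(axis=0), keep

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = rng.poisson(5.0, size=(50, 64, 64)).astype(float)
    frames[7, 30, 30] += 500.0                         # inject a fake cosmic-ray hit
    mask = np.zeros((64, 64), dtype=bool)
    mask[28:36, 28:36] = True                          # pretend this is one diffraction order
    summed, keep = reject_cosmic_ray_frames(frames, mask)
    print(f"kept {int(keep.sum())} of {len(frames)} frames")
```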

    Apr. 10, 2024
  • Vol. 44 Issue 7 0734001 (2024)
  • Chuanjiang Liu, Ao Wang, Genyuan Zhang, Wei Yuan, and Fenglin Liu

    ObjectiveThe spatial resolution of X-ray imaging systems is crucial for studies of microstructural objects due to the small size of the subjects. Specifically, the focal spot size of the X-ray source is a main factor affecting the spatial resolution of micro-computed tomography (micro-CT): a finite focal spot produces penumbra blur on the detector and thus blurs the reconstructed images and reduces the spatial resolution. Reducing the focal spot size by decreasing the X-ray tube power is a straightforward solution, but it prolongs the scan duration. Therefore, we aim to develop a deep learning-based strategy that learns the inverse finite focal spot model to mitigate the penumbra blur and obtain CT images with high spatial resolution even in the case of a non-ideal X-ray source.MethodsFirst, we derive the finite focal spot model, which builds a relationship from the ideal point-source projection to the finite focal spot projection. Based on the derived model, we numerically compute a paired projection dataset. Second, we combine the U-net neural network with a convolution modulation attention block to build a self-attention-mechanism-based U-net (SU-net) and thus learn the inverse finite focal spot model. The goal is to estimate the ideal point-source projection from the actual non-ideal focal spot projection. The SU-net (Fig. 1), which introduces convolution modulation blocks into the contracting path of the U-net, is proposed to boost U-net performance. Finally, the standard filtered back-projection (FBP) is employed for reconstruction using the estimated ideal point-source projection.Results and DiscussionsSimulation experiments are performed on the public dataset 2DeteCT, which consists of a wide variety of dried fruits, nuts, and different types of rocks, to verify the effectiveness of the SU-net. Two groups of results are randomly selected from the test dataset for visualization (Fig. 2), and quantitative indicators are evaluated on the whole test dataset (Fig. 3). The results show that our proposed SU-net can estimate the ideal point-source projection from the non-ideal focal spot projection. To verify the robustness of the SU-net, we test it with data outside the simulation experimental dataset (Fig. 4), and the results show that it has better generalization than the end-to-end enhanced super-resolution generative adversarial network (ESRGAN). Meanwhile, an ablation experiment is conducted with the same dataset and experimental parameters as the simulation experiment to confirm the validity of the added convolutional modulation (CM) module and gradient deviation loss, with quantitative indicators measured (Table 1). The results show that both the CM module and the gradient deviation loss can improve the network performance. Practical experiments are carried out to evaluate the effectiveness of the SU-net algorithm on real data (Fig. 5). Since it is difficult to obtain label data in the actual experiment, we select three evaluation indicators that do not require label data (Table 2), including PIQE (perception-based image quality evaluator), NIQE (natural image quality evaluator), and the image sharpness evaluation function based on the DCT (discrete cosine transform). 
The results show that our proposed SU-net algorithm achieves the best results among the compared methods.ConclusionsIn micro-CT imaging, the focal spot of the actual X-ray source has a finite size; with a relatively large focal spot, the projection image is blurred, and reconstructing the measured projections directly with a CT algorithm based on the point-source model yields blurred images. We propose a U-net based on the self-attention mechanism to estimate the ideal point-source projection from the actual measured non-ideal focal spot projection. Meanwhile, we establish a training dataset according to the relationship between the non-ideal focal spot projection and the ideal point-source projection to optimize the network. Simulation and practical experiments show that this method can effectively estimate clear projections from blurred projections. One advantage of the proposed method is that we can construct a dataset from the relationship between the finite focal spot projection model and the ideal point-source projection model, without collecting data pairs composed of non-ideal focal spot projections and ideal point-source projections, which greatly reduces the difficulty of constructing datasets. Second, the proposed network, which is directly based on the relationship between the finite focal spot projection model and the ideal point-source projection model, has strong interpretability: the inverse relationship from the finite focal spot model to the ideal point-source model is learned by the network. Therefore, this method has better generalization than the end-to-end ESRGAN, especially for preserving image details with high fidelity in CT images. A limitation is that the training is conducted for a specific focal spot size and a specific scanning geometry without considering the influence of noise. Subsequent studies will train networks for different focal spot sizes and geometric parameters and will consider the influence of noise.
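
    The paired-data generation described in Methods amounts to mapping each ideal point-source projection to a finite-focal-spot projection by blurring it with a magnification-scaled penumbra kernel. The sketch below illustrates this with a Gaussian focal-spot profile; the kernel shape, geometry, and pixel size are assumptions made for illustration and not the paper's exact model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def finite_focal_spot_projection(ideal_proj: np.ndarray,
                                 focal_spot_fwhm_mm: float,
                                 src_obj_mm: float, src_det_mm: float,
                                 pixel_mm: float) -> np.ndarray:
    """Simulate a finite-focal-spot projection from an ideal point-source projection.

    The penumbra kernel width on the detector scales with the geometric
    magnification: w_det = focal_spot * (src_det / src_obj - 1).
    A Gaussian focal-spot profile is an assumption made for this sketch.
    """
    magnification = src_det_mm / src_obj_mm
    penumbra_fwhm_mm = focal_spot_fwhm_mm * (magnification - 1.0)
    sigma_px = penumbra_fwhm_mm / (2.355 * pixel_mm)      # FWHM -> Gaussian sigma, in pixels
    return gaussian_filter(ideal_proj, sigma=sigma_px)

if __name__ == "__main__":
    ideal = np.zeros((256, 256)); ideal[96:160, 96:160] = 1.0   # toy ideal projection
    blurred = finite_focal_spot_projection(ideal, focal_spot_fwhm_mm=0.05,
                                           src_obj_mm=50.0, src_det_mm=500.0,
                                           pixel_mm=0.05)
    # Crude edge-sharpness indicator: maximum gradient along the central row.
    print("max edge gradient after blurring:", float(np.abs(np.diff(blurred[128])).max()))
```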

    Apr. 10, 2024
  • Vol. 44 Issue 7 0734002 (2024)