Acta Optica Sinica
Co-Editors-in-Chief
Qihuang Gong
Jiwei Xing, Wenhui Sun, Xuelian Liu, Yanfen Liu, Xiaohua Liu, Xiaojun Liu, Binzheng Hao, Jianjun Li, Wang Luo, Qinan Li, and Haichao Yu

Objective
The interaction between laser pulses and materials has been extensively studied in recent decades as the physical mechanism common to applications such as laser propulsion (LP) and laser-induced breakdown spectroscopy. LP has gained widespread attention due to its inherent advantages of reducing launch costs and increasing payload. With the development of LP, the research focus has gradually shifted from the macroscopic to the microscopic scale. However, during the propulsion process, direct irradiation of particles by high-energy laser pulses can cause permanent damage to the particle surface, and a large laser spot size can deflect the particle's trajectory. Therefore, a device that can control the spot diameter and reduce surface damage to particles is needed. In this work, we propose LP based on a tapered fiber to propel microscale microspheres and analyze the mechanism of LP from the motion of the microspheres. We study the effects of laser energy and microsphere size on the movement distance of the microsphere. In addition, we analyze how the laser energy emitted from the fiber tip depends on the tip size and discuss the relationship between laser energy density and fiber tip diameter, revealing a nonlinear increase in laser energy and a decrease in scattering loss as the fiber diameter increases. Our research may provide further support for the precise manipulation of colloids and biomaterials at the micrometer level.

Methods
1) Experimental setup for LP. A tapered fiber structure is prepared by flame heating. A Nd:YAG laser is coupled into the fiber through a 40× objective lens and emitted from the fiber tip. The tip and the microspheres are placed on three-dimensional translation stages. By combining the vertical camera CCD1 and the horizontal camera CCD2, the driven microspheres can be controlled flexibly and precisely. The dynamics of the microspheres are captured by the CCD1 camera.
After the propulsion experiment is completed, the laser energy emitted from the fiber tip is measured by an energy meter. 2) Formation of the plasma shock wave. During the interaction between the laser emitted from the fiber tip and the atoms, electrons in the atoms are excited or transition to higher energy states. These high-energy electrons are accelerated and further excite electrons in the atoms. The high-energy electrons collide with other atoms, generating additional electrons. When the electron density reaches a critical value (~10^16 cm^-3), a high-temperature, high-pressure plasma is formed. Subsequently, the shock wave generated by the expansion of the plasma propels the microsphere forward through the recoil effect. 3) Calculation of the microsphere's movement distance and velocity. The dynamics of the microspheres are recorded by the CCD1 camera at a frame rate of 1000 frame/s, so the time interval between adjacent images is 1/1000 s. Taking the image at t = 1/1000 s as the initial state, the displacement s of the microsphere within the 0-1/1000 s interval is determined from two consecutive images, and the average velocity v = s/t is then calculated. We take this average velocity as the initial velocity because of the short interaction time between the laser pulse and the microsphere. To reduce experimental errors, the experiment is repeated three times under the same conditions.

Results and Discussions
In the experiment of propelling a microsphere with a diameter of ~80 μm using a laser with an energy of ~9.6 μJ through a ~8 μm fiber tip, the microsphere moves a distance of 547 μm within a time range of 6/1000 s. The maximum velocity is calculated to be 12.4 cm/s, and the momentum is determined to be P = 8.3×10^-11 N·s. The calculated value P differs from the theoretical photon-momentum value P_M by three orders of magnitude.
By adjusting the relative position between the fiber tip and the microsphere, we observe that the microsphere can move along the fiber direction as well as diagonally. These findings indicate that the ejection mechanism of the shock wave plays a dominant role in propelling the microspheres (Fig. 2). In the qualitative study of the effects of laser energy and microsphere size on microsphere movement, we find that the movement distance of the microsphere increases with increasing laser energy. This can be explained as follows: as the laser energy increases, the energy carried by the shock wave formed by the plasma expansion increases, so a greater force is exerted on the surface of the microsphere. On the other hand, as the size of the microsphere increases, the movement distance decreases, which can be attributed to the increased resistance between the larger microsphere and the substrate surface. These experimental results further illustrate the propagation characteristics of the shock wave (Fig. 3). After investigating the relationship between laser energy and fiber tip diameter [Fig. 5(b)], we discover that the laser energy emitted from the fiber tip increases nonlinearly, which is attributed to declining scattering loss with increasing fiber diameter. The calculated limit of the output energy density at the fiber tip is ~1.15 μJ/μm². For a fiber tip diameter of approximately 2 μm, the energy density is ~1.25 μJ/μm² [Fig. 5(c)], indicating that the fiber tip has been damaged.

Conclusions
We present a straightforward solution that makes LP of microspheres feasible using a tapered fiber structure. In the experiment, a laser with an energy of ~9.6 μJ is emitted from the fiber tip, driving the movement of a microsphere ~80 μm in diameter. Within a time range of 6/1000 s, the microsphere moves a distance of 547 μm.
The fact that P_M ≪ P indicates that the motion of the microsphere is primarily governed by the ejection mechanism of the shock wave, similar to the launch of a bullet. In the qualitative study of the influence of laser energy and microsphere size on microsphere movement, we observe that the movement distance of the microsphere increases with increasing laser energy and decreases with increasing microsphere size. We analyze the relationship of pulse energy and energy density with fiber tip diameter, revealing a nonlinear increase in energy that is attributed to declining scattering loss with increasing fiber diameter. In terms of propulsion, the laser emitted from the fiber tip is characterized by low energy and long propagation distances. The interaction between light and microspheres observed in this experiment may provide valuable insights for future research on manipulating colloids and biomaterials at the microscale.
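The order-of-magnitude argument above (measured momentum P versus theoretical photon momentum P_M) can be checked numerically. This is a hedged sketch: the silica density is an assumption not stated in the abstract, and the photon-momentum formula assumes full absorption of the pulse.

```python
# Order-of-magnitude check of the momentum transfer described above.
# Assumed (not in the abstract): silica density 2200 kg/m^3; full
# absorption for the photon momentum P_M = E/c.
import math

E = 9.6e-6            # laser pulse energy, J (~9.6 uJ, from the abstract)
c = 3.0e8             # speed of light, m/s
d = 80e-6             # microsphere diameter, m (from the abstract)
rho = 2200.0          # assumed silica density, kg/m^3
v = 12.4e-2           # measured maximum velocity, m/s (from the abstract)

m = rho * (4.0 / 3.0) * math.pi * (d / 2.0) ** 3   # sphere mass, kg
P = m * v             # momentum from the measured velocity, N*s
P_M = E / c           # photon momentum for full absorption, N*s

# P exceeds P_M by roughly three orders of magnitude, so radiation
# pressure alone cannot explain the motion; shock-wave recoil can.
print(P, P_M, P / P_M)
```

With these assumed values, P comes out within a factor of ~1.1 of the reported 8.3×10^-11 N·s, and the P/P_M ratio lands in the thousands, consistent with the "three orders of magnitude" statement.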

May. 10, 2024
  • Vol. 44 Issue 9 0914001 (2024)
  • Tianyi Zhang, Yiyi Xu, Lifang Feng, and Zhuo Xue

Objective
In recent years, the use of low-cost vision sensors for navigation and positioning has received increasing attention. Because vision sensors offer high measurement accuracy, a wide range, rich information, and non-contact, flexible, portable, and low-cost operation, they can achieve large-scale multi-target tracking and complete positioning tasks in complex and confined industrial field environments. We study an indoor visual positioning system based on a camera and QR codes. Firstly, the effective recognition range of the QR code beacon is analyzed, and a formula for the recognition range based on marker size, camera resolution, and other parameters is derived. Based on this formula, the layout of the QR code beacons in the positioning scene is designed, and system positioning is realized by the perspective-n-point (PnP) calibration algorithm. Finally, the validity of the QR code recognition range is verified by experiments.

Methods
We conduct the following research based on the existing perspective-four-point (P4P) QR code location algorithm. 1) We define the recognition range of the QR code and derive the recognition range formula from the recognition algorithm accuracy, QR code size, camera resolution, and camera field of view (FOV). 2) According to this definition and calculation, we design the QR code beacon deployment scheme for the target scene, achieving large-range positioning and recognition coverage with fewer QR codes, improving the recognition rate, and ensuring the accuracy of the positioning algorithm.
3) We analyze the positioning performance of the system with the camera both fixed and moving, calculate the positioning accuracy and positioning recognition rate under different conditions, and verify the theoretical recognition range and positioning recognition rate.

Results and Discussions
The actual test environment is a room of 7 m×5 m×3 m (Fig. 6), and the relevant experimental parameters are shown in Table 1. To ensure the overall accuracy of the positioning system and the success rate of positioning, the spacing of the QR code beacons is reduced to 2 m during the actual deployment. According to the layout of the room, a spatial rectangular coordinate system is established, and QR codes are deployed at the four positions marked in Table 1 so that the recognition range covers the whole room. To verify the effectiveness of the recognition range algorithm and the deployment scheme, we design two experiments. Experiment 1: To verify the positioning accuracy at fixed positions within the recognition range, we carry out positioning accuracy tests at different positions within the recognition range of the four QR codes. The test results are shown in Fig. 7. The error at the edge of the recognition range is slightly larger than that at its center. The positioning error directly below the QR code is less than 6 cm, and the positioning error near the edge of the recognition range is less than 10 cm. The overall average positioning error is 8.32 cm, which is basically consistent with the theoretical positioning error of the algorithm; thus, the positioning accuracy within the recognition range of the QR code is not degraded. Experiment 2: The recognition rate is tested in the positioning scene (Fig. 8).
A Raspberry Pi 3B is used to build the robot platform, and the camera is mounted on the robot, which moves around the room at a constant speed of 0.33 m/s along a straight or circular route. During this process, positioning data are collected at constant time intervals, and the number of successful positionings is counted by the positioning program. Whenever the program successfully identifies the QR code and outputs a positioning result, and the computed position deviates from the actual position or route by no more than 15 cm, the result is counted as a successful positioning, and the deviation from the route is taken as the positioning error. The test results (Table 2) show that when the robot moves along a straight or circular route, the recognition rates of the QR code are 92.31% and 91.59%, respectively. Within the recognition range of the QR code, the positioning recognition rate meets the requirements. The cumulative distribution function curves of the positioning error at fixed positions and during motion are shown in Fig. 9. It can be seen that when the robot moves, the error distribution for straight-line motion is better than that for circular motion. In addition, the errors of both motion modes change little compared with the average positioning error at fixed positions, and 90% of the positioning results have errors of less than 9 cm. This shows that the positioning accuracy is essentially unaffected when the robot moves within the QR code recognition range, and that the QR code beacon deployment scheme designed in the experiment meets the requirements of positioning accuracy and positioning success rate.

Conclusions
We study the recognition range of an indoor visual positioning system based on QR codes and the deployment scheme of QR code beacons.
To improve the deployment efficiency of the QR codes and the coverage of the positioning system, we first define the recognition range of the QR code and derive a formula for the positioning recognition range from the performance of the QR code recognition algorithm, marker size, camera resolution, and other parameters. Then, a deployment strategy for the QR code beacons is given for the positioning scenario, and the validity of the QR code positioning recognition range and the beacon deployment scheme is verified by experiments. The results show that, within the recognition range of the QR code, the average positioning error at fixed positions is 8.32 cm. With the positioning system deployed on a robot moving linearly and circularly through the QR code beacon scenario, the recognition rates of the QR code are 92.31% and 91.59%, respectively, which meets the positioning coverage requirements, and the positioning accuracy is almost identical to the average positioning error at fixed positions. Our QR code beacon deployment strategy is shown by experimental tests to be effective, improving the positioning efficiency and reliability of the P4P-based QR code indoor positioning algorithm.
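The recognition-range idea above can be illustrated with a generic pinhole-camera estimate. This is a hedged sketch, not the paper's derived formula: it assumes only that the decoder needs at least some minimum number of pixels across the marker, and uses a small-angle approximation; all the numeric inputs below are illustrative assumptions.

```python
# Hedged sketch (not the paper's exact formula): a generic pinhole-camera
# estimate of the maximum distance at which a QR code remains decodable.
# Assumption: the decoder needs at least min_px pixels across the marker.
import math

def max_recognition_distance(marker_size_m, image_width_px, fov_rad, min_px):
    """Small-angle estimate: pixels across the marker at distance D are
    approximately marker_size / (D * fov / width); solving for the
    distance where that count drops to min_px gives D_max."""
    return marker_size_m * image_width_px / (min_px * fov_rad)

# Illustrative numbers (all assumed): 0.2 m marker, 1920 px wide sensor,
# 60 deg horizontal FOV, decoder needing ~50 px across the code.
D = max_recognition_distance(0.2, 1920, math.radians(60), 50)
print(round(D, 2))  # metres
```

Under these assumed parameters the estimate lands around 7 m, which is at least compatible with the 7 m×5 m room being coverable by four well-placed beacons; the paper's own formula additionally accounts for recognition algorithm accuracy.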

    May. 10, 2024
  • Vol. 44 Issue 9 0915001 (2024)
  • Chenxia Hu, Ying Liu, Chenglong Wang, Guangpeng Zhou, Chen Yu, and Boshi Dang

Objective
Compared with visible-light systems, cooled infrared imaging optical systems perform better in harsh climatic conditions. Compared with uncooled infrared imaging optical systems, they offer higher detection sensitivity, longer viewing distances, and better image quality. Therefore, cooled infrared imaging optical systems are widely used in fields such as aerospace and military applications. However, cooled infrared imaging optical systems with long focal lengths and large apertures suffer from long barrels, large volume, and high cost. To solve these problems and achieve a cold-shield efficiency of 100%, a catadioptric design is generally adopted, such as a Cassegrain-based catadioptric optical system. As sufficient theoretical guidance for determining the initial structure of such systems is lacking, we propose a method for determining the optimal values of the key parameters. We design a Cassegrain-based catadioptric cooled mid-wave infrared imaging optical system, which provides important theoretical guidance for determining the initial structure of this kind of system.

Methods
We derive calculation formulas, expressed in terms of three key parameters: the shading coefficient α, the magnification of the Cassegrain secondary mirror βsec, and the vertical magnification of the relay mirror group βrelay, for the initial structure parameters of the optical system, the T value characterizing system length, and the primary spherical aberration and primary coma of the Cassegrain system. The variation of the difficulty of aberration correction and of the system compactness with α, βsec, and βrelay is analyzed through the derived formulas. Based on the trade-off between aberration-correction difficulty and system compactness, the optimal-value method for the key parameters is proposed.
The initial structure of the optical system is determined by the optimal-value method and further optimized in ZEMAX. A catadioptric cooled mid-wave infrared imaging optical system is designed, whose focal length is -600 mm and whose F-number is 2. Finally, we perform a tolerance analysis of the optical system using the Monte Carlo statistical analysis method, which confirms the correctness of the theory and the manufacturability of the optical system.

Results and Discussions
Based on the derived formulas for the T value of the optical system and the primary spherical aberration and primary coma of the Cassegrain, the variation curves of SⅠ, SⅡ, and the T value with α, βsec, and βrelay are given (Figs. 4-6). We also analyze how the system compactness and the difficulty of aberration correction change with α, βsec, and βrelay. Based on the trade-off between aberration-correction difficulty and system compactness, we propose the optimal-value method for the key parameters: the value of α should be as small as possible to ensure sufficient light intake and a compact structure, and the value of βsec should be as large as possible to reduce the difficulty of aberration correction. Given the trade-off between aberration-correction difficulty and compactness, the value of βrelay should be neither too large nor too small. Based on the optimal-value method, the three key parameters are set to α=0.3, βsec=-3, and βrelay=-0.5. The initial structure of the Cassegrain is determined from these values and optimized slightly in ZEMAX. The design results show that the initial Cassegrain structure determined by the optimal-value method needs only simple optimization to obtain good image quality (Fig. 7).
The initial structure of the optical system is formed by connecting the relay mirror group to the small-aberration Cassegrain (Fig. 8) and is then optimized further. We obtain a catadioptric cooled mid-wave infrared imaging optical system with a long focal length and a large aperture, composed of the Cassegrain and a relay mirror group with six lenses (Fig. 9). The optical system is compact, with a total length of 428 mm. Compared with the initial structure, the value of βrelay decreases, which confirms that the barrel length can be reduced by reducing βrelay. Although the aberration of the Cassegrain increases significantly, the residual aberration can be fully compensated by the relay mirror group. At 33 lp/mm, the modulation transfer function (MTF) value in each field of view is greater than 0.4 (Fig. 10), so the imaging quality of the optical system is ideal. The Monte Carlo tolerance analysis shows that more than 98% of the samples have MTF values greater than 0.2 and more than 90% have values greater than 0.3. The imaging quality of the optical system meets the requirements, and the system is manufacturable.

Conclusion
For the design of a Cassegrain-based catadioptric optical system, we propose an optimal-value method for the key parameters. The method provides theoretical guidance for selecting the key parameters when determining the initial structure of this kind of optical system and solves the problems of excessive length and difficult aberration correction caused by improper values of the key parameters. The initial structure of the Cassegrain is slightly optimized in ZEMAX, and the results show that the system obtained by this method meets the compactness requirement and reduces the difficulty of aberration correction.
After optimizing the initial structure, we obtain a catadioptric cooled mid-wave infrared imaging optical system with a long focal length and a large aperture, compact in structure with a total length of 428 mm. The MTF value in each field of view is greater than 0.4 at the Nyquist frequency, and the root-mean-square (RMS) spot radius in each field of view is less than 4 μm, indicating that the imaging quality of the optical system is ideal. The Monte Carlo tolerance analysis shows that more than 98% of the samples have MTF values greater than 0.2 and more than 90% have values greater than 0.3. Therefore, the imaging quality of the optical system meets the requirements, and the system is manufacturable. The design results show that when designing a Cassegrain-based catadioptric optical system, the initial structure can be determined by the optimal-value method of key parameters that we propose, and an optical system with ideal image quality and a compact structure can be obtained by conventional optimization.
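To make the role of the key parameters concrete, the following sketch evaluates the textbook two-mirror (Cassegrain) starting-point relations under one common sign convention. These are NOT the paper's derived formulas (which also cover the relay group, the T value, and the aberration terms SⅠ and SⅡ), and sign conventions for the secondary magnification vary between texts; the numbers below are illustrative only, with α and |βsec| echoing the paper's α = 0.3 and |βsec| = 3.

```python
# Hedged sketch: textbook Cassegrain starting-point geometry from an
# obscuration ratio alpha and a secondary magnification beta, under one
# common sign convention (both taken positive here for simplicity).

def cassegrain_geometry(f_sys, alpha, beta):
    """f_sys: system focal length; alpha: obscuration ratio (secondary
    intercept height over primary aperture height); beta: secondary
    magnification. Returns (primary radius, secondary radius, mirror
    separation, back focal distance)."""
    f1 = f_sys / beta                          # primary focal length
    R1 = 2.0 * f1                              # primary radius of curvature
    R2 = 2.0 * alpha * f_sys / (beta + 1.0)    # secondary radius of curvature
    sep = f1 * (1.0 - alpha)                   # primary-secondary separation
    bfd = alpha * f_sys                        # back focal distance
    return R1, R2, sep, bfd

# Illustrative numbers only (600 mm focal length assumed positive here;
# the paper's own design uses -600 mm and beta_sec = -3 in its convention).
R1, R2, sep, bfd = cassegrain_geometry(600.0, 0.3, 3.0)
print(R1, R2, sep, bfd)
```

The sketch shows the qualitative behavior the paper analyzes: shrinking α shortens the back focal distance and tightens the obscuration, while increasing |β| shortens the primary focal length, trading compactness against aberration-correction difficulty.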

    May. 10, 2024
  • Vol. 44 Issue 9 0922002 (2024)
  • Tong Yang, Yongdong Wang, Lü Xin, Dewen Cheng, and Yongtian Wang

Significance
Progress in imaging and display optical systems exerts a significant influence on the development of science and technology. Imaging and display systems intrinsically use optical elements (geometric or phase elements) to modulate optical wavefronts and achieve the expected imaging relationships, system specifications, and structural requirements. As representative geometric and phase elements respectively, freeform optical elements (FOEs) and holographic optical elements (HOEs) have significant advantages in optical system design. FOEs possess high degrees of design freedom, which greatly enhance the ability to modulate wavefronts and improve imaging performance; in addition, freeform surfaces can correct the aberrations of optical systems with off-axis nonsymmetric structures. Meanwhile, HOEs can unconventionally deflect rays at large angles due to their unique ability to modulate optical wavefronts. They can dramatically reduce the weight and volume of optical systems thanks to their lightweight form factor, realize better optical see-through experiences and full-color display thanks to their unique selectivity and multiplexing ability, and achieve mass production owing to relatively simple fabrication methods and low costs. Large HOEs are also easy to fabricate because of their unique fabrication methods. Given these advantages, designers may build imaging and display optical systems that combine FOEs and HOEs, significantly increasing the degrees of design freedom and the ability to correct aberrations. In this way, advanced system specifications, excellent system performance, compact and lightweight system forms, and unconventional off-axis nonsymmetric structures can be achieved, thereby promoting the further development of optical systems.
It is important to summarize the existing design methods of imaging and display systems combining FOEs and HOEs, analyze the problems restricting their further development, and predict the development trends. It is equally essential to summarize the existing designs and applications of these systems to better guide and promote their development.

Progress
We describe the basic principles, ray-tracing models, advantages, and applications of FOEs and HOEs respectively, summarize the system design methods, review the designs and applications of these systems, and analyze current restrictions and future development trends. The design of these systems can be divided into three types: 1) FOEs and HOEs are used simultaneously to correct the aberrations of the optical system; 2) a freeform surface is adopted as the substrate shape of the HOE; 3) during HOE fabrication, FOEs are introduced to modulate the recording waves of the HOE. In practice, a design may combine these three approaches. The first approach directly builds ray-tracing models of the freeform optics and HOEs in the optical system design and then applies an optimization strategy to meet the expected requirements. The second approach coats the holographic recording medium on a freeform substrate to yield HOEs with freeform substrates. The third approach establishes the numerical relationship between the freeform optics and the recording waves of the HOEs so as to fabricate HOEs with unconventional profiles of the holographic phase function or grating vector. The methods for defining HOEs in ray tracing are described in detail, including the phase functions (direction cosines) of the recording waves, the holographic phase function, and the holographic grating vector, which underpin the basic combined design schemes. We also review HOE fabrication methods, including whole-area exposure and sub-area exposure (holographic printing), as references for fabricating combined designs.
The methods for calculating the starting points of HOE-based optical systems are summarized in detail, including point-by-point construction and iteration methods, confocal methods, and simultaneous multiple surface (SMS) methods, which guide the design of optical systems combining FOEs and HOEs. The designs and applications of these systems are summarized according to the classification of the HOEs, covering augmented reality (AR) near-eye display systems, head-up display (HUD) systems, and HOE-lens imaging systems. In addition, combined designs of freeform optics with other types of phase elements are presented, such as liquid crystal polarization holograms (LCPHs) based on freeform exposure and metasurfaces with freeform substrates, which offer guidance for the combined design of FOEs and HOEs.

Conclusions and Prospects
Studies on system design combining FOEs and HOEs have made significant progress in basic principles, design frameworks, and fabrication methods, and these advances have been employed to develop imaging and display systems with high performance, novel structures, and lightweight form factors. Some problems and challenges remain: how to fabricate HOEs with freeform substrates through innovative coating technologies for the holographic recording medium, how to correct chromatic aberrations in imaging and display systems using HOEs, how to reduce the nonuniformity of diffraction efficiency and the stray light of systems combining FOEs and HOEs, and how to conduct tolerance analysis of such systems. In summary, research on the design of imaging and display systems combining FOEs and HOEs will promote the development of next-generation high-performance, compact optical systems.
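The holographic phase function and grating vector mentioned above can be illustrated for the simplest case of an HOE recorded by two point sources: the recorded phase at a hologram point is the wavenumber times the optical-path difference between the two recording waves, and the local grating vector is the in-plane gradient of that phase. This is a hedged, generic sketch; the geometry and recording wavelength below are illustrative assumptions, not values from the review.

```python
# Hedged sketch: holographic phase function of an HOE recorded by two
# point sources, and the local grating vector as its in-plane gradient
# (computed here by central differences). All numbers are assumed.
import numpy as np

lam = 532e-9                              # assumed recording wavelength, m
k = 2.0 * np.pi / lam                     # wavenumber
src_obj = np.array([0.0, 0.0, 0.05])      # assumed object point source, m
src_ref = np.array([0.02, 0.0, 0.10])     # assumed reference point source, m

def phase(x, y):
    """Recorded phase at (x, y, 0) on the hologram plane:
    k * (path from object source - path from reference source)."""
    p = np.array([x, y, 0.0])
    return k * (np.linalg.norm(p - src_obj) - np.linalg.norm(p - src_ref))

def grating_vector(x, y, h=1e-6):
    """Local grating vector ~ in-plane gradient of the phase."""
    gx = (phase(x + h, y) - phase(x - h, y)) / (2.0 * h)
    gy = (phase(x, y + h) - phase(x, y - h)) / (2.0 * h)
    return np.array([gx, gy])

print(grating_vector(0.001, 0.0))
```

Modulating the recording waves with freeform optics (the third design approach above) amounts to replacing these spherical-wave phase terms with freeform wavefront phases, which reshapes the grating-vector profile across the HOE.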

    May. 10, 2024
  • Vol. 44 Issue 9 0900001 (2024)
  • Chi Cheng, Huijie Zhao, Qi Guo, and Ran Li

Objective
Imaging spectrometers based on acousto-optic tunable filters (AOTFs) are widely recognized for their rapid tuning, reliability, repeatability, and ease of changing spectral channels. These instruments have been extensively studied for space remote sensing and reconnaissance. A spectrometer should function accurately over a broad temperature range to deliver precise spectral information in various operating environments. However, spectral data accuracy is compromised by ambient temperature fluctuations, which affect both the AOTF's spectral tuning and the spectrometer's radiometric response. The shift of the tuning relationship results predominantly from temperature-induced changes in the refractive index of the acousto-optic crystal and in the acoustic wave velocity, which alter the acousto-optic interaction within the crystal. Similarly, the spectrometer's radiometric response drifts due to changes in the AOTF's diffraction efficiency and temperature-dependent changes in the performance of the electronic and optical components. Although previous studies have taken the temperature drift of the radiometric response into account during radiometric calibration, the spectral wavelength stability of the output images must be ensured first; otherwise, radiometric calibration cannot be achieved. Therefore, temperature corrections must be implemented during spectral calibration to prevent wavelength deviations in the output images under temperature shifts, which would otherwise result in erroneous radiometric calibration.

Methods
We propose a spectral and radiometric calibration method that corrects for temperature effects. Firstly, an AOTF tuning model incorporating a temperature variable is built. Within this model, the relationship between the driving frequency and the optical wavelength, acoustic wave velocity, refractive index, angle of incidence, and acoustic cut angle is derived.
The effect of the acoustic wave velocity on the driving frequency is considered first in isolation: a temperature increase raises the acoustic wave velocity, leading to a higher driving frequency (Fig. 2). The effect of the refractive index is then considered separately: a temperature rise increases the refractive index, which also results in a higher driving frequency. Both crystal parameters are then considered jointly, and the predicted driving frequency is compared with the measured frequency. At different temperatures, the driving-frequency response of the AOTF is measured at different wavelengths. The central driving frequencies at the various temperatures and wavelengths are extracted, and polynomial fitting is employed to deduce the tuning relationship between the central driving frequency, temperature, and optical wavelength, which allows temperature-induced tuning drifts to be corrected during spectral calibration. During radiometric calibration, the spectrometer is loaded with the adjusted driving frequencies so that the system response tracks the required wavelengths at all temperatures. The system responses at different temperatures and wavelengths are collected to obtain spectral radiometric calibration coefficients that include the temperature variable. By interpolation, the spectral radiometric calibration coefficients at any temperature are obtained, realizing temperature-corrected radiometric calibration (Fig. 3).

Results and Discussions
Multiple wavelengths within the range of 3.7 to 4.5 μm are selected to measure the frequency response of the spectral imaging system at temperatures between -30 and 50 ℃ [Figs. 4(a) and 4(b)]. As the temperature increases, the central driving frequency shifts towards higher frequencies.
For the spectral channel with a central wavelength of 4.0 μm, the central driving frequency is 20.05 MHz at a working temperature of -30 ℃ and 20.14 MHz at 50 ℃. Thus, for a temperature difference of roughly 80 ℃ in working conditions, the driving frequency needs an adjustment of 0.09 MHz to keep the output wavelength stable. If a fixed driving frequency is applied at different temperatures, the central wavelength output by each spectral channel drifts (Table 3), by 0.0015-0.0025 μm per 10 ℃. After spectral calibration, the driving-frequency accuracy at each wavelength is significantly improved [Fig. 4(d)], and the average driving-frequency deviation at different temperatures is reduced (Table 4). The response of the spectral imaging system drifts with temperature, so spectral data obtained at different temperatures vary with temperature. When the temperature rises from -20 to 30 ℃, the system response decreases, and the calculated spectral radiance decreases accordingly [Fig. 5(a)]. After temperature-corrected radiometric calibration, the spectral radiance accuracy improves in the lower temperature ranges (Table 5).

Conclusions
To enhance the temperature stability of spectrometer data, we propose a method for correcting the influence of temperature on spectral and radiometric calibration. Firstly, a tuning model of the AOTF incorporating temperature variables is built. We analyze the mechanism by which temperature variations affect the characteristics of the AOTF by altering the physical parameters of the crystal material, with the acoustic wave velocity having the most significant effect. This model corrects the temperature-induced spectral drift in spectral calibration, achieving wavelength tracking during variable-temperature radiometric calibration and ensuring wavelength stability in subsequent radiometric calibration.
Thereafter, the spectral radiometric calibration coefficients that include temperature variables are determined to complete the radiometric calibration. Relying on a laboratory setup, we construct a mid-wave infrared (3.7-4.5 μm) calibration verification system for AOTF spectral imaging temperature correction to validate the calibration method over the temperature range from -30 to 50 ℃. The results indicate that the average driving frequency deviation at a low temperature of -30 ℃ is reduced from 41.1 to 0.29 kHz, effectively suppressing the spectral radiance deviation from theoretical values.
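The interpolation of temperature-dependent radiometric calibration coefficients described above can be sketched as follows. The coefficient table and calibration temperatures are invented illustrations, not measured data; only the idea of per-channel linear interpolation in temperature is taken from the text.

```python
import numpy as np

# Chamber temperatures (C) at which radiometric calibration was performed.
cal_temps = np.array([-30.0, -10.0, 10.0, 30.0, 50.0])
# Per-channel gain coefficients measured at those temperatures.
# Rows: temperature; columns: spectral channels (e.g. 3.7, 4.0, 4.5 um).
cal_gains = np.array([
    [1.02, 1.05, 1.08],
    [1.00, 1.03, 1.06],
    [0.98, 1.01, 1.04],
    [0.96, 0.99, 1.02],
    [0.94, 0.97, 1.00],
])

def gains_at(temp_c):
    """Interpolate each channel's gain coefficient to any working temperature."""
    return np.array([np.interp(temp_c, cal_temps, cal_gains[:, k])
                     for k in range(cal_gains.shape[1])])

# Convert raw counts to radiance with coefficients matched to 20 C operation.
radiance = gains_at(20.0) * np.array([100.0, 120.0, 140.0])
```

At a working temperature between two calibration points, each channel's coefficient is obtained by linear interpolation, so the radiance computation automatically follows the temperature-dependent system response.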

    May. 10, 2024
  • Vol. 44 Issue 9 0930001 (2024)
  • Yongheng Yin, Long Ma, and Peng Li

Objective
Color reproduction plays a very important role in the textile, printing, telemedicine, and other industries, but owing to the manufacturing process or color rendering mechanism of digital image acquisition equipment, color image transmission between digital devices often suffers color distortion. Once such distortion appears, these industries can suffer losses or even irreversible damage. The most commonly employed acquisition device is the digital camera, and camera characterization, which converts the color image captured by the camera into the image seen by the human eye, is therefore an important task. Although existing nonlinear camera characterization methods currently offer the best characterization performance, they suffer from hue distortion. To retain the important hue-plane preserving property while further improving characterization performance, we propose a hue-subregion weighted constrained hue-plane preserving camera characterization (HPPCC-NWCM) method.

Methods
The proposed method improves weighted constrained hue-plane preserving camera characterization by optimizing the hue subregions. First, the camera response values (RGB) and the colorimetric values (XYZ) of the training samples are preprocessed synchronously, with hue angles calculated and hue subregions preliminarily divided. Then, within each hue subregion, the minimum hue angle differences between each training sample and the samples in the subregion are used as a weighted power function, and a pre-calculation camera characterization matrix (pre-calculation matrix) is computed for each sample. The weighted constrained normalized camera characterization matrix of the hue subregion is then obtained by weighted averaging of the pre-calculation matrices with the weighted power function.
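The hue-angle computation and weighted-averaging step just described can be sketched as follows. This is a hedged illustration, not the paper's implementation: the hue-angle convention (polar angle about the achromatic r=g=b axis), the inverse-power weighting form, and the exponent `p` are assumptions, and the per-sample pre-calculation matrices are taken as given.

```python
import numpy as np

def hue_angle(rgb):
    """Hue angle of an RGB triplet about the achromatic (r=g=b) axis."""
    r, g, b = rgb
    # Project onto the plane orthogonal to (1,1,1) and take the polar angle.
    x = r - (r + g + b) / 3.0
    y = (g - b) / np.sqrt(3.0)
    return np.arctan2(y, x)

def weighted_matrix(sample_hue, train_hues, pre_matrices, p=2.0):
    """Weighted average of 3x3 pre-calculation matrices in a hue subregion.

    Weights fall off with circular hue-angle difference via a power
    function; the exponent p is an assumed choice."""
    # Wrapped (circular) hue-angle differences in (-pi, pi].
    d = np.abs(np.angle(np.exp(1j * (np.asarray(train_hues) - sample_hue))))
    w = 1.0 / (d + 1e-6) ** p          # closer hues dominate
    w /= w.sum()
    # Contract the weight vector against the stack of matrices.
    return np.tensordot(w, np.asarray(pre_matrices), axes=1)

rgb = np.array([0.5, 0.4, 0.3])
M = weighted_matrix(hue_angle(rgb), [0.05, 0.30],
                    [np.eye(3), 2.0 * np.eye(3)])
xyz = M @ rgb  # characterized colorimetric estimate for this sample
```

For a test sample, the subregion matrix found this way is applied linearly, `XYZ = M @ RGB`, which is what preserves hue planes through the transformation.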
Combined with the characterization results of samples within each hue subregion and of all samples, the number and positions of the hue subregions are optimized, yielding the configuration with the best performance. To verify the performance improvement of this method, we conduct simulation experiments. First, a hue-subregion number selection experiment is carried out, combining three cameras and three object reflectance datasets under the D65 illuminant. Then, two cameras from the previous experiment are compared with existing methods in further experiments, and the exposure independence of each method is verified by changing the exposure level. Finally, the SFU dataset is used to compare against existing methods with 42 cameras under three illuminants.

Results and Discussions
Across many simulation and real-camera experiments, in the hue-subregion number selection experiment, the camera characterization performance of this method generally improves as the hue-subregion number increases (Fig. 7), stabilizes when the number reaches 6, and is best when the number is 9. The performance with 2 subregions is worse than with 1; the likely reason is that a small subregion number gives the characterization matrix of each hue subregion poor universality and low specificity, which degrades characterization performance. In the comparison simulation experiment, the performance of this method is about 10% to 20% higher than those of existing hue-plane preserving camera characterization methods, and it is better than or close to the nonlinear method (Table 1). In the variable-exposure experiment, the performance of the linear method and the root-polynomial method remains close to that in the fixed-exposure experiment, confirming their exposure independence.
The polynomial method, by contrast, performs noticeably worse under variable exposure, showing that it lacks exposure independence (Tables 1 and 2). In the supplementary simulation experiments with additional illuminants and cameras, the trend of the results is basically the same as in the previous experiment, and this method performs even better: besides outperforming the existing camera characterization methods, it is better than or equal to the nonlinear methods in many settings (Table 3).

Conclusions
We improve the weighted constrained hue-plane preserving camera characterization method by optimizing the hue subregions: the number and positions of the subregions are optimized to achieve a more accurate characterization transformation for each hue subregion. Through theoretical derivation and experimental verification of the characterization transformation, the method is shown to feature exposure independence, excellent hue-plane preservation, and a combination of the stability of low-order methods with the accuracy of high-order methods. In simulation experiments, it outperforms existing hue-plane preserving methods and is better than or close to other nonlinear methods. In multi-camera supplementary experiments, the improvement in the 95th-percentile error shows that the method has strong robustness and practical significance.
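The exposure-independence property discussed above can be illustrated numerically: scaling camera RGB by an exposure factor k scales a linear characterization's output by exactly k, while a plain second-order polynomial expansion does not. The matrix and response values below are illustrative assumptions.

```python
import numpy as np

# Illustrative 3x3 linear characterization matrix (RGB -> XYZ).
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])

rgb = np.array([0.5, 0.4, 0.3])
k = 2.0  # exposure change

# Linear method: output scales exactly with exposure.
print(np.allclose(M @ (k * rgb), k * (M @ rgb)))  # True

# Plain polynomial method: squared terms scale by k**2, not k,
# so the characterization result depends on exposure level.
def poly_terms(c):
    r, g, b = c
    return np.array([r, g, b, r * r, g * g, b * b])

print(np.allclose(poly_terms(k * rgb), k * poly_terms(rgb)))  # False
```

Root-polynomial terms such as (rg)^(1/2) scale linearly with k, which is why the root-polynomial method retains exposure independence while the plain polynomial method loses it.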

    May. 25, 2024
  • Vol. 44 Issue 9 0933001 (2024)