1 Sun Yat-Sen University (Zhuhai Campus), Guangdong Provincial Key Laboratory of Quantum Metrology and Sensing, School of Physics and Astronomy, Zhuhai, China
2 Sun Yat-Sen University, State Key Laboratory of Optoelectronic Materials and Technologies, School of Physics, Guangzhou, China
Recording and identifying faint objects through atmospheric scattering media with an optical system is fundamentally interesting and technologically important. We introduce a comprehensive model that incorporates contributions from target characteristics, atmospheric effects, imaging systems, digital processing, and visual perception to assess the ultimate perceptible limit of geometrical imaging, specifically the angular resolution at the boundary of the visible distance. The model allows us to reevaluate the effectiveness of conventional image recording, processing, and perception and to analyze the limiting factors that constrain image recognition capabilities in atmospheric media. The simulations were compared with experimental results measured in a fog chamber and in outdoor settings. The results reveal good general agreement between analysis and experiment, pointing the way toward harnessing the physical limit of optical imaging in scattering media. An immediate application of the study is the extension of the imaging range by a factor of 1.2 through noise reduction via multiframe averaging, thereby greatly enhancing the capability of optical imaging in the atmosphere.
Imaging acquisition and recognition abilities are traditionally constrained by either the diffraction limit or the resolving capability of an optical imaging system. In low-visibility environments, optical images suffer severe contrast degradation due to atmospheric effects, resulting in reduced viewing distance and angular resolution (AR). Hence, image perceptibility can be partially or completely impaired. Although existing imaging models based on the contrast threshold of human vision1–6 have succeeded in predicting the probability of human recognition of targets captured by traditional imaging systems through the atmosphere, the evolution of imaging technologies has substantially enhanced the capability to perceive images.
The continuous development of imaging technologies in the atmosphere and in scattering media necessitates a re-examination of traditional optical imaging models. Modern experimental settings, incorporating advanced optical components and processing techniques, present new opportunities and challenges for improving image perceptibility in atmospheric scattering media. For instance, Bian et al.7 applied liquid crystal devices to achieve a higher signal-to-interference ratio, allowing deeper penetration for imaging through fog. Likewise, Wang et al.8 and Demos et al.9 applied optical filtering and polarizers to improve imaging performance. The development of high-dynamic-range sensors has greatly increased the sensitivity with which optical signals can be recorded. Deep-learning-assisted signal processing10–13 demonstrates unprecedented effects, and advanced display systems14 have shown the potential to further improve the human perception of foggy images. All these advancements need to be taken into account to accurately predict the enhanced imaging performance under low-visibility conditions. On the other hand, one of the main challenges in imaging modeling is the complex interaction between light and atmospheric particles. Traditional models often simplify these interactions, neglecting factors such as dynamic changes in atmospheric conditions. In addition, the validation of these models in both controlled and real-world settings requires unambiguous experimental verification.
In this work, we introduce a reformulated imaging model that comprehensively includes all processes in imaging, including optical transfer, recording, signal processing, and perception. The model is based on the principles of the meteorological optical range (MOR). More specifically, we have incorporated into our imaging model a perceiving parameter, introduced in human vision models,15 that describes image perceptibility. The validity of this parameter for discernible images has been experimentally verified. By introducing it into the modeling of imaging systems, we enable a dynamic evaluation of the imaging limit. Our research aims to bridge the gap between theoretical modeling and practical applications by providing a detailed analysis of the modulation transfer function (MTF) in the presence of updated detection configurations. This work not only provides physical insight for dehazing algorithms16,17 but also offers guidance for refining optical imaging systems. Our analysis was examined through experiments in a fog chamber as well as in outdoor settings, and good agreement between theoretical analysis and experimental results was obtained. Hence, the work described allows for the quantitative determination of the physical boundary of optical imaging in atmospheric scattering media, filling a critical gap in current modeling approaches and setting the stage for future innovation in atmospheric imaging.
2 Principles and Methods
2.1 Imaging Model
The MTF is widely used to describe the characteristics of an optical system,18 and it is applied in this work to describe the image transfer behavior in atmospheric scattering media. Inspired by the theory of MOR, in which image perceptibility is determined by the minimum modulation difference resolvable by vision, we advocate a refined imaging model that relies solely on the final modulation delivered by the imaging system. Hence, the following formulation15 is applied to describe the imaging system: it relates the modulation (or contrast) of the target, the modulation of the imaging system's output, and the modulation of the noise through the system's MTF, a perceiving factor (a multiplier of the theoretical minimum discernible modulation), and the minimum of the sensor's dynamic range and bit depth. Equation (1) presents two critical conditions: the signal-to-noise ratio (SNR) condition, delineating the system's capability to distinguish modulation variations across spatial frequencies amid noise; and the signal-to-interference ratio (SIR) condition, setting an absolute threshold for modulation detection limited by the system's sensitivity to optical intensity.
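As an illustration only, and not necessarily the exact form of Eq. (1), the two conditions can be sketched with hypothetical symbols $M_{\mathrm{target}}$, $M_{\mathrm{out}}$, and $M_{\mathrm{noise}}$ for the three modulations, $\mathrm{MTF}_{\mathrm{sys}}(\nu)$ for the system MTF at spatial frequency $\nu$, $k$ for the perceiving factor, and $D$ for the effective bit depth:
$$ M_{\mathrm{out}}(\nu) = M_{\mathrm{target}}\,\mathrm{MTF}_{\mathrm{sys}}(\nu) \;\ge\; \max\!\left(k\,M_{\mathrm{noise}},\; \frac{1}{2^{D}}\right), $$
where the first argument of the maximum corresponds to the SNR condition and the second to the SIR condition.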
Specifically, for the SNR condition, the perceiving factor serves as a comprehensive indicator of the effectiveness of image perceptibility, spanning from naked-eye observation to algorithm-assisted analysis, with its magnitude reflecting the efficacy of postprocessing techniques. When the imaging modulation falls below the product of the perceiving factor and the noise modulation, which represents the system perceptibility, the target becomes obscured by noise, and the noise-dominated imaging limit is reached. Likewise, when the signal modulation falls short of the sensitivity threshold set by the sensor's dynamic range and bit depth, the signal is lost during optoelectronic recording and conversion.
The modulation is expressed in the standard form19 in terms of the maximum and minimum light intensities or, equivalently for the target, the maximum and minimum reflectances.
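For reference, a minimal sketch of this standard (Michelson-type) definition, written with hypothetical symbols $I_{\max}$ and $I_{\min}$ for the intensities and $\rho_{\max}$ and $\rho_{\min}$ for the reflectances:
$$ M = \frac{I_{\max}-I_{\min}}{I_{\max}+I_{\min}}, \qquad M_{\mathrm{target}} = \frac{\rho_{\max}-\rho_{\min}}{\rho_{\max}+\rho_{\min}}. $$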
The light transmission and imaging process involves the ambient light, the reflected light, the interference light, and the scattered light. The latter three transmit through the atmosphere and the lenses and are then converted into electrical signals by the imaging sensor. Thus, the MTF of the imaging system can be expressed as the product of the MTFs of the atmosphere, the lens, and the sensor, and the atmospheric MTF is further divided into an aerosol term and a turbulence term.20–23 Following Sec. 5, the expressions of all the MTFs can be written in terms of the spatial frequency, the working wavelength, the focal length, the effective diameter of the lens, the atmospheric coherence width, the optical thickness, the Airy disk diameter, and the line spread width of the sensor.
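A minimal sketch of this cascade, using hypothetical subscripts for the individual terms:
$$ \mathrm{MTF}_{\mathrm{sys}}(\nu) = \mathrm{MTF}_{\mathrm{atm}}(\nu)\,\mathrm{MTF}_{\mathrm{lens}}(\nu)\,\mathrm{MTF}_{\mathrm{sensor}}(\nu), \qquad \mathrm{MTF}_{\mathrm{atm}}(\nu) = \mathrm{MTF}_{\mathrm{aerosol}}(\nu)\,\mathrm{MTF}_{\mathrm{turb}}(\nu). $$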
The modulation of the noise15 can be calculated from the Fourier transform of the system noise by taking the mean value of its spectrum; the noise originates both from the atmosphere and from the imaging system itself.
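The following is a minimal sketch of such an estimate from a uniform image patch; the function name, the zero-mean noise extraction, and the normalization by the mean background are illustrative assumptions rather than the exact procedure of Ref. 15.

```python
import numpy as np

def estimate_noise_modulation(flat_patch: np.ndarray) -> float:
    """Estimate a noise-modulation figure from a patch that should be uniform."""
    patch = flat_patch.astype(np.float64)
    background = patch.mean()
    noise = patch - background              # zero-mean noise estimate
    spectrum = np.abs(np.fft.fft2(noise))   # magnitude of the Fourier transform
    return float(spectrum.mean() / background)  # crude normalization (assumption)
```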
By substituting the above equations into Eq. (1), we obtain the expression for the AR, defined by the minimum distinguishable distance at the target, as a function of the optical thickness under the SNR condition. To satisfy the SIR condition, there exists a maximum value of the optical thickness.
Equation (8) indicates that, when attenuation is weak, the limitation of imaging is entirely due to the Rayleigh criterion and the imaging resolution. The above two equations quantify two boundaries for imaging, representing the imaging limit and revealing the complex nature of imaging through atmospheric scattering media. The discussion indicates that the imaging limit can be harnessed via various strategies, such as refining imaging methods, employing lenses with higher numerical apertures, enhancing sensor dynamic range and bit depth, reducing system noise, and implementing advanced algorithms.
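For context, the diffraction-limited bound referred to here is the familiar Rayleigh criterion. As a worked example with an assumed wavelength of 550 nm and an assumed aperture diameter of 50 mm (illustrative values, not those of the experimental lenses):
$$ \theta_{\min} \approx 1.22\,\frac{\lambda}{D_{\mathrm{lens}}} = 1.22\times\frac{550\times10^{-9}\,\mathrm{m}}{0.05\,\mathrm{m}} \approx 1.3\times10^{-5}\,\mathrm{rad} \approx 2.8''. $$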
The choice of the perceiving-factor value is pivotal in framing our approach to quantifying imaging limitations through atmospheric scattering media. Within the sensitivity of the human eye (10-bit),24,25 there exists a positive correlation between the probability of visual perception and the value of the perceiving factor for unprocessed images.26 Above a sufficiently large value, the human eye can reliably detect objects. Following the guidelines of the International Union of Pure and Applied Chemistry (IUPAC), we set the corresponding value as the detection limit.27,28 Drawing from both the theoretical groundwork laid out above and the empirical benchmarks set by existing research, experimental validation of this threshold becomes imperative, as it offers a critical test of its applicability in real-world settings.
The objective of the indoor experiment was to rigorously evaluate the model's accuracy in predicting imaging limits within a controlled low-visibility environment and to replicate typical imaging scenarios examining both the SIR condition and the SNR condition. Thus, we chose the following two cameras: the Daheng MER-232-48GM-P NIR (8-bit) and the PCO EDGE 4.2 (16-bit). The MER-232-48GM-P NIR is optimized for the near-infrared and is SIR-limited, whereas the EDGE 4.2 is responsive to visible light and is SNR-limited. The lenses were the Kowa M7528-MP and the Zeiss Milvus 2/100M, with focal lengths of 75 and 100 mm, respectively, chosen to match the cameras' physical pixel dimensions.
The fog chamber experiment was carried out at the Visibility Calibration Laboratory of the China Meteorological Administration in Shanghai, China. The temperature was 298 K, and the air pressure was 101 kPa. The fog in the experiment consisted of water droplets with diameters of about 1 μm and above. The target was the USAF 1951 resolution test chart, featuring six groups (G:0 to G:5) with six elements (E:1 to E:6) in each group; the reflectance of the test chart is around 2.5% for the black portions and around 90% for the white portions. The target and cameras were aligned horizontally, with a working distance of 16 m. A visibility meter (HUAYUN SOUNDING DNQ1) with an error of 10% was placed at the same height as the target to measure the visibility,29,30 as Fig. 1 demonstrates.
Figure 1. Schematic of the optical imaging process through atmospheric scattering media under external illumination.
In pursuit of further advancements in the system's ability to penetrate fog and to validate the SNR condition, we conducted an in-depth test and analysis in outdoor settings. The outdoor experiment was carried out in Chongqing, China, during October, with the target facing west. The temperature and the air pressure of the location were 300 K and 100 kPa, respectively. In foggy weather, the fog persisted stably for over 3 h, with the visibility increasing from about 1 to 3 km. We used the PCO EDGE 4.2 paired with a Canon EF 400 mm F/5.6L lens. The resolution test chart was positioned 5 km away from the camera. Visibility measurement was provided by a drone-mounted visibility sensor, offering real-time data on the atmospheric conditions. We recorded the target under the same experimental settings with and without fog. The modulation depth of the target without fog was used in our calculation, effectively accounting for the turbulence effect on the MTF. This calibration allows a simplification of the theoretical analysis by neglecting all MTF terms other than the aerosol term. In the data acquired without fog, the theoretical resolution of our imaging system was predicted by Eq. (9) using the modulation depth of the target captured at close distance; however, due to the effect of atmospheric turbulence, the resolution was adjusted by instead using the modulation depth measured at the full distance without fog.
According to Eq. (8), noise reduction can effectively improve the system's imaging capability. We adopted a simple approach of multiframe (M-F) averaging to reduce Gaussian noise,31,32 thereby increasing the SNR. The noise variance decays in inverse proportion to the number of averaged frames. To estimate the noise variance,33 we identified and selected a homogeneous region within the image and filtered it with a moderate kernel to obtain a noise-free reference. The data were sampled at intervals of 100 ms, and 200 images were averaged.
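Below is a minimal sketch of this procedure, assuming a stack of co-registered grayscale frames; the kernel size and the residual-based variance estimator are illustrative choices rather than the exact processing chain used here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multiframe_average(frames: np.ndarray) -> np.ndarray:
    """Average a stack of co-registered frames (N, H, W) to suppress Gaussian noise."""
    return frames.astype(np.float64).mean(axis=0)

def noise_variance(patch: np.ndarray, kernel: int = 5) -> float:
    """Estimate the noise variance in a homogeneous patch as the variance of the
    residual after smoothing with a moderate kernel (illustrative estimator)."""
    patch = patch.astype(np.float64)
    smoothed = uniform_filter(patch, size=kernel)   # low-pass, noise-free reference
    return float(np.var(patch - smoothed))          # residual is dominated by noise

# For independent Gaussian noise, the variance of the average drops roughly as 1/N:
# frames = np.stack([grab_frame() for _ in range(200)])   # grab_frame() is hypothetical
# avg = multiframe_average(frames)
# print(noise_variance(avg[100:200, 100:200]))
```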
2.2 Experimental Imaging Criteria
The recorded images were further processed with the following techniques. For the 8-bit images, the series comprised the original image, the result of the dark channel prior (DCP),34 and the result of contrast-limited adaptive histogram equalization (CLAHE).35 The DCP leverages the atmospheric degradation model to enhance images, whereas CLAHE enhances local modulation by dividing the image into small regions and applying histogram equalization separately to each region. Both are effective dehazing algorithms for enhancing images. For the 16-bit data, participants were invited to perform custom histogram adjustments to achieve optimal enhancement without prior knowledge of the target.
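As a minimal sketch of the CLAHE step using OpenCV, where the clip limit, the tile grid size, and the file name are illustrative assumptions rather than the settings used in the experiment:

```python
import cv2

def enhance_clahe(gray_8bit, clip_limit=2.0, tiles=(8, 8)):
    """Apply contrast-limited adaptive histogram equalization to an 8-bit grayscale image."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    return clahe.apply(gray_8bit)

# Example usage (hypothetical file name):
# img = cv2.imread("foggy_target.png", cv2.IMREAD_GRAYSCALE)
# enhanced = enhance_clahe(img)
```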
Examples of the data collected in the fog chamber experiment are shown in Fig. 2. The figure shows the target alongside the averaged and normalized values inside the red and blue regions in Figs. 2(a) and 2(b) for high and low visibilities, respectively. In Fig. 2(a), the imaging is limited by the system's resolution, extending beyond G:2, E:2. Conversely, in Fig. 2(b), the imaging is limited by visibility; three clear peaks are discernible before G:2, E:2, with no discernible feature beyond. The quantitative analysis is in good agreement with human visual observation as well as algorithm-aided observation, indicating that the obliteration of signal features renders discernment infeasible without prior knowledge of the target. These findings not only delineate the limits of image perceptibility but also set the stage for further study on harnessing the resolution limits.
Figure 2. Original data from the fog chamber experiment. Targets in (a) high and (b) low visibility conditions, along with the average normalized values for the red region in (c) high and (d) low visibility conditions, and for the blue region in (e) high and (f) low visibility conditions.
For the MER-232-48GM-P NIR imaging system, Fig. 3(a) presents the data analysis, where the red curve represents the SNR condition and the black dashed line signifies the SIR condition. The yellow, orange, and blue error points correspond to the minimum AR observed without algorithms, using DCP, and using CLAHE, respectively. With algorithm-aided vision, the optical thickness penetrable by the imaging system improves. However, for smaller objects, these algorithms offer a relatively minor enhancement. Here, the imaging limit is mainly determined by the SIR condition.
Figure 3. Relationship between AR and optical thickness, prediction versus data: (a) MER-232-48GM-P NIR and (b) PCO EDGE 4.2.
For the PCO EDGE 4.2 imaging system, Fig. 3(b) illustrates the performance analysis, where the red curve and the black dashed line represent the SNR and SIR conditions, respectively. The blue error points represent the observed data. Given that the system is SNR-limited, we focus on fitting the data with respect to the perceiving factor. To quantify the fitting's accuracy and reliability, we employed two statistical measures,36 the coefficient of determination (R-squared) and the root mean square error (RMSE), computed from the measured values, the predicted values, and the number of data points. The fitted perceiving factor aligns well with the IUPAC limit, with an R-squared of 0.9544 and an RMSE of 0.0001, affirming the imaging limit's reliance on the SNR condition.
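A minimal sketch of these two metrics, assuming arrays of measured and predicted values:

```python
import numpy as np

def r_squared(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Root mean square error over all data points."""
    return float(np.sqrt(np.mean((measured - predicted) ** 2)))
```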
These results underscore the versatility and accuracy of our proposed model in predicting the single-frame (S-F) imaging limit of the detecting system affected by atmospheric scattering. The experiment not only confirms the model’s applicability across different camera systems and conditions but also highlights the potential for enhancement algorithms to mitigate the visibility limitations in foggy environments.
3.2 Outdoor Experiment
We first examine the relationship between the noise modulation and the variance of the image noise. The SNR is measured to examine the relationship between image clarity and the number of averaged frames, and the variance is measured to confirm the elimination of Gaussian noise. The numerical results of the two quantities, calculated from the measured experimental data, are illustrated as blue dots in Figs. 4(a) and 4(b), showing the decay of the variance and the corresponding improvement of the SNR in accordance with the scaling law associated with Gaussian noise, as the fitted black lines indicate. When the number of averaged frames exceeds 100, the SNR and variance quickly converge to a constant level, indicating that the image contains both Gaussian noise and non-Gaussian noise, the latter of which cannot be eliminated by averaging.37
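A minimal sketch of such a fit, assuming the variance follows a Gaussian term decaying as 1/N plus a residual non-Gaussian floor; the data in this example are synthetic placeholders, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def variance_model(n, a, c):
    """Variance after averaging n frames: Gaussian part a/n plus a residual floor c."""
    return a / n + c

# Hypothetical (synthetic) data standing in for the measured (N, variance) pairs.
n_frames = np.arange(1, 201)
variances = 0.04 / n_frames + 1e-4 + np.random.normal(0, 1e-5, n_frames.size)

(a_fit, c_fit), _ = curve_fit(variance_model, n_frames, variances,
                              p0=(variances[0], variances[-1]))
print(f"Gaussian component a = {a_fit:.3e}, non-Gaussian floor c = {c_fit:.3e}")
```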
Figure 4. Relationship between the number of averaged frames and (a) image variance and (b) SNR.
We then examine the efficacy of averaging in actual imaging. Figure 5 shows the AR for S-F and M-F averaged images against optical thickness, highlighting the impact on SNR. The gray line and the green dotted line are the S-F and M-F SNR curves, with R-squared values of 0.92761 and 0.94839, respectively. The black dashed line is the SIR condition. In the case of a camera limited by SNR, noise reduction through averaging effectively extends the imaging range to 1.2 times that of the S-F imaging system. Compared with the S-F system, whose fitted perceiving-factor value is 3, the M-F system exhibits a significantly reduced value of 0.1839, underscoring the technique's potential to extend the imaging system's operational range.
Figure 5. Relationship between AR and optical thickness in the field experiment.
As a result, the field experiment reinforces the comprehensive framework of our proposed model, highlighting both the potential and limitations of current image enhancement techniques in addressing atmospheric scattering effects.
4 Conclusion
In conclusion, we derived a comprehensive physical model, based on the principles of MOR, to describe the behavior of optical images after traversing a given optical thickness in the atmosphere. We show explicitly that the image can be retrieved provided that the image modulation survives above a minimum level, so that it can be detected by high-dynamic-range detectors or, alternatively, recovered via M-F averaging. Experimental validations of our predictions were conducted both in a fog chamber and in outdoor settings, revealing substantial agreement and demonstrating that the optical imaging limit can be harnessed to achieve its best performance via an optimized optical system, a suitable detecting system, and effective postsignal processing.
5 Appendix: The System MTF
In this section, we focus on the derivations of the system MTF.
The optical imaging process is shown in Fig. 1. The ambient light illuminating the distant target undergoes diffuse reflection, producing a reflectance distribution within the field of view. The imaging light consists of the reflected light as well as the interference light and the scattered light. When the target subtends a restricted AR, the imaging light enters the lenses at a fixed angle. Therefore, we consider the extinction coefficient to be constant, which allows us to formulate the radiative transfer equation19 in terms of the scattering coefficient. The coordinates are established in a reference system with the target at the origin, and the optical thickness is defined by the product of the extinction coefficient and the distance between the target and the lenses.
The light intensity distribution can then be calculated from this radiative equation.
Substituting Eq. (15) into the definition of modulation, we obtain the relationship between the modulation depth and the optical thickness.
For ideal target imaging with an infinitely distant background and a black object, the corresponding reflectance and background terms take their limiting values. This leads to a simplified expression for the modulation depth induced by atmospheric attenuation as a function of the optical thickness.
Thus, the aerosol MTF follows directly from this modulation depth.
The turbulence MTF for short-exposure imaging is expressed38,39 in terms of the spatial frequency, the working wavelength, the focal length, the effective diameter of the lens, the atmospheric coherence width, the zenith angle, the refractive-index structure parameter, and the atmospheric pressure and temperature on the Kelvin scale.
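For reference, a commonly quoted short-exposure turbulence MTF of this type (a sketch, not necessarily the exact expression used here) reads, with spatial frequency $\nu$, wavelength $\lambda$, focal length $f$, lens diameter $D_{\mathrm{lens}}$, and atmospheric coherence width $r_0$:
$$ \mathrm{MTF}_{\mathrm{turb}}^{\mathrm{SE}}(\nu) = \exp\left\{-3.44\left(\frac{\lambda f \nu}{r_0}\right)^{5/3}\left[1-\left(\frac{\lambda f \nu}{D_{\mathrm{lens}}}\right)^{1/3}\right]\right\}, $$
where $r_0$ itself depends on the refractive-index structure parameter, the zenith angle, and the path length.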
The point spread function (PSF) of the lenses is characterized by Bessel functions, and the diameter at which its first minimum occurs is known as the Airy disk diameter. The full width at half-maximum of the PSF is set by the Airy disk diameter, and the lenses' PSF can be approximated by a Gaussian function.40
Thus, the MTF of the lens can be expressed as the Fourier transform of this Gaussian PSF.
In theory, the MTF of the sensor can be modeled by a sinc function,41 but in practical engineering a Gaussian function also provides a good approximation.42,43 The PSF of the sensor can be obtained by applying low-pass filtering to the edge spread function derived from the sensor's captured edges, followed by a Fourier transform of the resulting line spread function (LSF), which is parameterized by its total width. Consequently, the MTF of the sensor can be written in the corresponding Gaussian (or sinc) form.
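The following is a minimal numerical sketch of the lens and sensor MTF approximations described above; the Gaussian spread width and the pixel pitch are illustrative assumptions, not calibrated values.

```python
import numpy as np

def gaussian_mtf(nu: np.ndarray, sigma: float) -> np.ndarray:
    """MTF of a Gaussian PSF of standard deviation sigma (its Fourier transform
    is again a Gaussian): exp(-2 * pi^2 * sigma^2 * nu^2)."""
    return np.exp(-2.0 * (np.pi * sigma * nu) ** 2)

def sensor_mtf_sinc(nu: np.ndarray, pixel_pitch: float) -> np.ndarray:
    """Idealized sensor MTF as the sinc of the pixel aperture (np.sinc(x) = sin(pi x)/(pi x))."""
    return np.abs(np.sinc(pixel_pitch * nu))

# Illustrative cascade up to the Nyquist frequency of an assumed 5.5 um pixel.
pixel = 5.5e-3                                 # pixel pitch in mm (assumption)
nu = np.linspace(0.0, 1.0 / (2 * pixel), 256)  # spatial frequency in cycles/mm
mtf_lens_sensor = gaussian_mtf(nu, sigma=2e-3) * sensor_mtf_sinc(nu, pixel)
```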
Yikun Liu received his BS degree and PhD in physics from Sun Yat-sen University in 2006 and 2013, respectively. He is an assistant professor at the School of Physics and Astronomy of Sun Yat-sen University. His current research interests include optical imaging and photonics.
Biographies of the other authors are not available.
[31] M. Tico. Multi-frame image denoising and stabilization, 1–4 (2008).
[32] A. Nazir, M. S. Younis, M. K. Shahzad. MFNR: multi-frame method for complete noise removal of all PDF types in multi-dimensional data using KDE (2020).
[33] S. D.-I. Schuster et al. Noise variance and signal-to-noise ratio estimation from spectral data, 1–6 (2019).