Acta Optica Sinica, Volume. 45, Issue 8, 0810002(2025)
Method for Measuring Visibility on Foggy Highways Based on Depth Estimation of Encoder and Decoder
Visibility refers to the maximum horizontal distance at which an observer with normal vision can identify and distinguish an object against the sky background under prevailing weather conditions without external assistance. It is a critical parameter reflecting atmospheric transparency and a key indicator for the transportation sector. Among the many factors that affect visibility, fog and haze have the greatest impact: the fine particles suspended in the air hinder light transmission and absorb light reflected from object surfaces, sharply reducing visibility. On highways, reduced visibility caused by fog and haze is a major cause of traffic accidents, posing severe risks to public safety and economic stability. Accurate and efficient acquisition of visibility data is therefore essential for effective traffic management, and developing a high-precision visibility detection method that provides reliable data support for transportation authorities has become a key research focus in ground-based meteorology. To this end, we propose a visibility detection method based on a depth estimation network and the atmospheric scattering model.
In this paper, we propose a novel visibility detection method that leverages an encoder-decoder depth estimation network. Working from highway surveillance video, the method estimates the atmospheric transmission coefficient and scene depth and then derives visibility from the atmospheric scattering model. First, the K-means clustering algorithm segments each video frame into foggy and visible road regions, allowing the area of maximum scene depth along the road to be identified. Next, the dark channel prior algorithm is refined with a regional entropy method for selecting the atmospheric light intensity, yielding a more accurate estimate of the atmospheric transmission coefficient. An encoder-decoder depth estimation network then extracts depth information from the images. Finally, visibility is calculated from the atmospheric scattering physical model.
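For concreteness, the sketch below illustrates the final step of such a pipeline under the standard Koschmieder formulation, in which the transmission t(x) = exp(-βd(x)) estimated via the dark channel prior is combined with the network's depth map to recover the extinction coefficient β and, from it, the visibility. This is a minimal illustration rather than the authors' implementation; the function names, the ω = 0.95 dark-channel weight, and the 0.05 contrast threshold (the meteorological-optical-range convention) are assumptions not taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel prior: per-pixel minimum over RGB followed by a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, A, omega=0.95, patch=15):
    """Transmission estimate t(x) = 1 - omega * dark_channel(I / A); A is the atmospheric light (RGB)."""
    normalized = img / np.maximum(A, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)

def visibility_from_transmission(t, depth, contrast_threshold=0.05):
    """Koschmieder relation: t = exp(-beta * d)  =>  beta = -ln(t) / d, and V = -ln(eps) / beta.
    The 0.05 contrast threshold (MOR convention) is an assumption, not taken from the paper."""
    eps = 1e-6
    beta = -np.log(np.clip(t, eps, 1.0)) / np.maximum(depth, eps)
    beta_scene = np.median(beta)  # robust aggregate over the selected road region
    return -np.log(contrast_threshold) / max(beta_scene, eps)
```

Here img would be an H×W×3 frame scaled to [0, 1] and depth the H×W map from the depth estimation network, with both restricted to the segmented road region before the median extinction coefficient is taken.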
The depth information obtained with the DenseNet-169 network is compared with that of five other depth estimation networks: BTS, LapDepth, MonoDepth, CADepth, and Lite-Mono (Table 1). DenseNet-169 outperforms these models in root mean square error (RMSE), absolute relative error (AR), and threshold accuracy, which shows that its depth estimates are precise enough to support visibility estimation. To further improve the network, four attention modules, namely coordinate attention (CA), the convolutional block attention module (CBAM), separable self-attention (SSA), and efficient channel attention (ECA), are integrated into DenseNet-169 and compared with the original network without attention (Table 2). The CBAM variant performs best, yielding the lowest RMSE and AR values and the highest accuracy under the δ < 1.25² threshold. We therefore adopt the Dense+CBAM encoder-decoder network for depth estimation. Experimental data are collected from surveillance video recorded on November 24, 2024, between 14:25 and 15:15. A sample video frame is shown in Fig. 11, while Table 4 and Fig. 12 present the visibility detection results and error variations across different timestamps. The visibility estimates of the proposed method are compared with those of a visibility monitoring device and four alternative networks. The results show that the proposed method closely matches the monitoring device, with only a minor deviation at 15:00. Overall, the method is accurate for both short-range and long-range visibility, achieving an accuracy rate of 89.83% and an average error of approximately 73 m, and it exhibits strong generalization capability, underscoring its reliability and precision in visibility measurement.
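The abstract does not spell out the evaluation protocol, but RMSE, AR, and the δ < 1.25ᵏ accuracies are the standard monocular depth estimation metrics; a sketch of their usual definitions, assuming metric ground-truth depth with zeros marking invalid pixels, is given below.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular-depth metrics: RMSE, absolute relative error (AR),
    and threshold accuracy delta < 1.25^k for k = 1, 2, 3."""
    mask = gt > 0                                  # evaluate only valid ground-truth pixels
    pred, gt = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    ratio = np.maximum(pred / gt, gt / pred)
    accuracy = {k: float(np.mean(ratio < 1.25 ** k)) for k in (1, 2, 3)}
    return rmse, abs_rel, accuracy
```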
We propose a novel visibility detection method based on an encoder-decoder depth estimation network and the atmospheric scattering model. The method takes the atmospheric scattering physical model as its theoretical foundation and calculates visibility from the atmospheric transmission coefficient and scene depth information derived from images. To improve accuracy, a regional entropy method is introduced to refine the atmospheric light intensity estimate by exploiting the stable gray-level variation in foggy regions. Image segmentation locates the boundary between the visible road surface and the foggy sky region, so that only pixel information within the target area is used and interference from irrelevant features is minimized. In the depth estimation network, the encoder-decoder structure of DenseNet-169 is optimized by incorporating the Dense Block-B module, which suppresses redundant input features and improves feature extraction efficiency. In addition, to mitigate interference from the image background, CBAM is embedded in the encoder's convolutional module, improving road surface feature extraction while reducing the influence of irrelevant features. Experimental results show that, compared with existing methods, the proposed approach achieves higher accuracy for both long-range and short-range visibility, with an accuracy rate of 89.83%, lower overall error, and better generalization. The method enables efficient and precise visibility measurement and provides reliable data support for traffic management.
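For reference, a compact CBAM block in the form described in the original CBAM paper is sketched below. The abstract only states that CBAM is embedded in the encoder's convolutional module, so the insertion point, the reduction ratio of 16, and the 7×7 spatial kernel are assumed defaults rather than details taken from this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel attention: avg- and max-pooled descriptors pass through a shared bottleneck MLP."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        return torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) +
                             self.mlp(F.adaptive_max_pool2d(x, 1)))

class SpatialAttention(nn.Module):
    """Spatial attention: channel-wise mean and max maps fused by a 7x7 convolution."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))

class CBAM(nn.Module):
    """CBAM: channel attention followed by spatial attention, applied multiplicatively."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)
```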
Peng Peng, Yucheng Dong, Jiachun Li, Yitao Yao. Method for Measuring Visibility on Foggy Highways Based on Depth Estimation of Encoder and Decoder[J]. Acta Optica Sinica, 2025, 45(8): 0810002
Category: Image Processing
Received: Jan. 10, 2025
Accepted: Feb. 27, 2025
Published Online: Apr. 27, 2025
The Author Email: Peng Peng (pengpeng@sust.edu.cn), Jiachun Li (zhs@chd.edu.cn)
CSTR:32393.14.AOS250467