Optics and Precision Engineering, Vol. 30, Issue 10, 1246 (2022)
Image dehazing method based on adaptive bi-channel priors
Images are an important source of information for modern warfare, but image quality degrades in foggy environments, which severely hinders photoelectric reconnaissance and identification. To improve the effective utilization of images captured in fog, an adaptive bi-channel prior image dehazing method was developed. First, based on the dark channel prior and bright channel prior theories, hazy images are converted from the RGB to the HSV color space, and thresholds on the saturation and luminance components are used to detect white or light pixels and black or dark pixels that do not satisfy the bright and dark channel priors, respectively. Then, superpixels are used as the local regions for computing the dark and bright channels, and the local transmittance and atmospheric light values are estimated. Finally, adaptive bi-channel priors are developed to rectify erroneous estimates of transmittance and atmospheric light at both white and black pixels. The transmittance map and atmospheric light map are refined with a guided filter and then substituted into the atmospheric scattering model to recover a clear, dehazed image. Experimental results show that the dehazed images restore true colors, the visual effect is natural and clear, and dehazing is achieved accurately and efficiently. On the FRIDA database, the mean square error between the dehazed images and the ground truth is approximately 15% lower with the proposed method than with the existing BiCP method.
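The pipeline described above builds on the classical dark channel prior and the atmospheric scattering model I = J·t + A·(1 − t). As a point of reference, the sketch below implements only the standard single-prior baseline (square patches rather than superpixels, no bright channel or HSV-based pixel rectification, and a simple clip in place of guided filtering); all function names and parameter values (`omega`, `t0`, `patch`) are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over the RGB channels, then a local minimum
    # filter over a square patch: the classical dark channel.
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Baseline dark-channel-prior dehazing sketch (not the paper's
    adaptive bi-channel method). img: float array in [0, 1], shape (H, W, 3)."""
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate from the dark channel of the A-normalized image,
    # clipped below by t0 to avoid amplifying noise in dense haze.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)
    # Invert the atmospheric scattering model I = J*t + A*(1 - t).
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

The paper's contribution replaces the fixed square patch with superpixel regions, adds a bright channel computed analogously with a local maximum, and corrects t and A at the white/black pixels detected in HSV space, where the plain priors above are known to fail.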
Yutong JIANG, Zhonglin YANG, Mengqi ZHU, Yi ZHANG, Lixia GUO. Image dehazing method based on adaptive bi-channel priors[J]. Optics and Precision Engineering, 2022, 30(10): 1246
Category: Information Sciences
Received: Jul. 8, 2021
Accepted: --
Published Online: Jun. 1, 2022
The Author Email: JIANG Yutong (jiangyutong201@163.com)