Chinese Journal of Liquid Crystals and Displays, Vol. 37, Issue 4, 508 (2022)
Technical research of composite residual network in low illumination image enhancement
Imaging devices operating in dark environments suffer from problems such as low contrast, loss of image detail, and color distortion, which severely interfere with applications such as video surveillance, intelligent transportation, and face recognition. To address this problem, this paper proposes a composite residual network that incorporates an attention mechanism to enhance low-illumination images. The algorithm first converts the image from RGB to HSV color space and feeds the brightness component (V) into the constructed neural network. The network extracts shallow features of the image through a multi-branch structure that incorporates the attention mechanism, extracts deep features through the composite residual network, and then reconstructs the image to obtain the enhanced V component. Finally, low-illumination image enhancement is achieved by fusing the enhanced V component with the original components. Experimental results show that, compared with current mainstream low-illumination image enhancement algorithms, the proposed algorithm significantly improves image brightness and contrast in subjective visual quality. Compared with traditional algorithms, PSNR and SSIM improve by about 20% and 15%, respectively; compared with deep learning algorithms, PSNR and SSIM improve by about 9% and 3%, respectively. The algorithm performs well on both artificially synthesized and real natural low-light images, largely meeting the requirements of natural color, contrast, and robustness for image enhancement.
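The pipeline described in the abstract (RGB-to-HSV conversion, network enhancement of the V channel only, then fusion back with the untouched H and S components) can be sketched as follows. This is a minimal illustration assuming OpenCV and PyTorch; `v_net` is a hypothetical stand-in for the paper's composite residual network with attention, whose exact architecture is not reproduced here.

```python
import cv2
import numpy as np
import torch

def enhance_low_light(rgb_image: np.ndarray, v_net: torch.nn.Module) -> np.ndarray:
    """Enhance a low-light RGB (uint8) image by enhancing only its HSV V channel."""
    # RGB -> HSV: hue (H) and saturation (S) are preserved; only the
    # brightness component (V) is passed through the enhancement network.
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    h, s, v = cv2.split(hsv)

    # Normalize V to [0, 1] and add batch and channel dimensions -> (1, 1, H, W).
    v_in = torch.from_numpy(v.astype(np.float32) / 255.0).unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        # `v_net` is a placeholder for the composite residual network
        # with attention; any H x W -> H x W brightness model fits here.
        v_out = v_net(v_in).squeeze().clamp(0.0, 1.0).numpy()

    # Component fusion: merge the enhanced V with the original H and S,
    # then convert back to RGB.
    v_enhanced = (v_out * 255.0).astype(np.uint8)
    hsv_enhanced = cv2.merge([h, s, v_enhanced])
    return cv2.cvtColor(hsv_enhanced, cv2.COLOR_HSV2RGB)
```

Because only the V channel is modified, the original chromatic information (H and S) is carried through unchanged, which is what allows the method to brighten the image while keeping colors natural.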
Xing-rui WANG, Yan PIAO, Yu-mo WANG. Technical research of composite residual network in low illumination image enhancement[J]. Chinese Journal of Liquid Crystals and Displays, 2022, 37(4): 508
Received: Aug. 30, 2021
Accepted: --
Published Online: Jun. 20, 2022
The Author Email: Yan PIAO (piaoyan@cust.edu.cn)