Laser & Optoelectronics Progress, Vol. 57, Issue 16, 161008 (2020)
Image Fusion Based on Residual Learning and Visual Saliency Mapping
To improve the detail information and preserve the contrast of fused visible-light and infrared images, a multi-scale decomposition image fusion method based on residual learning and visual saliency mapping is proposed. First, a Gaussian filter and a guided filter are used to perform multi-scale decomposition, splitting the image into a base layer and a detail layer; the detail layer is further divided into a small-scale texture layer and a middle-scale edge layer. Then, the proposed improved visual saliency mapping method is used to fuse the base layers, and the base layer of the low-light image is enhanced so that the fused image has good contrast and overall appearance. For the detail layers, a deep-learning fusion model based on a residual network is proposed, with maximum-selection fusion rules applied to the small-scale texture layer and the middle-scale edge layer, respectively. Experiments on the TNO dataset compare the proposed algorithm with six state-of-the-art methods on four objective indicators: discrete cosine feature mutual information, wavelet feature mutual information, structural similarity, and artifact noise rate. The proposed algorithm improves on the first three indicators and further reduces the artifact noise rate. The algorithm not only preserves the salient features of the image but also gives the fused image richer texture detail and good contrast, thereby effectively reducing artifacts and noise.
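The three-layer decomposition described in the abstract can be sketched as follows. This is a minimal illustration only: it uses Gaussian smoothing at two scales to separate the base, middle-scale edge, and small-scale texture layers, whereas the paper additionally employs a guided filter for edge-preserving smoothing. All function names and parameter values here are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma_small=2.0, sigma_large=8.0):
    """Split an image into base, middle-scale edge, and small-scale
    texture layers via two-scale Gaussian smoothing (a simplified
    stand-in for the Gaussian + guided filter used in the paper)."""
    blur_small = gaussian_filter(img, sigma_small)  # removes fine texture
    base = gaussian_filter(img, sigma_large)        # large-scale base layer
    texture = img - blur_small                      # small-scale texture layer
    edge = blur_small - base                        # middle-scale edge layer
    return base, edge, texture
```

By construction the layers sum back to the original image (`base + edge + texture == img`), which is the property that lets the fused layers be recombined into a single fused image.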
Luoyi Ding, Jin Duan, Yu Song, Yong Zhu, Xiaoshan Yang. Image Fusion Based on Residual Learning and Visual Saliency Mapping[J]. Laser & Optoelectronics Progress, 2020, 57(16): 161008
Category: Image Processing
Received: Dec. 3, 2019
Accepted: Jan. 6, 2020
Published Online: Aug. 5, 2020
The Author Email: Duan Jin (duanjin@vip.sina.com)