Laser & Optoelectronics Progress, Vol. 57, Issue 16, 161008 (2020)

Image Fusion Based on Residual Learning and Visual Saliency Mapping

Luoyi Ding1, Jin Duan1,2,*, Yu Song1, Yong Zhu3, and Xiaoshan Yang1
Author Affiliations
  • 1College of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, Jilin 130022, China
  • 2Fundamental Science on Space-Ground Laser Communication Technology Laboratory, Changchun University of Science and Technology, Changchun, Jilin 130044, China
  • 3College of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China

    To enhance detail information and preserve contrast in the fusion of visible light and infrared images, a multi-scale decomposition image fusion method based on residual learning and visual saliency mapping is proposed. First, a Gaussian filter and a guided filter are used to perform multi-scale decomposition, splitting each image into a base layer and a detail layer; the detail layer is further divided into a small-scale texture layer and a middle-scale edge layer. Then, the proposed improved visual saliency mapping method is used to fuse the base layers, and the base layer of the low-light image is enhanced so that the fused image has good contrast and overall appearance. For the detail layers, a residual-network deep learning fusion model is proposed, in which a maximum-selection fusion rule is applied to the small-scale texture layer and to the middle-scale edge layer, respectively. On the TNO dataset, the proposed algorithm is compared with six state-of-the-art methods using four objective indicators: discrete cosine feature mutual information, wavelet feature mutual information, structural similarity, and artifact noise rate. The proposed algorithm improves on the first three indicators and further reduces the artifact noise rate. The algorithm not only preserves the salient features of the source images, but also gives the fused image richer texture detail and good contrast, effectively reducing artifacts and noise.
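    The multi-scale decomposition step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layering scheme (Gaussian smoothing to separate small-scale texture, then a guided filter to split the remainder into a base layer and a middle-scale edge layer) and all parameter values (`sigma`, `radius`, `eps`) are assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving guided filter (He et al.); box means via uniform_filter."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g          # local variance of the guide
    cov_gs = corr_gs - mean_g * mean_s         # local covariance guide/source
    a = cov_gs / (var_g + eps)                 # per-pixel linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def decompose(img, sigma=2.0, radius=8, eps=1e-3):
    """Split a grayscale image into base, middle-scale edge, and
    small-scale texture layers; the three layers sum back to the input."""
    smooth = gaussian_filter(img, sigma)       # Gaussian pass removes fine texture
    texture = img - smooth                     # small-scale texture layer
    base = guided_filter(smooth, smooth, radius, eps)  # edge-preserving base layer
    edge = smooth - base                       # middle-scale edge layer
    return base, edge, texture
```

    By construction the three layers reconstruct the input exactly (`base + edge + texture == img`), so any fusion rule applied per layer, such as the maximum-selection rule for the detail layers, operates on a lossless decomposition.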

    Citation

    Luoyi Ding, Jin Duan, Yu Song, Yong Zhu, Xiaoshan Yang. Image Fusion Based on Residual Learning and Visual Saliency Mapping[J]. Laser & Optoelectronics Progress, 2020, 57(16): 161008

    Paper Information

    Category: Image Processing

    Received: Dec. 3, 2019

    Accepted: Jan. 6, 2020

    Published Online: Aug. 5, 2020

    Corresponding author: Jin Duan (duanjin@vip.sina.com)

    DOI: 10.3788/LOP57.161008
