Laser & Optoelectronics Progress, Vol. 56, Issue 16, 161004 (2019)

Multimodal Image Fusion Based on Generative Adversarial Networks

Xiaoli Yang1, Suzhen Lin1,*, Xiaofei Lu2, Lifang Wang1, Dawei Li1, and Bin Wang1
Author Affiliations
  • 1 School of Big Data, North University of China, Taiyuan, Shanxi 030051, China
  • 2 Jiuquan Satellite Launch Center, Jiuquan, Gansu 735000, China

    This study proposes a new network based on generative adversarial networks to achieve end-to-end adaptive image fusion, thus avoiding the difficulty of designing multiscale geometric tools and fusion rules in multimodal image fusion. First, the multimodal source images are fed synchronously into the generative network, whose structure is a residual-based convolutional neural network proposed herein; this network generates the fused image through adaptive learning. Second, the fused image and the label image are sent to the discriminator network, and the generator is gradually optimized through the discriminator's feature representation and classification decisions. The final fused image is obtained at the dynamic equilibrium between the generator and the discriminator. Compared with existing representative fusion methods, the proposed algorithm produces cleaner fusion results without artifacts, thereby providing better visual quality.
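    The generator-discriminator pairing described above can be summarized in code. The following is a minimal PyTorch sketch under stated assumptions, not the authors' published implementation: the residual-block depth, channel widths, two-channel input layout, and training step are all illustrative choices; the paper's actual architecture and hyperparameters are given in the full text.

    ```python
    # Hypothetical sketch of the residual-CNN generator and the discriminator
    # described in the abstract. Layer counts, channel widths, and optimizer
    # settings are assumptions for illustration only.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Two 3x3 convolutions with an identity skip connection."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )

        def forward(self, x):
            return torch.relu(x + self.body(x))

    class Generator(nn.Module):
        """Residual CNN mapping the stacked multimodal inputs to one fused image."""
        def __init__(self, in_channels=2, features=64, num_blocks=4):
            super().__init__()
            self.head = nn.Conv2d(in_channels, features, 3, padding=1)
            self.blocks = nn.Sequential(*[ResidualBlock(features) for _ in range(num_blocks)])
            self.tail = nn.Conv2d(features, 1, 3, padding=1)

        def forward(self, sources):  # sources: (B, in_channels, H, W)
            return torch.tanh(self.tail(self.blocks(torch.relu(self.head(sources)))))

    class Discriminator(nn.Module):
        """Classifies an image as a generated fusion or a label (reference) image."""
        def __init__(self, features=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, features, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(features, features * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(features * 2, 1),  # real/fake logit
            )

        def forward(self, image):
            return self.net(image)

    # One adversarial update, sketched with BCE-with-logits losses.
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    sources = torch.randn(4, 2, 64, 64)  # two modalities stacked on the channel axis
    labels = torch.randn(4, 1, 64, 64)   # reference (label) fused images

    fused = G(sources)
    # Discriminator learns to separate label images from generated fusions.
    loss_d = bce(D(labels), torch.ones(4, 1)) + bce(D(fused.detach()), torch.zeros(4, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator is optimized to make its fusions indistinguishable from labels.
    loss_g = bce(D(fused), torch.ones(4, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    ```

    In this sketch the two source modalities are stacked along the channel axis so the generator receives both inputs simultaneously, mirroring the synchronous input described in the abstract; the discriminator sees single-channel images only, since its role is to tell fused outputs from label images.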

    Citation

    Xiaoli Yang, Suzhen Lin, Xiaofei Lu, Lifang Wang, Dawei Li, Bin Wang. Multimodal Image Fusion Based on Generative Adversarial Networks[J]. Laser & Optoelectronics Progress, 2019, 56(16): 161004

    Paper Information

    Category: Image Processing

    Received: Jan. 9, 2019

    Accepted: Mar. 22, 2019

    Published Online: Aug. 5, 2019

    The Author Email: Suzhen Lin (lsz@nuc.edu.cn)

    DOI: 10.3788/LOP56.161004
