Optics and Precision Engineering, Volume 31, Issue 7, 1085 (2023)
Image fusion of dual-discriminator generative adversarial network and latent low-rank representation
To improve the visual quality of infrared and visible image fusion, the two source images were decomposed by latent low-rank representation (LatLRR) into low-rank components and noise-free sparse components. To obtain the fused sparse image, the Karhunen-Loève (KL) transform was used to determine the weights for a weighted fusion of the sparse components. A dual-discriminator generative adversarial network was redesigned, with the low-rank component features of the two source images extracted by a VGG16 network and supplied as the network input; the fused low-rank image was generated through the adversarial game between the generator and the discriminators. Finally, the fused sparse image and the fused low-rank image were superimposed to obtain the final fusion result. Experimental results on the TNO dataset showed that, compared with the five advanced methods listed, the proposed method improved the five metrics of entropy, standard deviation, mutual information, sum of difference correlation, and multi-scale structural similarity by 2.43%, 4.68%, 2.29%, 2.24%, and 1.74%, respectively. On the RoadScene dataset, only two metrics, the sum of difference correlation and multi-scale structural similarity, were optimal; the other three metrics were second only to the GTF method, although the visual quality of the fused images was clearly better than that of GTF. Based on both subjective and objective evaluations, the proposed method obtains high-quality fused images and shows clear advantages over the comparison methods.
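For illustration, the KL-transform weighting of the sparse components and the final superposition step can be sketched as follows. This is a minimal Python/NumPy sketch under the assumption that the LatLRR sparse components and the GAN-fused low-rank image have already been computed; the function names are hypothetical and are not taken from the paper.

import numpy as np

def kl_transform_weights(s1, s2):
    # Derive fusion weights via the Karhunen-Loeve (KL) transform:
    # eigen-decompose the 2x2 covariance of the flattened sparse
    # components and normalise the dominant eigenvector.
    data = np.stack([s1.ravel(), s2.ravel()])          # 2 x N sample matrix
    cov = np.cov(data)                                  # 2 x 2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])  # dominant eigenvector
    return principal / principal.sum()                  # weights summing to 1

def fuse_sparse(s_ir, s_vis):
    # Weighted fusion of the infrared and visible sparse (detail) components.
    w1, w2 = kl_transform_weights(s_ir, s_vis)
    return w1 * s_ir + w2 * s_vis

def final_fusion(low_rank_fused, s_ir, s_vis):
    # Superimpose the GAN-fused low-rank image and the KL-weighted
    # fused sparse image (all arrays assumed to have the same shape).
    return low_rank_fused + fuse_sparse(s_ir, s_vis)

The sketch only covers the weighting and superposition; the LatLRR decomposition and the dual-discriminator GAN that produces low_rank_fused are described in the paper itself and are not reproduced here.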
Daiyu YUAN, Lihua YUAN, Tengyan XI, Zhe LI. Image fusion of dual-discriminator generative adversarial network and latent low-rank representation[J]. Optics and Precision Engineering, 2023, 31(7): 1085
Category: Information Sciences
Received: Aug. 9, 2022
Accepted: --
Published Online: Apr. 28, 2023
The Author Email: YUAN Lihua (lihuayuan@nchu.edu.cn)