Optics and Precision Engineering, Volume 32, Issue 2, 221 (2024)
PET/CT cross-modal medical image fusion of lung tumors based on DCIF-GAN
Medical image fusion based on the generative adversarial network (GAN) is one of the research hotspots in computer-aided diagnosis. However, GAN-based image fusion methods suffer from unstable training, an insufficient ability to extract local and global contextual semantic information from the images, and insufficient interactive fusion. To solve these problems, this paper proposes a dual-coupled interactive fusion GAN (DCIF-GAN). First, a GAN with dual generators and dual discriminators was designed: the coupling between the generators and between the discriminators was realized through a weight-sharing mechanism, and interactive fusion was realized through a global self-attention mechanism. Second, coupled CNN-Transformer feature extraction and feature reconstruction modules were designed, which improve the ability to extract local and global feature information within the same modality. Third, a cross-modal interactive fusion module (CMIFM) was designed, which interactively fuses image feature information from different modalities. To verify the effectiveness of the proposed model, experiments were carried out on a lung tumor PET/CT medical image dataset. Compared with the best of the four comparison methods, the proposed method improves the average gradient, spatial frequency, structural similarity, standard deviation, peak signal-to-noise ratio, and information entropy by 1.38%, 0.39%, 29.05%, 30.23%, 0.18%, and 4.63%, respectively. The model highlights information in the lesion areas, and the fused image has a clear structure and rich texture details.
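The paper's implementation is not reproduced here; the following PyTorch sketch only illustrates the two mechanisms named in the abstract: coupling two generator branches by sharing a single encoder's weights, and cross-modal interactive fusion via multi-head cross-attention. All module names (SharedEncoder, CrossModalFusion, CoupledGenerators), layer choices, and dimensions are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of (1) generator coupling via weight sharing and
# (2) cross-modal interactive fusion via cross-attention. Names, layers,
# and sizes are assumptions for illustration, not the DCIF-GAN code.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """CNN encoder whose single weight set serves both modalities (the coupling)."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
        )
    def forward(self, x):
        return self.net(x)

class CrossModalFusion(nn.Module):
    """Features of one modality attend to the other (interactive fusion)."""
    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
    def forward(self, f_q, f_kv):
        b, c, h, w = f_q.shape
        q = f_q.flatten(2).transpose(1, 2)    # (B, HW, C) queries
        kv = f_kv.flatten(2).transpose(1, 2)  # keys/values from the other modality
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).reshape(b, c, h, w)

class CoupledGenerators(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = SharedEncoder(ch)      # one instance => shared weights
        self.fuse_pet = CrossModalFusion(ch)
        self.fuse_ct = CrossModalFusion(ch)
        self.decode = nn.Conv2d(2 * ch, 1, 3, padding=1)  # feature reconstruction
    def forward(self, pet, ct):
        f_pet, f_ct = self.encoder(pet), self.encoder(ct)
        f_p = self.fuse_pet(f_pet, f_ct)      # PET attends to CT
        f_c = self.fuse_ct(f_ct, f_pet)       # CT attends to PET
        return torch.tanh(self.decode(torch.cat([f_p, f_c], dim=1)))

fused = CoupledGenerators()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```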
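For reference, below is a minimal NumPy sketch of three of the evaluation metrics cited in the results (average gradient, spatial frequency, information entropy). These are common textbook formulations; metric definitions vary across the fusion literature, so the authors' exact evaluation code may differ.

```python
# Standard formulations of three fusion-quality metrics; not the authors' code.
import numpy as np

def average_gradient(img):
    """Mean magnitude of local intensity gradients (sharpness)."""
    gx, gy = np.gradient(img.astype(np.float64))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

def spatial_frequency(img):
    """Row/column first-difference energy (overall activity level)."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def information_entropy(img, levels=256):
    """Shannon entropy of the grey-level histogram (information content)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```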
Citation: Tao ZHOU, Qianru CHENG, Xiangxiang ZHANG, Qi LI, Huiling LU. PET/CT cross-modal medical image fusion of lung tumors based on DCIF-GAN[J]. Optics and Precision Engineering, 2024, 32(2): 221.
Received: Aug. 2, 2023
Published Online: Apr. 2, 2024
The Author Email: CHENG Qianru (chengqianru5@163.com)