Optics and Precision Engineering, Volume 30, Issue 10, 1217 (2022)
Single-image translation based on multi-scale dense feature fusion
To address the low image quality and poor detail preservation of existing single-image translation models, this paper proposes a single-image translation model based on multi-scale dense feature fusion. First, following the idea of a multi-scale pyramid structure, the model downsamples the source and target images to obtain input images of different sizes. Then, in the generator, the images at each scale are fed into dense feature modules that extract style features and transfer them from the source image to the target image, and the translated image is produced through adversarial training against the discriminator. Finally, a dense feature module is added at each training stage by progressively growing the generator, so that the style of the generated image is transferred from global to local, yielding the required translated image. Extensive experiments on various unsupervised image translation tasks show that, compared with existing methods, the proposed method shortens training time by 80% and reduces the SIFID of the generated images by 22.18%. The proposed model therefore better captures the distribution difference between the source and target domains and improves the quality of image translation.
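The multi-scale pyramid step described above can be sketched as follows. This is a minimal, dependency-free illustration of building a coarse-to-fine image pyramid by repeated downsampling; the scale count, downsampling factor, and function names are assumptions for illustration, since the abstract does not give the paper's exact settings.

```python
import numpy as np

def build_pyramid(image, num_scales=4, factor=0.75):
    """Return progressively downsampled copies of `image`, coarsest first,
    as used by coarse-to-fine multi-scale single-image models.
    (Hypothetical parameters; not the paper's exact configuration.)"""
    pyramid = [image]
    for _ in range(num_scales - 1):
        prev = pyramid[-1]
        h = max(1, int(round(prev.shape[0] * factor)))
        w = max(1, int(round(prev.shape[1] * factor)))
        # Nearest-neighbour resize keeps the sketch self-contained;
        # a real pipeline would use a proper anti-aliased resize.
        rows = (np.arange(h) / factor).astype(int).clip(0, prev.shape[0] - 1)
        cols = (np.arange(w) / factor).astype(int).clip(0, prev.shape[1] - 1)
        pyramid.append(prev[rows][:, cols])
    pyramid.reverse()  # coarsest scale first, matching coarse-to-fine training
    return pyramid

img = np.zeros((64, 64, 3))
scales = build_pyramid(img)
```

Training then proceeds from the coarsest scale upward, adding a new dense feature module (and its discriminator) at each scale, in the spirit of progressively grown generators.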
Qihang LI, Long FENG, Qing YANG, Yu WANG, Guohua GENG. Single-image translation based on multi-scale dense feature fusion[J]. Optics and Precision Engineering, 2022, 30(10): 1217
Category: Information Sciences
Received: Dec. 22, 2021
Accepted: --
Published Online: Jun. 1, 2022
The Author Email: GENG Guohua (1925995331@qq.com)