Opto-Electronic Engineering, Volume 49, Issue 4, 210317 (2022)
NIR-VIS face image translation method with dual contrastive learning framework
Fig. 1. Comparison of the VIS images (first row) generated from the NIR domain by several algorithms with the real visible images (last row)
Fig. 2. The structure diagram of the proposed method. To keep the network structure simple, the identity loss is not shown in the figure; see Section 2.4.4 for details
Fig. 4. Facial regions are cropped and edges are extracted from face images under NIR and VIS conditions, respectively
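As a point of reference for the preprocessing shown in Fig. 4, the following Python/OpenCV sketch illustrates one way to crop a facial region and extract its edges; it is not the authors' pipeline, and the Haar-cascade detector, margin, file names, and Canny thresholds are assumptions made for illustration.

# Hypothetical preprocessing sketch for Fig. 4: detect a face, crop it,
# then extract an edge map. Detector, margin, and thresholds are assumptions.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face_and_edges(path, margin=0.1):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, None
    x, y, w, h = faces[0]               # take the first detected face box
    m = int(margin * max(w, h))         # pad the box by a small margin
    x0, y0 = max(x - m, 0), max(y - m, 0)
    face = gray[y0:y + h + m, x0:x + w + m]
    edges = cv2.Canny(face, 100, 200)   # edge map of the cropped region
    return face, edges

# The same routine applies to NIR and VIS inputs (file names are placeholders)
nir_face, nir_edges = crop_face_and_edges("nir_face.png")
vis_face, vis_edges = crop_face_and_edges("vis_face.png")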
Fig. 5. Comparison of experimental results on the two datasets. From left to right: input NIR face image, CycleGAN, CSGAN, CDGAN, UNIT, Pix2pixHD, the proposed method, and real VIS face image. Rows Ⅰ~Ⅲ are from the NIR-VIS Sx1 dataset, and rows Ⅳ~Ⅶ are from the NIR-VIS Sx2 dataset
Fig. 6. Results of the ablation experiments on the two datasets. From left to right: input NIR face image, Baseline method, the proposed method without StyleGAN2, …
Fig. 7. Comparison of edge images obtained with each edge extraction method. From left to right: real face image, Roberts operator, Prewitt operator, Sobel operator, Laplacian operator, Canny operator
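To make the comparison in Fig. 7 concrete, the sketch below shows how such edge maps can be computed in Python with OpenCV; it is not the authors' implementation, and the input file name, kernel choices, and Canny thresholds are illustrative assumptions. Roberts and Prewitt have no dedicated OpenCV function, so they are applied with cv2.filter2D using their standard kernels.

# Illustrative edge-operator comparison for Fig. 7 (not the authors' code).
import cv2
import numpy as np

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # placeholder input path

def grad_magnitude(gx, gy):
    # Combine horizontal and vertical responses into one edge map
    mag = np.sqrt(gx.astype(np.float64) ** 2 + gy.astype(np.float64) ** 2)
    return cv2.convertScaleAbs(mag)

# Roberts cross operator (2x2 diagonal-difference kernels)
roberts_x = np.array([[1, 0], [0, -1]], dtype=np.float32)
roberts_y = np.array([[0, 1], [-1, 0]], dtype=np.float32)
roberts = grad_magnitude(cv2.filter2D(img, cv2.CV_64F, roberts_x),
                         cv2.filter2D(img, cv2.CV_64F, roberts_y))

# Prewitt operator (3x3 first-derivative kernels)
prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
prewitt_y = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=np.float32)
prewitt = grad_magnitude(cv2.filter2D(img, cv2.CV_64F, prewitt_x),
                         cv2.filter2D(img, cv2.CV_64F, prewitt_y))

# Sobel and Laplacian operators (built into OpenCV)
sobel = grad_magnitude(cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3),
                       cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3))
laplacian = cv2.convertScaleAbs(cv2.Laplacian(img, cv2.CV_64F, ksize=3))

# Canny operator; the 100/200 hysteresis thresholds are assumptions
canny = cv2.Canny(img, 100, 200)

for name, edge in [("roberts", roberts), ("prewitt", prewitt),
                   ("sobel", sobel), ("laplacian", laplacian), ("canny", canny)]:
    cv2.imwrite(f"edge_{name}.png", edge)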
Fig. 8. The effect of different values of
Rui Sun, Xiaoquan Shan, Qijing Sun, Chunjun Han, Xudong Zhang. NIR-VIS face image translation method with dual contrastive learning framework[J]. Opto-Electronic Engineering, 2022, 49(4): 210317
Received: Sep. 30, 2021
Accepted: --
Published Online: May 24, 2022
The Author Email: Shan Xiaoquan (2334321350@qq.com)