Chinese Journal of Liquid Crystals and Displays, Vol. 38, Issue 2, 245 (2023)
Face image repair network based on face structure guidance
To address the problems of implausible facial semantics and inconsistent facial contours in face images restored by deep learning inpainting networks, a face image inpainting network guided by facial structure information is proposed. First, an encoder-decoder architecture is used to build a face structure sketch generation network, and skip connections and residual blocks with dilated convolutions are added to its generator so that it can generate the structure sketch of the region to be repaired. Second, when the face inpainting network is built, an attention mechanism is introduced into its generator so that the network focuses on the region to be repaired during inpainting; the generated face structure sketch is then used as guidance to produce restorations with plausible facial semantic structure and texture. Finally, a feature matching loss is introduced into the loss function of the structure sketch generation network during training, constraining the generator to produce results closer to the real structure sketch, while perceptual loss and style loss are combined in the loss function of the inpainting network so that the facial contour structure and color texture of the region to be repaired are better reconstructed and the restored image is closer to the real one. Comparative experiments show that the proposed network model achieves a marked improvement in inpainting performance on face image datasets.
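The abstract does not give the exact layer configuration of the structure sketch generator; as a minimal illustration, a residual block with dilated convolution of the kind it describes might look like the following PyTorch sketch (channel count, dilation rate, and normalization choice are assumptions, not values from the paper).

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Residual block with dilated convolutions, of the kind added to
    the structure sketch generator (channel count and dilation rate
    are illustrative assumptions, not values from the paper)."""
    def __init__(self, channels: int = 256, dilation: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(dilation),
            nn.Conv2d(channels, channels, kernel_size=3, dilation=dilation),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The dilated convolution enlarges the receptive field so the
        # sketch generator can reason about larger missing regions,
        # while the residual (skip) path preserves input detail.
        return x + self.body(x)
```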
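The abstract only states that an attention mechanism is added to the inpainting generator, without specifying its form. One common choice, shown here purely as a hedged sketch, is a self-attention layer in the style of SAGAN, which lets each spatial position attend to the whole feature map and can help borrow texture from known regions into the hole.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention layer; the paper only says an
    attention mechanism is added to the inpainting generator, so this
    particular formulation is an assumption."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blending weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).transpose(1, 2)  # (b, hw, c/8)
        k = self.key(x).view(b, -1, h * w)                    # (b, c/8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)             # (b, hw, hw)
        v = self.value(x).view(b, -1, h * w)                  # (b, c, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        # Residual connection: gamma starts at zero, so the network
        # gradually learns how much attended context to mix in.
        return self.gamma * out + x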
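For training, the abstract mentions a feature matching loss for the sketch network and a combination of perceptual and style losses for the inpainting network. The sketch below illustrates common formulations of these losses (the VGG-19 layer indices, style weight, and the choice of the L1 metric are assumptions; only the combination itself is stated in the paper).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # Gram matrix of a feature map; channel correlations capture
    # texture/style statistics.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def feature_matching_loss(real_feats, fake_feats):
    # L1 distance between discriminator intermediate features of the
    # real and generated structure sketches (assumed formulation).
    return sum(F.l1_loss(ff, fr.detach())
               for fr, ff in zip(real_feats, fake_feats))

class PerceptualStyleLoss(nn.Module):
    """Perceptual + style loss over frozen VGG-19 features
    (layer indices and style weight are illustrative assumptions)."""
    def __init__(self, layers=(2, 7, 12, 21), style_weight: float = 120.0):
        super().__init__()
        self.vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.layers = set(layers)
        self.style_weight = style_weight

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        loss = pred.new_zeros(())
        x, y = pred, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layers:
                loss = loss + F.l1_loss(x, y)                # perceptual term
                loss = loss + self.style_weight * F.l1_loss(
                    gram_matrix(x), gram_matrix(y))          # style term
            if i >= max(self.layers):                        # stop early
                break
        return loss
```

In such a scheme, the perceptual term keeps high-level content consistent while the Gram-matrix term matches color and texture statistics, which matches the paper's stated goal of reconstructing both the contour structure and the color texture of the repaired region.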
Hao-de SHI, Ming-ju CHEN, Jin HOU, Lan LI. Face image repair network based on face structure guidance[J]. Chinese Journal of Liquid Crystals and Displays, 2023, 38(2): 245
Category: Research Articles
Received: May 28, 2022
Accepted: --
Published Online: Feb. 20, 2023
The Author Email: Ming-ju CHEN (12347259@qq.com)