Fine-grained Face Editing via Personalized Spatial-aware Affine Modulation
Si Liu, Renda Bao, Defa Zhu, Shaofei Huang, Qiong Yan, Liang Lin, Chao Dong
Fine-grained face editing, as a special case of the image translation task, aims at modifying face attributes according to users’ preferences. Although generative adversarial networks (GANs) have achieved great success in general image translation tasks, these models cannot be directly applied to the face editing problem. Ideal face editing is challenging as it has two special requirements – personalization and spatial-awareness. To address these issues, we propose a novel Personalized Spatial-aware Affine Modulation (PSAM) method based on a general GAN structure. The key idea is to modulate the intermediate features in a personalized and spatial-aware manner, which corresponds to the face editing procedure. Specifically, for personalization, we adopt both the face image and the desired attribute as input to generate the modulation tensors. For spatial-awareness, we set these tensors to be of the same size as the input image, allowing pixelwise modulation. Extensive experiments on four fine-grained face editing tasks, i.e., makeup, expression, illumination and aging, demonstrate the effectiveness of the proposed PSAM method. The synthesis results of PSAM can be further boosted by a new transferable training strategy.
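The core operation described in the abstract – modulating intermediate features with per-pixel affine tensors generated from the input image and the desired attribute – can be sketched as follows. This is a minimal illustrative example, not the paper's actual architecture; the toy generator of gamma/beta maps and its function names are assumptions for illustration only.

```python
import numpy as np

def spatial_affine_modulation(features, gamma, beta):
    """Pixelwise affine modulation: gamma and beta share the features'
    spatial size, so each pixel gets its own editing strength."""
    return gamma * features + beta

def toy_modulation_tensors(image, attribute):
    """Hypothetical stand-in for the modulation network: derives a
    per-pixel scale (gamma) and shift (beta) map from the input image
    and a scalar attribute intensity. Names are illustrative only."""
    gamma = 1.0 + attribute * image        # H x W scale map
    beta = attribute * (1.0 - image)       # H x W shift map
    return gamma, beta

rng = np.random.default_rng(0)
image = rng.random((4, 4))       # toy "input face"
features = rng.random((4, 4))    # toy intermediate feature map

gamma, beta = toy_modulation_tensors(image, attribute=0.5)
edited = spatial_affine_modulation(features, gamma, beta)
assert edited.shape == features.shape  # pixelwise, shape-preserving
```

Note that setting the attribute to zero yields gamma = 1 and beta = 0, i.e., an identity modulation that leaves the features unchanged, which mirrors the intuition that a zero-strength edit should not alter the face.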