Face Editing Based on Facial Recognition Features


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Cognitive and Developmental Systems, 2023-06, Vol. 15 (2), p. 774-783
Main Authors: Ning, Xin, Xu, Shaohui, Nan, Fangzhe, Zeng, Qingliang, Wang, Chen, Cai, Weiwei, Li, Weijun, Jiang, Yizhang
Format: Article
Language: English
Description
Abstract: Face editing generates a face image with the target attributes without changing the identity or other information. Current methods have achieved considerable performance; however, they cannot effectively retain the face's identity and semantic information while controlling the attribute intensity. Inspired by two human cognitive characteristics, namely, the principle of global precedence and the principle of homology continuity, we propose a novel face editing approach called the information retention and intensity control generative adversarial network (IricGAN). It includes a learnable hierarchical feature combination (HFC) function, which can construct a sample's source space through multiscale feature mixing; it can guarantee the integrity of the source space while significantly compressing the network. Additionally, the attribute regression module (ARM) can decouple different attribute paradigms in the source space to ensure the correct modification of the required attributes and preserve the other areas. The gradual process of modifying the face attributes can be simulated by applying different control strengths in the source space. In face editing experiments, both qualitative and quantitative results demonstrate that IricGAN achieves the best overall results among state-of-the-art alternatives. Target attributes can be continuously modified by re-feeding the relationship between the source space and the image, and the independence of each attribute can be retained to the greatest extent. IricGAN: https://github.com/nanfangzhe/IricGAN .
ISSN: 2379-8920
eISSN: 2379-8939
DOI: 10.1109/TCDS.2022.3182650
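The intensity-control idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the latent code, the decoupled attribute direction, and the `edit` helper below are all illustrative assumptions standing in for the source space that IricGAN's HFC and ARM modules learn; only the general mechanism (moving a latent code along an attribute direction with a scalar control strength) reflects the abstract.

```python
import math
import random

random.seed(0)

latent_dim = 8
# Stand-in for a sample's code in the learned source space (assumed).
source_code = [random.gauss(0.0, 1.0) for _ in range(latent_dim)]

# Stand-in for one decoupled attribute direction, normalized to unit length (assumed).
raw = [random.gauss(0.0, 1.0) for _ in range(latent_dim)]
norm = math.sqrt(sum(x * x for x in raw))
attribute_dir = [x / norm for x in raw]

def edit(code, direction, strength):
    """Move the latent code along one attribute direction by `strength`."""
    return [c + strength * d for c, d in zip(code, direction)]

# Gradually increasing the control strength simulates the gradual process
# of modifying a face attribute; strength 0 leaves the code unchanged,
# which is how the rest of the content is preserved in this toy picture.
edits = [edit(source_code, attribute_dir, s) for s in (0.0, 0.5, 1.0)]
```

Because each attribute has its own direction in this picture, edits along one direction leave the components tied to other attributes untouched, which mirrors the attribute-independence claim in the abstract.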