FaceGCN: Structured Priors Inspired Graph Convolutional Networks for Blind Face Restoration



Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2025-01, p. 1-1
Authors: Yan, Weidan; Shao, Wenze; Zhang, Dengyin; Xiao, Liang
Format: Article
Language: English
Description
Abstract: Facial image restoration has made tremendous progress with the rise of deep learning methods. Owing to its strongly ill-posed nature, various categories of a-priori constraints have been harnessed or embedded in existing deep architectures. However, the challenge grows for blind face restoration, where the degradations are more complicated. This paper takes a further step by exploring the potential of graph convolutional networks (GCNs) in conjunction with structured priors for the blind problem. Specifically, a lightweight yet physically more intuitive model termed FaceGCN is proposed. On the one hand, a dynamic generator of facial adjacency matrices, assisted by two self-supervised losses, enables sparse, accurate, and adaptive construction of case-specific face graphs whose nodes are facial feature components. On the other hand, to model the joint local-nonlocal correlations among facial feature components, novel strip-attention GCN modules are developed by splitting facial feature maps into intra- and inter-strips in both the horizontal and vertical orientations. Extensive experimental results show that FaceGCN achieves performance comparable or even superior to state-of-the-art methods at a considerably lower computational cost.
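The record does not include the architectural details, so the following is only a minimal PyTorch sketch of the general idea described in the abstract: horizontal and vertical strips of a facial feature map are pooled into graph nodes, a dynamically predicted (softmax-normalized) adjacency matrix connects them, and one graph-convolution step propagates information before the result is broadcast back onto the feature map. All names (StripGCNSketch, edge_proj, etc.) are hypothetical, and the paper's two self-supervised losses are omitted; this is not the authors' implementation.

```python
# Minimal sketch of a strip-based graph convolution with a dynamic adjacency
# matrix. Illustrative only; it does NOT reproduce FaceGCN's actual modules.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StripGCNSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # One linear "graph convolution" weight shared by all strip nodes.
        self.weight = nn.Linear(channels, channels, bias=False)
        # Tiny projection used to predict pairwise edge scores, standing in
        # for a learned, case-specific adjacency-matrix generator.
        self.edge_proj = nn.Linear(channels, channels, bias=False)

    def _strip_nodes(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> nodes: (B, H + W, C)
        # Each horizontal strip (row) and vertical strip (column) is
        # average-pooled into a single node feature vector.
        row_nodes = x.mean(dim=3).transpose(1, 2)  # (B, H, C)
        col_nodes = x.mean(dim=2).transpose(1, 2)  # (B, W, C)
        return torch.cat([row_nodes, col_nodes], dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        nodes = self._strip_nodes(x)                              # (B, H+W, C)
        # Dynamic adjacency: scaled dot-product scores, row-normalized.
        q = self.edge_proj(nodes)                                 # (B, H+W, C)
        adj = torch.softmax(q @ nodes.transpose(1, 2) / c ** 0.5, dim=-1)
        # One graph-convolution step: aggregate neighbors, then transform.
        out_nodes = F.relu(self.weight(adj @ nodes))              # (B, H+W, C)
        # Broadcast row/column node updates back onto the spatial map (residual).
        row_upd = out_nodes[:, :h, :].transpose(1, 2).unsqueeze(-1)  # (B, C, H, 1)
        col_upd = out_nodes[:, h:, :].transpose(1, 2).unsqueeze(-2)  # (B, C, 1, W)
        return x + row_upd + col_upd


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(StripGCNSketch(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```

The strip pooling keeps the node set small (H + W nodes per map), which is consistent with the abstract's emphasis on modeling joint local-nonlocal correlations at low computational cost, though the actual intra-/inter-strip split in the paper may differ.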
ISSN: 1051-8215; 1558-2205
DOI: 10.1109/TCSVT.2025.3526841