Uncertainty-Aware Semantic Guidance and Estimation for Image Inpainting

Bibliographic Details
Published in: IEEE Journal of Selected Topics in Signal Processing, 2021-02, Vol. 15 (2), pp. 310-323
Authors: Liao, Liang; Xiao, Jing; Wang, Zheng; Lin, Chia-Wen; Satoh, Shin'ichi
Format: Article
Language: English

Description
Abstract: Completing a corrupted image by filling in correct structures and reasonable textures for a complex scene remains an elusive challenge. When a missing hole involves diverse semantic information, conventional two-stage approaches based on structural information often lead to unreliable structural prediction and ambiguous visual texture generation. To address this problem, we propose a SEmantic GUidance and Estimation Network (SeGuE-Net) that iteratively evaluates the uncertainty of inpainted visual contents based on pixel-wise semantic inference and alternately optimizes structural priors and inpainted contents. Specifically, SeGuE-Net utilizes semantic segmentation maps as guidance in each iteration of image inpainting, under which location-dependent inferences are re-estimated and, accordingly, poorly inferred regions are refined in subsequent iterations. Extensive experiments on real-world images demonstrate the superiority of our proposed method over state-of-the-art approaches in terms of clear boundaries and photo-realistic textures.
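
The abstract describes an alternating, iteration-by-iteration refinement: segment the current estimate, gauge per-pixel confidence, and re-inpaint only the poorly inferred hole regions under semantic guidance. The sketch below is not the authors' code; segment, estimate_confidence, and inpaint are hypothetical stand-ins for SeGuE-Net's learned components, replaced here by toy NumPy functions so the loop runs end to end.

    import numpy as np

    def segment(image):
        # Hypothetical stand-in for pixel-wise semantic inference:
        # quantize intensities into a small number of class labels.
        return np.clip((image * 4).astype(int), 0, 3)

    def estimate_confidence(image, seg_map):
        # Hypothetical pixel-wise confidence in the current content:
        # pixels far from their semantic class mean count as uncertain.
        conf = np.ones_like(image)
        for label in np.unique(seg_map):
            region = seg_map == label
            conf[region] = 1.0 - np.abs(image[region] - image[region].mean())
        return np.clip(conf, 0.0, 1.0)

    def inpaint(image, refine_mask, seg_map):
        # Hypothetical semantics-guided fill: masked pixels take the mean
        # of unmasked pixels that share the same semantic label.
        out = image.copy()
        for label in np.unique(seg_map):
            known = (seg_map == label) & ~refine_mask
            if known.any():
                out[(seg_map == label) & refine_mask] = image[known].mean()
        return out

    def iterative_refinement(image, hole_mask, iters=3, conf_thresh=0.8):
        # Alternate between semantic guidance / uncertainty estimation and
        # content refinement, revisiting only poorly inferred hole pixels.
        current, refine_mask = image.copy(), hole_mask.copy()
        for _ in range(iters):
            seg_map = segment(current)                    # semantic guidance
            current = inpaint(current, refine_mask, seg_map)
            conf = estimate_confidence(current, seg_map)  # uncertainty map
            refine_mask = hole_mask & (conf < conf_thresh)
        return current

    img = np.random.rand(64, 64)
    hole = np.zeros_like(img, dtype=bool)
    hole[20:40, 20:40] = True                             # the missing region
    completed = iterative_refinement(img, hole)

The point of the sketch is the control flow: the hole mask shrinks to the low-confidence pixels after each pass, so later iterations concentrate on the regions the earlier ones inferred poorly, as described in the abstract.
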
ISSN: 1932-4553, 1941-0484
DOI: 10.1109/JSTSP.2020.3045627