Reference Image-Assisted Auxiliary Feature Fusion in Image Inpainting

Detailed Description

Saved in:
Bibliographic details
Published in: IEEE Signal Processing Letters, 2024, Vol. 31, pp. 1394-1398
Main authors: Bai, Shaojie; Lin, Lixia; Hu, Zihan; Cao, Peng
Format: Article
Language: English
Description
Summary: Image inpainting has achieved remarkable advancement in improving image quality by reconstructing the corrupted area with perceptual rationality. However, when the key semantic features of the masked area are blurred, existing methods that rely solely on information surrounding the corrupted region may not be sufficient to restore the precise semantics of the original image. To solve this problem, in this letter, we propose a dual codec network guided by reference images, which obtains effective semantic priors to reconstruct the lost detail features of the corrupted image. Our network first aligns the reference image using a SIFT alignment module, then extracts the texture features and the gradient details through a Fourier texture restoration module and an auxiliary feature extraction module, respectively, which are finally fused for image inpainting. To validate our scheme, we introduce the Reimage3K dataset, which covers more realistic and diverse scenarios than existing datasets. Comprehensive experiments show that our scheme outperforms the state of the art on PSNR, FID, and LPIPS by 3.29 dB, 54.5%, and 51.3% on DPED10K, and by 3.74 dB, 48.4%, and 55.8% on Reimage3K, respectively.
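The Fourier texture restoration step mentioned in the abstract can be illustrated with a minimal NumPy sketch. The paper's actual module architecture is not detailed in this record; the amplitude/phase decomposition and the blending weight `alpha` below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def fourier_texture_transfer(corrupted, reference, alpha=0.5):
    """Illustrative sketch: blend the amplitude spectrum of a reference
    image into a corrupted image while keeping the corrupted image's
    phase. The amplitude spectrum carries global texture statistics,
    while the phase encodes structural layout. `alpha` is a
    hypothetical blending weight for this sketch only.
    """
    f_cor = np.fft.fft2(corrupted)
    f_ref = np.fft.fft2(reference)
    # Blend texture (amplitude) from the reference into the corrupted image.
    amp = (1 - alpha) * np.abs(f_cor) + alpha * np.abs(f_ref)
    # Keep the corrupted image's structure (phase) unchanged.
    phase = np.angle(f_cor)
    restored = np.fft.ifft2(amp * np.exp(1j * phase)).real
    return restored

# Toy example on random grayscale patches.
rng = np.random.default_rng(0)
corrupted = rng.random((64, 64))
reference = rng.random((64, 64))
out = fourier_texture_transfer(corrupted, reference)
print(out.shape)  # (64, 64)
```

With `alpha=0` the function reduces to an identity round-trip through the FFT, which is a quick sanity check that the decomposition is lossless.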
ISSN: 1070-9908 (print); 1558-2361 (electronic)
DOI: 10.1109/LSP.2024.3398536