Frequency-Aware Guidance for Blind Image Restoration via Diffusion Models
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Blind image restoration remains a significant challenge in low-level vision tasks. Recently, denoising diffusion models have shown remarkable performance in image synthesis. Guided diffusion models, which leverage the potent generative priors of pre-trained models together with a differentiable guidance loss, have achieved promising results in blind image restoration. However, these models typically enforce data consistency solely in the spatial domain, often resulting in distorted image content. In this paper, we propose a novel frequency-aware guidance loss that can be integrated into various diffusion models in a plug-and-play manner. Our guidance loss, based on the 2D discrete wavelet transform, simultaneously enforces content consistency in both the spatial and frequency domains. Experimental results demonstrate the effectiveness of our method on three blind restoration tasks: blind image deblurring, imaging through turbulence, and blind restoration for multiple degradations. Notably, our method achieves a significant improvement in PSNR, with a gain of 3.72 dB in image deblurring. Moreover, our method generates images with richer details and reduced distortion, leading to the best visual quality.
DOI: 10.48550/arxiv.2411.12450
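
The abstract only sketches the core idea: a guidance loss that measures data consistency both in the spatial domain and in the sub-bands of a 2D discrete wavelet transform. The code below is a minimal illustrative sketch of that general idea in PyTorch, not the authors' implementation; the single-level Haar transform, the degradation operator `degrade`, the observation `y`, the clean-image estimate `x0_hat`, and the sub-band weighting `freq_weight` are all assumptions introduced here for illustration.

```python
import torch

def haar_dwt2(x: torch.Tensor):
    """Single-level 2D Haar DWT of a (B, C, H, W) tensor with even H and W.
    Returns the low-frequency LL band and the (LH, HL, HH) detail bands."""
    a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2  # vertical detail
    hl = (a - b + c - d) / 2  # horizontal detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, (lh, hl, hh)

def frequency_aware_guidance_loss(x0_hat, y, degrade, freq_weight=1.0):
    """Data-consistency loss between the degraded clean-image estimate and the
    observation, enforced in the spatial domain and in the Haar wavelet domain.
    `degrade` stands in for a (differentiable) estimate of the degradation."""
    y_hat = degrade(x0_hat)
    spatial = torch.mean((y_hat - y) ** 2)        # spatial-domain consistency
    ll_hat, details_hat = haar_dwt2(y_hat)
    ll_obs, details_obs = haar_dwt2(y)
    freq = torch.mean((ll_hat - ll_obs) ** 2)     # low-frequency consistency
    for d_hat, d_obs in zip(details_hat, details_obs):
        freq = freq + torch.mean((d_hat - d_obs) ** 2)  # high-frequency consistency
    return spatial + freq_weight * freq
```

In a guided sampler, the gradient of such a loss with respect to the intermediate sample (or the predicted clean image) would steer each denoising step, as in standard spatial-domain guidance, with the wavelet terms additionally penalizing mismatches in high-frequency content.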