Masked Pre-training Enables Universal Zero-shot Denoiser
Saved in:
Main authors: | , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | In this work, we observe that a model trained on vast general images via a masking strategy naturally embeds knowledge of their distribution, and thus spontaneously attains the underlying potential for strong image denoising. Based on this observation, we propose a novel zero-shot denoising paradigm, i.e., Masked Pre-train then Iterative fill (MPI). MPI first trains a model via masking and then employs the pre-trained weights for high-quality zero-shot image denoising on a single noisy image. Concretely, MPI comprises two key procedures: 1) Masked Pre-training trains a model to reconstruct massive natural images under random masking, yielding generalizable representations and the potential for valid zero-shot denoising on images with varying noise degradation, and even on distinct image types. 2) Iterative filling exploits the pre-trained knowledge for effective zero-shot denoising: it iteratively optimizes the image by leveraging the pre-trained weights, focusing on alternately reconstructing different image parts, and gradually assembles the fully denoised image within a limited number of iterations. Comprehensive experiments across various noisy scenarios underscore the notable advances of MPI over previous approaches, with a marked reduction in inference time. Code is available at https://github.com/krennic999/MPI. |
---|---|
DOI: | 10.48550/arxiv.2401.14966 |
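The iterative-filling idea in the abstract can be sketched in a toy form: repeatedly draw a random pixel mask, let a reconstruction model predict the masked pixels from the unmasked noisy ones, and average the predictions each pixel receives across iterations. This is a minimal illustration, not the paper's implementation; in particular, `toy_reconstruct` (a local-mean filler) is a hypothetical stand-in for the actual pre-trained masked reconstruction network, and the function names, mask ratio, and iteration count are assumptions.

```python
import numpy as np

def toy_reconstruct(noisy, mask):
    # Placeholder for the pre-trained masked reconstruction network:
    # fills each masked pixel with the mean of its unmasked 3x3 neighbors.
    out = noisy.copy()
    h, w = noisy.shape
    for y, x in zip(*np.nonzero(mask)):
        ys = slice(max(y - 1, 0), min(y + 2, h))
        xs = slice(max(x - 1, 0), min(x + 2, w))
        patch = noisy[ys, xs]
        known = ~mask[ys, xs]
        if known.any():
            out[y, x] = patch[known].mean()
    return out

def iterative_fill(noisy, n_iter=8, mask_ratio=0.5, seed=0):
    # Alternately reconstruct different (randomly masked) image parts and
    # assemble the denoised image by averaging per-pixel predictions.
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(noisy, dtype=float)
    cnt = np.zeros(noisy.shape)
    for _ in range(n_iter):
        mask = rng.random(noisy.shape) < mask_ratio
        pred = toy_reconstruct(noisy, mask)
        acc[mask] += pred[mask]  # keep only predictions at masked positions
        cnt[mask] += 1
    out = noisy.copy()          # pixels never masked keep their noisy value
    seen = cnt > 0
    out[seen] = acc[seen] / cnt[seen]
    return out
```

Because each masked pixel is predicted only from its unmasked neighbors, independent noise cannot be copied through, which is the intuition behind masking-based zero-shot denoising.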