Self-Supervised Real-World Image Denoising Based on Multi-Scale Feature Enhancement and Attention Fusion
Published in: IEEE Access, 2024, Vol. 12, pp. 49720-49734
Main Authors: , , ,
Format: Article
Language: English
Online Access: Full text
Summary: Deep learning denoising methods are often constrained by the high cost of acquiring real-world noisy images and the labor-intensive process of dataset construction. Our self-supervised Multi-Scale Blind-Spot Network with Adaptive Feature Fusion (MA-BSN) addresses these issues, offering an efficient solution for image denoising. MA-BSN mitigates two challenges prevalent in existing self-supervised denoising approaches: residual spatial noise correlation and limited receptive fields. The network employs a blind-spot architecture that generates sub-images at multiple scales, improving on plain pixel-shuffle downsampling. A depth-wise convolutional Transformer network (DTN) extracts features over a global receptive field, compensating for the limited receptive field of convolutional neural networks (CNNs). An adaptive feature fusion (AFF) module uses attention mechanisms to refine feature learning for specific regions of the denoised image. We validate the network on the SIDD and DND real-world noise benchmark datasets. On DND it reaches a PSNR/SSIM of 38.41 dB/0.940, surpassing state-of-the-art self-supervised methods and underscoring our approach's denoising capability.
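The abstract names three generic building blocks: pixel-shuffle-style multi-scale downsampling inside a blind-spot network, a depth-wise convolutional Transformer, and attention-based feature fusion. As a rough illustration only (this is not the authors' released code; the names `ps_downsample` and `AttentionFusion` are assumptions of this sketch), the PyTorch snippet below shows how the first and third ideas are commonly realized:

```python
# Minimal sketch, NOT the MA-BSN implementation: two generic building blocks
# of the kind the abstract describes -- space-to-depth downsampling to break
# up spatially correlated noise, and attention-weighted fusion of features.
import torch
import torch.nn as nn
import torch.nn.functional as F


def ps_downsample(x: torch.Tensor, factor: int) -> torch.Tensor:
    """Split an image into factor**2 sub-images (space-to-depth).

    Pixels that were `factor` apart in the original image become direct
    neighbors inside separate sub-images, which weakens the spatial
    correlation of real-world noise -- the premise behind pixel-shuffle
    downsampling in blind-spot denoisers.
    """
    b, c, h, w = x.shape
    sub = F.pixel_unshuffle(x, factor)            # (B, C*f^2, H/f, W/f)
    sub = sub.view(b, c, factor * factor, h // factor, w // factor)
    return sub.permute(0, 2, 1, 3, 4)             # (B, f^2, C, H/f, W/f)


class AttentionFusion(nn.Module):
    """Fuse two feature maps with learned per-channel attention weights.

    A generic stand-in for an adaptive feature fusion module: global average
    pooling summarizes each branch, a shared 1x1 conv scores it, and a
    softmax across the two branches yields per-channel fusion weights.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.score = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        wa = self.score(self.pool(a))             # (B, C, 1, 1) logits
        wb = self.score(self.pool(b))
        w = torch.softmax(torch.stack([wa, wb]), dim=0)
        return w[0] * a + w[1] * b                # weights broadcast over H, W


if __name__ == "__main__":
    img = torch.randn(1, 3, 64, 64)               # toy noisy input
    subs = ps_downsample(img, 2)                  # 4 half-resolution sub-images
    fused = AttentionFusion(channels=3)(subs[:, 0], subs[:, 1])
    print(subs.shape, fused.shape)                # sanity check of shapes
```

In the paper's full pipeline, blocks like these would sit inside a blind-spot network trained self-supervised on the noisy images alone; the sketch only demonstrates the tensor mechanics of the two ideas.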
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3384577