Speckle2Void: Deep Self-Supervised SAR Despeckling with Blind-Spot Convolutional Neural Networks
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Information extraction from synthetic aperture radar (SAR) images is heavily impaired by speckle noise, hence despeckling is a crucial preliminary step in scene analysis algorithms. The recent success of deep learning envisions a new generation of despeckling techniques that could outperform classical model-based methods. However, current deep learning approaches to despeckling require supervision for training, whereas clean SAR images are impossible to obtain. In the literature, this issue is tackled by resorting to either synthetically speckled optical images, which exhibit different properties with respect to true SAR images, or multi-temporal SAR images, which are difficult to acquire or fuse accurately. In this paper, inspired by recent works on blind-spot denoising networks, we propose a self-supervised Bayesian despeckling method. The proposed method is trained using only noisy SAR images and can therefore learn features of real SAR images rather than of synthetic data. Experiments show that the performance of the proposed approach is very close to that of supervised training on synthetic data, and superior on real data in both quantitative and visual assessments.
DOI: 10.48550/arxiv.2007.02075
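
The abstract's key idea is that a blind-spot network can be trained on noisy SAR images alone, because each output pixel is predicted only from its neighbours and never from its own noisy value. The sketch below, written in PyTorch, illustrates only that receptive-field masking idea with a hypothetical `CentreMaskedConv2d` layer; it is not the architecture or the Bayesian formulation used in the Speckle2Void paper, just a minimal self-supervised setup under those assumptions.

```python
# Minimal sketch of the blind-spot idea: a convolution whose central kernel
# weight is zeroed, so each output pixel never sees its own noisy input value.
# Layer and variable names here are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentreMaskedConv2d(nn.Conv2d):
    """2D convolution with the central kernel weight masked out."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        mask = torch.ones_like(self.weight)
        mask[:, :, kernel_size // 2, kernel_size // 2] = 0.0  # hide the centre pixel
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Re-apply the mask at every forward pass so the centre weight stays zero.
        return F.conv2d(x, self.weight * self.mask, self.bias, padding=self.padding)

if __name__ == "__main__":
    # Toy self-supervised training step: predict each pixel's intensity from its
    # neighbours only, using noisy patches as both input and target. The 1x1
    # convolution after the masked layer keeps the blind spot intact.
    net = nn.Sequential(
        CentreMaskedConv2d(1, 32), nn.ReLU(),
        nn.Conv2d(32, 1, kernel_size=1),
    )
    noisy = torch.rand(4, 1, 64, 64)            # stand-in for noisy SAR intensity patches
    loss = ((net(noisy) - noisy) ** 2).mean()   # reconstruction loss without clean targets
    loss.backward()
```

Because the centre pixel is hidden, the network cannot learn the identity mapping; minimising the reconstruction loss forces it to estimate each pixel from spatial context, which is the mechanism that makes training on noisy images alone possible. The actual method additionally exploits a statistical model of speckle to produce the final Bayesian estimate.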