Detail retaining convolutional neural network for image denoising
Saved in:
Published in: Journal of Visual Communication and Image Representation, 2020-08, Vol. 71, p. 102774, Article 102774
Main authors: , , , , , , ,
Format: Article
Language: eng
Online access: Full text
Abstract:
• Retaining high-frequency detail information is important for image denoising.
• Removing batch normalization (BN) is beneficial for image denoising.
• Image restoration tasks can be finished by a single end-to-end network.
Compared with traditional image denoising methods, the convolutional neural network (CNN) achieves better denoising performance, but one important issue has not been well resolved: the residual image, obtained by learning the difference between noisy and clean image pairs, contains abundant image detail information, resulting in a serious loss of detail in the denoised image. In this paper, in order to relearn the lost image detail information, a mathematical model is derived from a minimization problem and an end-to-end detail retaining CNN (DRCNN) is proposed. Unlike most CNN-based denoising methods, DRCNN focuses not only on image denoising but also on the integrity of high-frequency image content. DRCNN requires fewer parameters and less storage space, and therefore has better generalization ability. Moreover, DRCNN can also adapt to different image restoration tasks such as blind image denoising, single image super-resolution (SISR), blind deblurring, and image inpainting. Extensive experiments show that DRCNN outperforms several classic and recent methods.
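The residual-learning formulation the abstract refers to (the network learns the difference between noisy and clean image pairs, and the clean estimate is recovered by subtraction) can be sketched as follows. This is a minimal illustrative sketch with tiny NumPy arrays and an idealized residual prediction, not the paper's DRCNN architecture:

```python
import numpy as np

# Assumed additive-noise model (illustrative, not from the paper):
#   noisy y = clean x + noise v
rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(8, 8))      # clean image x
noise = rng.normal(0.0, 0.1, size=clean.shape)  # additive noise v
noisy = clean + noise                           # observed image y

# In residual learning, the CNN is trained to predict the residual
# v = y - x rather than x itself; this residual carries the noise
# together with high-frequency detail, which is why detail can be lost.
residual_target = noisy - clean

# Given a residual prediction f(y), the denoised estimate is
#   x_hat = y - f(y).
# Here we substitute the exact residual to show the reconstruction step.
denoised = noisy - residual_target
assert np.allclose(denoised, clean)
```

The subtraction step also makes the abstract's concern concrete: any detail energy the network wrongly attributes to the residual is removed from the output along with the noise.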
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2020.102774