A generative adversarial network with multi-scale convolution and dilated convolution res-network for OCT retinal image despeckling
Published in: Biomedical Signal Processing and Control, 2023-02, Vol. 80, p. 104231, Article 104231
Format: Article
Language: English
Online access: Full text
Abstract: Optical coherence tomography (OCT) has been widely adopted for imaging in various areas, yet it is strongly affected by speckle noise generated by coherent multiply scattered photons. To alleviate the influence of speckle noise, a generative adversarial network with a multi-scale convolution and dilated convolution res-network (MDR-GAN) is proposed in this study. Specifically, a cascade multi-scale module (CMSM) consisting of three convolution and dilated convolution res-network (CD-Rn) blocks is proposed to increase the network's learning capacity, while a new residual learning method is devised to link the input and output feature maps for feature reconstruction. The CMSM captures multi-scale local features of the images, and residual learning effectively mitigates the network degradation problem. Extensive experiments on four retinal OCT datasets are conducted, and the results are compared with those of state-of-the-art deep learning networks to verify the effectiveness of the proposed MDR-GAN. The results demonstrate that the denoising effect of MDR-GAN is better than that of the other denoising methods. The peak signal-to-noise ratio (PSNR) of MDR-GAN is improved by 2 dB compared with that of Pix2pix, while its equivalent number of looks (ENL) is improved by at least 233.9% compared with the existing state-of-the-art methods. Our MDR-GAN code can be downloaded at https://github.com/Austin-Lms/MDR-GAN.
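The abstract only names the building blocks (CD-Rn blocks with standard and dilated convolutions, residual links, and a three-block cascade multi-scale module). The following PyTorch sketch illustrates how such a module could be assembled; the channel widths, kernel sizes, dilation rates, and branch fusion are illustrative assumptions, not the authors' exact configuration, which is available in the linked repository.

```python
import torch
import torch.nn as nn


class CDRnBlock(nn.Module):
    """Illustrative convolution + dilated-convolution residual (CD-Rn) block."""

    def __init__(self, channels: int = 64, dilation: int = 2):
        super().__init__()
        # Standard 3x3 convolution branch (local features).
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # Dilated 3x3 convolution branch: larger receptive field, same parameter count.
        self.dconv = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)
        # 1x1 convolution to fuse the two branches back to `channels` maps (assumed).
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.cat([self.act(self.conv(x)), self.act(self.dconv(x))], dim=1)
        # Residual learning: add the block input back to the fused output.
        return self.act(x + self.fuse(y))


class CMSM(nn.Module):
    """Cascade multi-scale module: three CD-Rn blocks in sequence, per the abstract."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.blocks = nn.Sequential(*[CDRnBlock(channels) for _ in range(3)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.blocks(x)
```

In a GAN setting such a module would typically sit inside the generator, between an initial feature-extraction convolution and a final reconstruction layer; the discriminator and loss terms are not described in the abstract and are omitted here.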
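The reported gains are measured by PSNR and ENL. The snippet below computes both using their standard definitions with NumPy; the choice of homogeneous background region for ENL is an assumption, as the abstract does not specify how it is selected.

```python
import numpy as np


def psnr(clean: np.ndarray, denoised: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a clean reference and a denoised image."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)


def enl(region: np.ndarray) -> float:
    """Equivalent number of looks over a homogeneous region.

    Defined as mean^2 / variance; higher values indicate stronger speckle suppression.
    """
    region = region.astype(np.float64)
    return (region.mean() ** 2) / region.var()
```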
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2022.104231