Tree-Structured Dilated Convolutional Networks for Image Compressed Sensing


Bibliographic Details
Published in: IEEE Access, 2022, Vol. 10, pp. 98374-98383
Authors: Lu, Rui; Ye, Kuntao
Format: Article
Language: English
Online access: Full text
Description
Abstract: To better recover a sparse image signal carrying redundant information from far fewer measurements than the Nyquist-Shannon sampling theorem requires, convolutional neural networks (CNNs) can be used to emulate the compressed sensing (CS) process. However, existing CNN-based CS methods suffer from high computational complexity and unsatisfactory reconstruction quality. This study presents a faster CNN-based algorithm for obtaining reconstructed images with finer texture details from CS measurements. A tree-structured dilated convolutional network (TDCN) for image CS is proposed. To extract multi-scale image features as fully as possible for better image reconstruction, the TDCN combines tree-structured residual blocks made of three dilated convolution layers with different dilation factors; the output of each dilated convolution layer is directed to a fusion layer to eliminate the information loss caused by multiple cascaded dilated convolutions. Moreover, L1 loss is employed as the objective optimization function instead of L2 loss to improve the network's training and achieve better convergence. Extensive CS experiments demonstrate that the proposed TDCN outperforms existing state-of-the-art methods in terms of both PSNR and SSIM at different sampling rates while maintaining a fast computational speed. Our code and the trained model are available at https://github.com/UHADS/TDCN .
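The abstract describes the block design concretely enough to sketch. Below is a minimal, hypothetical PyTorch rendering of one tree-structured residual block: three cascaded dilated convolutions whose individual outputs all feed a 1x1 fusion layer, with L1 loss used for training. The dilation factors (1, 2, 4), channel width, and all names here are illustrative assumptions, not taken from the paper; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class TreeResidualBlock(nn.Module):
    """Hypothetical sketch of a tree-structured residual block as described
    in the abstract: three cascaded dilated convolutions with different
    dilation factors, each of whose outputs is routed to a fusion layer.
    Dilation factors (1, 2, 4) are assumptions, not from the paper."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Padding matches dilation so spatial size is preserved.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=4, dilation=4)
        self.relu = nn.ReLU(inplace=True)
        # Fusion layer: 1x1 conv over the concatenated branch outputs,
        # intended to counteract information loss from cascading dilations.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        f1 = self.relu(self.conv1(x))
        f2 = self.relu(self.conv2(f1))  # cascaded dilated convolutions
        f3 = self.relu(self.conv3(f2))
        # Every dilated layer's output reaches the fusion layer directly.
        fused = self.fuse(torch.cat([f1, f2, f3], dim=1))
        return x + fused                # residual connection

# The abstract reports L1 loss converges better than L2 for this network:
block = TreeResidualBlock(64)
x = torch.randn(1, 64, 32, 32)
target = torch.randn(1, 64, 32, 32)
loss = nn.L1Loss()(block(x), target)    # L1 instead of nn.MSELoss()
loss.backward()
```

Routing every intermediate output into the fusion layer, rather than only the last one, is what the abstract credits with recovering the detail that would otherwise be lost across the cascaded dilated convolutions.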
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3206016