LCRCA: image super-resolution using lightweight concatenated residual channel attention networks

Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2022-07, Vol. 52 (9), p. 10045-10059
Authors: Peng, Changmeng; Shu, Pei; Huang, Xiaoyang; Fu, Zhizhong; Li, Xiaofeng
Format: Article
Language: English
Online access: Full text
Description
Abstract: Deep neural network-based super-resolution methods generate images closer to the original high-resolution images than non-learning-based ones, but their huge and sometimes redundant network structures and parameter counts make them impractical to deploy. To obtain high-quality super-resolution results in computation-resource-limited scenarios, we propose a lightweight skip-concatenated residual channel attention network, LCRCA, for image super-resolution. Specifically, we design a light but efficient deep residual block (DRB) that generates more precise residual information by using more convolution layers under the same computation budget. To enhance the feature maps of the DRB, we propose an improved channel attention mechanism, statistical channel attention (SCA), which introduces channel statistics. In addition, instead of the commonly used skip connections, we propose skip concatenation (SC) to build information flows between feature maps of different layers. Finally, DRB, SCA, and SC are combined to form the proposed network, LCRCA. Experiments on four test sets show that our method gains up to 3.2 dB over bicubic interpolation and 0.12 dB over the representative lightweight method FERN, and recovers image details more accurately than the compared algorithms. Code can be found at https://github.com/pengcm/LCRCA.
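To make the channel-attention idea in the abstract concrete, below is a minimal numpy sketch of what a "statistical" channel attention gate might look like. The abstract only states that SCA introduces channel statistics beyond standard channel attention; the specific choice of statistics (per-channel mean and standard deviation), the two-layer gating network, and all names and shapes here are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def statistical_channel_attention(x, w1, w2):
    """Illustrative channel attention driven by channel statistics.

    x  : feature maps, shape (C, H, W).
    w1 : weights of a bottleneck layer, shape (R, 2*C).
    w2 : weights of an expansion layer, shape (C, R).

    Standard channel attention (e.g., squeeze-and-excitation) describes
    each channel by its global mean alone; here we ASSUME the extra
    statistic is the per-channel standard deviation, concatenated with
    the mean, purely for illustration.
    """
    mean = x.mean(axis=(1, 2))            # (C,) per-channel mean
    std = x.std(axis=(1, 2))              # (C,) per-channel std
    stats = np.concatenate([mean, std])   # (2C,) channel descriptor
    hidden = np.maximum(w1 @ stats, 0.0)  # ReLU bottleneck, (R,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gate in (0, 1), (C,)
    return x * gate[:, None, None]        # rescale each channel map

# Toy usage with random feature maps and random gating weights.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))   # C=8 channels of 16x16 features
w1 = rng.standard_normal((4, 16))      # 2C=16 -> bottleneck R=4
w2 = rng.standard_normal((8, 4))       # bottleneck R=4 -> C=8
y = statistical_channel_attention(x, w1, w2)
print(y.shape)  # (8, 16, 16): same shape, channels rescaled
```

Because the sigmoid gate lies strictly between 0 and 1, each output channel is an attenuated copy of the input channel; informative channels (as judged by the learned gating network) are suppressed less than uninformative ones.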
ISSN: 0924-669X
1573-7497
DOI: 10.1007/s10489-021-02891-5