Accurate and Efficient Image Super-Resolution via Global-Local Adjusting Dense Network

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2021, Vol. 23, pp. 1924-1937
Main authors: Zhang, Xinyan; Gao, Peng; Liu, Sunxiangyu; Zhao, Kongya; Li, Guitao; Yin, Liuguo; Chen, Chang Wen
Format: Article
Language: English
Description
Abstract: Convolutional neural network-based (CNN-based) methods have shown superior performance on the image super-resolution (SR) task. However, several studies have shown that obtaining better reconstruction results often leads to a significant increase in parameters and computation. To alleviate this computational burden, we propose a novel global-local adjusting dense super-resolution network (GLADSR) to build a powerful yet lightweight CNN-based SR model. To enhance the network capacity, we present a global-local adjusting module (GLAM), which adaptively reallocates processing resources through a local selective block (LSB) and a global guided block (GGB). The GLAMs are linked with nested dense connections to make better use of the global-local adjusted features. In addition, we introduce a separable pyramid upsampling (SPU) module to replace the regular upsampling operation, which substantially reduces parameters while obtaining better results. Furthermore, we show that the proposed refinement structure is capable of reducing image artifacts in SR processing. Extensive experiments on benchmark datasets show that the proposed GLADSR outperforms state-of-the-art methods with much fewer parameters and much less computational cost.
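The abstract only names the building blocks; neither the GLAM internals nor the SPU layout are specified here. As a rough illustration of the parameter-saving idea behind replacing a regular sub-pixel upsampler with a separable one, the following PyTorch sketch compares a standard 3x3 convolution + PixelShuffle upsampler with a depthwise-separable variant. Module names, kernel sizes, and channel counts are illustrative assumptions, not the actual GLADSR design.

import torch
import torch.nn as nn

class StandardUpsampler(nn.Module):
    """Regular sub-pixel upsampling: a full 3x3 convolution followed by PixelShuffle."""
    def __init__(self, channels: int, scale: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

class SeparableUpsampler(nn.Module):
    """Parameter-reduced variant: depthwise 3x3 convolution plus pointwise 1x1 convolution."""
    def __init__(self, channels: int, scale: int):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels * scale ** 2, kernel_size=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.pointwise(self.depthwise(x)))

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)  # a low-resolution feature map
    for module in (StandardUpsampler(64, 2), SeparableUpsampler(64, 2)):
        params = sum(p.numel() for p in module.parameters())
        print(type(module).__name__, tuple(module(x).shape), f"{params} parameters")

For 64 input channels and a 2x scale, the standard upsampler has roughly 148K parameters while the separable variant has about 17K, which mirrors the kind of reduction the abstract attributes to the SPU module, though the paper's actual mechanism may differ.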
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2020.3005025