Multipath feedforward network for single image super-resolution

Bibliographic details
Published in: Multimedia Tools and Applications, July 2019, Vol. 78 (14), pp. 19621-19640
Authors: Shen, Mingyu; Yu, Pengfei; Wang, Ronggui; Yang, Juan; Xue, Lixia; Hu, Min
Format: Article
Language: English
Online access: Full text
Description
Abstract: Single image super-resolution (SR) models based on convolutional neural networks mostly build the network by chained stacking, which ignores the role of hierarchical features and the relationships between layers and thus loses high-frequency components. To address these drawbacks, we introduce a novel multipath feedforward network (MFNet) built on a staged feature fusion unit (SFF). By changing the connections within the network, MFNet strengthens inter-layer relationships and improves information flow, thereby extracting richer high-frequency components. First, the SFF extracts and integrates hierarchical features through dense connections, which expands the information flow of the network; an adaptive method then learns the effective features among these hierarchical features. Next, to strengthen the relationships between layers and make full use of the hierarchical features, a multi-feedforward structure connects the SFF units, enabling multipath feature re-use and extracting richer high-frequency components on this basis. Finally, image reconstruction combines the shallow features with the global residual. Extensive benchmark evaluation shows that MFNet achieves significant improvements over state-of-the-art methods.
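
For readers who want a concrete picture of the ideas summarized in the abstract, the following is a minimal, illustrative PyTorch sketch, not the authors' MFNet implementation: the layer widths, block count, and upscaling head are assumptions, and the code only approximates the described components (a staged feature fusion unit as dense connections followed by a learned 1x1 fusion, multi-feedforward links between units, and a global-residual reconstruction).

# Minimal sketch of the ideas in the abstract. All hyperparameters here
# (channels, growth rate, number of layers/blocks, scale factor) are
# illustrative assumptions, not the published MFNet configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SFFUnit(nn.Module):
    """Staged feature fusion unit (approximation): densely connected convs
    followed by an adaptive, learned 1x1 fusion of the concatenated
    hierarchical features."""

    def __init__(self, channels=64, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth  # dense connection: each layer sees all earlier features
        self.fuse = nn.Conv2d(in_ch, channels, 1)  # learned fusion of hierarchical features

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))


class MFNetSketch(nn.Module):
    """Multipath feedforward backbone (approximation): each SFF unit receives
    the shallow features plus the outputs of all preceding units, and
    reconstruction adds a global residual to the upscaled input."""

    def __init__(self, channels=64, n_blocks=4, scale=2):
        super().__init__()
        self.shallow = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList([SFFUnit(channels) for _ in range(n_blocks)])
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.scale = scale

    def forward(self, x):
        shallow = self.shallow(x)
        outputs = []
        for block in self.blocks:
            # Multi-feedforward connections: every unit also sees the
            # outputs of all earlier units, enabling feature re-use.
            block_in = shallow + sum(outputs) if outputs else shallow
            outputs.append(block(block_in))
        deep = outputs[-1] + shallow  # global residual over the feature path
        upscaled = F.interpolate(x, scale_factor=self.scale, mode="bicubic")
        return self.upsample(deep) + upscaled  # combine with the upscaled input


if __name__ == "__main__":
    lr = torch.randn(1, 3, 32, 32)
    print(MFNetSketch()(lr).shape)  # torch.Size([1, 3, 64, 64])
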
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-019-7334-9