CNF+CT: Context Network Fusion of Cascade-Trained Convolutional Neural Networks for Image Super-Resolution
Published in: IEEE Transactions on Computational Imaging, 2020, Vol. 6, p. 447-462
Main authors: , ,
Format: Article
Language: English
Online access: Order full text
Summary: A novel cascade learning framework for incrementally training deeper and more accurate convolutional neural networks is introduced. The proposed cascade learning facilitates the training of deep, efficient networks with plain convolutional neural network (CNN) architectures as well as with residual network (ResNet) architectures. This is demonstrated on the problem of image super-resolution (SR). We show that cascade-trained (CT) SR CNNs and CT-ResNets can achieve state-of-the-art results with fewer network parameters. To further improve the network's efficiency, we propose a cascade trimming strategy that progressively reduces the network size by trimming a group of layers at a time while preserving the network's discriminative ability. We propose context network fusion (CNF) as a method to combine features from an ensemble of networks through context fusion layers. We show that CNF of an ensemble of CT SR networks can yield a network with better efficiency and accuracy than other fusion methods. CNF can also be trained with the proposed edge-aware loss function to obtain sharper edges and improve perceptual image quality. Experiments on benchmark datasets show that our proposed deep convolutional networks achieve state-of-the-art accuracy and are much faster than existing deep super-resolution networks.
ISSN: 2573-0436, 2333-9403
DOI: 10.1109/TCI.2019.2956874
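The cascade training and cascade trimming procedures summarized in the abstract can be illustrated with a toy schedule: layers are appended incrementally during training, and trimming later removes a group of layers at a time. This is a minimal sketch with illustrative names, not the authors' code, and it omits the actual fine-tuning that would happen at each stage:

```python
def cascade_train(depth_schedule):
    """Grow a toy 'network' (a list of layer names) stage by stage.

    Each stage deepens the model to the next target depth; in the real
    method, the enlarged network would be fine-tuned after each stage.
    """
    layers = []
    for target_depth in depth_schedule:
        while len(layers) < target_depth:
            layers.append(f"conv_{len(layers) + 1}")
        # (fine-tuning of the deepened network would occur here)
    return layers


def cascade_trim(layers, group_size):
    """Remove one group of layers (deepest first), mimicking the
    progressive trimming strategy that shrinks the network while
    aiming to preserve its discriminative ability."""
    trimmed = list(layers)
    if len(trimmed) > group_size:
        trimmed = trimmed[:-group_size]
    return trimmed


net = cascade_train([2, 4, 6])   # grow to 6 layers over three stages
smaller = cascade_trim(net, 2)   # trim one group of 2 layers -> 4 layers
```

In the paper, each growth stage and each trimming step is followed by further training; the sketch only captures the schedule of adding and removing layer groups.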