Improving Super-Resolution Methods via Incremental Residual Learning
Format: Article
Language: English
Abstract: Recently, Convolutional Neural Networks (CNNs) have shown promising performance in super-resolution (SR). However, these methods operate primarily on Low Resolution (LR) inputs for memory efficiency, which limits, as we demonstrate, their ability to (i) model high-frequency information and (ii) translate smoothly from LR to High Resolution (HR) space. To address these issues, we propose a novel Incremental Residual Learning (IRL) framework. In IRL, we first select a typical pre-trained SR network as a master branch. We then sequentially train and add residual branches to the master branch, where each residual branch learns to model the accumulated residual left by all previous branches. We plug state-of-the-art methods into the IRL framework and demonstrate consistent performance improvements on public benchmark datasets, setting a new state of the art for SR at only an approximately 20% increase in training time.
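The abstract outlines a concrete training recipe: freeze a pre-trained master SR network, then fit residual branches one at a time, each to the residual that the master and all previously trained branches leave behind. Below is a minimal PyTorch sketch of that idea; `ResidualBranch`, `train_irl`, and the branch architecture are hypothetical illustrations based only on the abstract, not the authors' implementation.

```python
# Minimal sketch of the IRL idea, assuming a PyTorch SR setup.
# `master` stands in for any pre-trained SR network mapping LR -> HR;
# the branch design and loop below are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBranch(nn.Module):
    """Small branch that predicts a correction in HR space (assumed design)."""
    def __init__(self, channels=3, features=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def train_irl(master, branches, loader, steps=1000, lr=1e-4):
    """Sequentially train residual branches on a frozen master network.

    Branch k is fit to the residual still left after the master's
    prediction plus the corrections of all earlier (frozen) branches.
    """
    master.eval()
    for p in master.parameters():
        p.requires_grad_(False)

    trained = []
    for branch in branches:
        opt = torch.optim.Adam(branch.parameters(), lr=lr)
        for _, (lr_img, hr_img) in zip(range(steps), loader):
            with torch.no_grad():
                pred = master(lr_img)          # base HR estimate
                for b in trained:              # add frozen corrections
                    pred = pred + b(pred)
            residual = hr_img - pred           # accumulated residual target
            loss = nn.functional.l1_loss(branch(pred), residual)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Freeze the finished branch before training the next one.
        branch.eval()
        for p in branch.parameters():
            p.requires_grad_(False)
        trained.append(branch)
    return trained
```

Because the master and all earlier branches stay frozen, each stage optimizes only one small branch, which is consistent with the modest (roughly 20%) training-time overhead the abstract reports.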
DOI: 10.48550/arxiv.1808.07110