Improving Super-Resolution Methods via Incremental Residual Learning

Recently, Convolutional Neural Networks (CNNs) have shown promising performance in super-resolution (SR). However, these methods operate primarily on Low Resolution (LR) inputs for memory efficiency, which, as we demonstrate, limits their ability to (i) model high-frequency information and (ii) smoothly translate from LR to High Resolution (HR) space. To this end, we propose a novel Incremental Residual Learning (IRL) framework that addresses these issues. In IRL, we first select a typical pre-trained SR network as a master branch. Next, we sequentially train and add residual branches to the master branch, where each residual branch is learned to model the accumulated residuals of all previous branches. We plug state-of-the-art methods into the IRL framework and demonstrate consistent performance improvements on public benchmark datasets, setting a new state of the art for SR at only an approximately 20% increase in training time.
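
The training scheme described in the abstract can be made concrete with a short sketch. The following PyTorch code is a minimal illustration of the IRL idea, not the authors' implementation: the ResidualBranch architecture, the L1 loss, and the choice to feed each branch the current accumulated HR prediction are all assumptions made for this example.

import torch
import torch.nn as nn

class ResidualBranch(nn.Module):
    # A small CNN that predicts, in HR space, the residual left over by the
    # master branch and all previously trained branches (assumed form).
    def __init__(self, channels=3, features=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def train_irl(master, branches, loader, steps_per_branch=1000):
    # Sequentially train each residual branch on the accumulated residual.
    # `master` is a frozen, pre-trained SR network mapping LR -> HR images.
    master.eval()
    for k, branch in enumerate(branches):
        opt = torch.optim.Adam(branch.parameters(), lr=1e-4)
        for step, (lr_img, hr_img) in enumerate(loader):
            if step >= steps_per_branch:
                break
            with torch.no_grad():                  # earlier stages stay frozen
                pred = master(lr_img)              # master's HR prediction
                for prev in branches[:k]:          # add already-trained branches
                    pred = pred + prev(pred)
            residual = hr_img - pred               # what the earlier stages missed
            loss = nn.functional.l1_loss(branch(pred), residual)
            opt.zero_grad()
            loss.backward()
            opt.step()

def super_resolve(master, branches, lr_img):
    # Inference: master prediction plus every branch's residual correction.
    with torch.no_grad():
        pred = master(lr_img)
        for branch in branches:
            pred = pred + branch(pred)
    return pred

In this reading, each new branch sees the current reconstruction and is fitted to whatever error the earlier stages left behind, matching the abstract's notion of modeling the accumulated residuals of all previous branches; the paper's exact branch inputs, losses, and training schedule may differ.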

Bibliographic Details
Main Authors: Aadil, Muneeb; Rahim, Rafia; Hussain, Sibt ul
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
creator Aadil, Muneeb; Rahim, Rafia; Hussain, Sibt ul
description Recently, Convolutional Neural Networks (CNNs) have shown promising performance in super-resolution (SR). However, these methods operate primarily on Low Resolution (LR) inputs for memory efficiency, which, as we demonstrate, limits their ability to (i) model high-frequency information and (ii) smoothly translate from LR to High Resolution (HR) space. To this end, we propose a novel Incremental Residual Learning (IRL) framework that addresses these issues. In IRL, we first select a typical pre-trained SR network as a master branch. Next, we sequentially train and add residual branches to the master branch, where each residual branch is learned to model the accumulated residuals of all previous branches. We plug state-of-the-art methods into the IRL framework and demonstrate consistent performance improvements on public benchmark datasets, setting a new state of the art for SR at only an approximately 20% increase in training time.
doi 10.48550/arxiv.1808.07110
format Article
creationdate 2018-08-21
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
language eng
recordid cdi_arxiv_primary_1808_07110
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Improving Super-Resolution Methods via Incremental Residual Learning
url https://arxiv.org/abs/1808.07110