Hierarchical Residual Attention Network for Single Image Super-Resolution

Convolutional neural networks are the most successful models in single image super-resolution. Deeper networks, residual connections, and attention mechanisms have further improved their performance. However, these strategies often improve the reconstruction performance at the expense of considerably increasing the computational cost. This paper introduces a new lightweight super-resolution model based on an efficient method for residual feature and attention aggregation. In order to make an efficient use of the residual features, these are hierarchically aggregated into feature banks for posterior usage at the network output. In parallel, a lightweight hierarchical attention mechanism extracts the most relevant features from the network into attention banks for improving the final output and preventing the information loss through the successive operations inside the network. Therefore, the processing is split into two independent paths of computation that can be simultaneously carried out, resulting in a highly efficient and effective model for reconstructing fine details on high-resolution images from their low-resolution counterparts. Our proposed architecture surpasses state-of-the-art performance in several datasets, while maintaining relatively low computation and memory footprint.
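The two-path idea in the abstract — residual features banked hierarchically for reuse at the output, with a parallel attention path selecting the most relevant of them — can be illustrated with a toy, framework-free sketch. Everything here is a placeholder stand-in (the `block`, `attention`, and fusion functions are hypothetical simplifications, not the paper's actual layers):

```python
import math

def block(x, w):
    # Toy residual block: a scaled transform plus the identity skip connection.
    return [xi + w * xi for xi in x]

def attention(x):
    # Toy attention: softmax over feature magnitudes yields per-feature weights.
    exps = [math.exp(abs(xi)) for xi in x]
    s = sum(exps)
    return [e / s for e in exps]

def hran_sketch(x, weights):
    feature_bank, attention_bank = [], []
    h = x
    for w in weights:
        h = block(h, w)
        feature_bank.append(h)                # hierarchical residual aggregation
        attention_bank.append(attention(h))   # parallel, independent attention path
    # Fuse both banks only at the network output: attention-weighted
    # average of all banked residual features.
    n = len(feature_bank)
    out = [0.0] * len(x)
    for feats, attn in zip(feature_bank, attention_bank):
        for i, (f, a) in enumerate(zip(feats, attn)):
            out[i] += f * a / n
    return out
```

The point of the sketch is structural: the feature bank and attention bank are filled independently as the network deepens, so the two paths could run concurrently, and intermediate features reach the output directly instead of being lost through successive operations.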

Bibliographic Details
Main Authors: Behjati, Parichehr; Rodriguez, Pau; Mehri, Armin; Hupont, Isabelle; Tena, Carles Fernández; Gonzalez, Jordi
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
description Convolutional neural networks are the most successful models in single image super-resolution. Deeper networks, residual connections, and attention mechanisms have further improved their performance. However, these strategies often improve the reconstruction performance at the expense of considerably increasing the computational cost. This paper introduces a new lightweight super-resolution model based on an efficient method for residual feature and attention aggregation. In order to make an efficient use of the residual features, these are hierarchically aggregated into feature banks for posterior usage at the network output. In parallel, a lightweight hierarchical attention mechanism extracts the most relevant features from the network into attention banks for improving the final output and preventing the information loss through the successive operations inside the network. Therefore, the processing is split into two independent paths of computation that can be simultaneously carried out, resulting in a highly efficient and effective model for reconstructing fine details on high-resolution images from their low-resolution counterparts. Our proposed architecture surpasses state-of-the-art performance in several datasets, while maintaining relatively low computation and memory footprint.
doi_str_mv 10.48550/arxiv.2012.04578
format Article
date 2020-12-08
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2012.04578
language eng
recordid cdi_arxiv_primary_2012_04578
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Hierarchical Residual Attention Network for Single Image Super-Resolution