Swift Parameter-free Attention Network for Efficient Super-Resolution

Bibliographic Details
Main authors: Wan, Cheng; Yu, Hongyuan; Li, Zhiqi; Chen, Yihang; Zou, Yajun; Liu, Yuqing; Yin, Xuanwu; Zuo, Kunlong
Format: Article
Language: English
Online access: Order full text
Description: Single Image Super-Resolution (SISR) is a crucial task in low-level computer vision, aiming to reconstruct high-resolution images from low-resolution counterparts. Conventional attention mechanisms have significantly improved SISR performance but often result in complex network structures and a large number of parameters, leading to slow inference and large model size. To address this issue, we propose the Swift Parameter-free Attention Network (SPAN), a highly efficient SISR model that balances parameter count, inference speed, and image quality. SPAN employs a novel parameter-free attention mechanism, which leverages symmetric activation functions and residual connections to enhance high-contribution information and suppress redundant information. Our theoretical analysis demonstrates the effectiveness of this design in achieving the attention mechanism's purpose. We evaluate SPAN on multiple benchmarks, showing that it outperforms existing efficient super-resolution models in both image quality and inference speed, achieving a significant quality-speed trade-off. This makes SPAN highly suitable for real-world applications, particularly in resource-constrained scenarios. Notably, we won first place in both the overall performance track and the runtime track of the NTIRE 2024 efficient super-resolution challenge. Our code and models are publicly available at https://github.com/hongyuanyu/SPAN.
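The core idea of the abstract — an attention map computed from the features themselves via a symmetric activation, with no learned weights, applied multiplicatively alongside a residual connection — can be sketched in a few lines. This is a minimal one-dimensional illustration under assumptions, not the authors' exact SPAN formulation: the function names are invented here, and the choice of tanh as the symmetric activation is illustrative only.

```python
import math

def symmetric_act(v):
    # Illustrative symmetric (odd) activation: features of equal magnitude
    # but opposite sign receive attention weights of equal strength.
    # The paper's exact activation may differ.
    return math.tanh(v)

def parameter_free_attention(features):
    """Sketch of a parameter-free attention step: the attention map is
    derived directly from the features (no learned weights), then applied
    multiplicatively together with a residual connection."""
    attn = [abs(symmetric_act(v)) for v in features]    # no parameters
    return [v * (1.0 + a) for v, a in zip(features, attn)]  # attend + residual
```

Note how large-magnitude (high-contribution) features are amplified proportionally more than near-zero (redundant) ones, which is the enhancement/suppression behavior the abstract describes.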
Date: 2023-11-21
Rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0
DOI: 10.48550/arxiv.2311.12770
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition