Swift Parameter-free Attention Network for Efficient Super-Resolution
Format: Article
Language: English
Online access: Order full text
Abstract: Single Image Super-Resolution (SISR) is a crucial task in low-level computer
vision, aiming to reconstruct high-resolution images from low-resolution
counterparts. Conventional attention mechanisms have significantly improved
SISR performance but often result in complex network structures and a large
number of parameters, leading to slow inference and large model sizes. To
address this issue, we propose the Swift Parameter-free Attention Network
(SPAN), a highly efficient SISR model that balances parameter count, inference
speed, and image quality. SPAN employs a novel parameter-free attention
mechanism, which leverages symmetric activation functions and residual
connections to enhance high-contribution information and suppress redundant
information. Our theoretical analysis demonstrates the effectiveness of this
design in achieving the attention mechanism's purpose. We evaluate SPAN on
multiple benchmarks, showing that it outperforms existing efficient
super-resolution models in terms of both image quality and inference speed,
achieving a significant quality-speed trade-off. This makes SPAN highly
suitable for real-world applications, particularly in resource-constrained
scenarios. Notably, we won first place in both the overall performance track
and the runtime track of the NTIRE 2024 efficient super-resolution challenge.
Our code and models are made publicly available at
https://github.com/hongyuanyu/SPAN.
DOI: 10.48550/arxiv.2311.12770
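
The abstract describes the core idea only at a high level: attention weights are computed from the block's own feature maps via a symmetric activation function and combined with a residual connection, so the attention branch adds no learnable parameters. The following is a minimal PyTorch sketch of such a block; the class name SymmetricAttentionBlock, the two-convolution body, and the zero-centered sigmoid activation are illustrative assumptions, not taken from the released SPAN code (see the repository linked above for the authors' implementation).

```python
# Minimal sketch of a parameter-free attention block, assuming PyTorch.
# All names and layer choices below are illustrative, not the official SPAN design.
import torch
import torch.nn as nn


class SymmetricAttentionBlock(nn.Module):
    """Residual block whose attention map is computed without extra parameters.

    The attention weights are derived directly from the block's own feature
    maps through a symmetric (zero-centered) activation, so the attention
    branch contributes no additional learnable parameters.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.SiLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        # Zero-centered sigmoid is odd around the origin, so the attention
        # weight grows in magnitude with the feature magnitude regardless of
        # sign, emphasizing high-contribution responses and damping weak ones.
        attn = torch.sigmoid(feat) - 0.5
        # Residual connection re-injects the input; the parameter-free
        # attention map then rescales the combined features.
        return (feat + x) * attn


if __name__ == "__main__":
    block = SymmetricAttentionBlock(channels=48)
    lr_features = torch.randn(1, 48, 64, 64)
    print(block(lr_features).shape)  # torch.Size([1, 48, 64, 64])
```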