Transformer for Single Image Super-Resolution
Format: Article
Language: English
Abstract: Single image super-resolution (SISR) has made great strides with the development of deep learning. However, most existing studies focus on building ever more complex networks with massive numbers of layers. Recently, more and more researchers have started to explore the application of the Transformer to computer vision tasks, yet the heavy computational cost and high GPU memory occupation of the vision Transformer cannot be ignored. In this paper, we propose a novel Efficient Super-Resolution Transformer (ESRT) for SISR. ESRT is a hybrid model consisting of a Lightweight CNN Backbone (LCB) and a Lightweight Transformer Backbone (LTB). The LCB can dynamically adjust the size of the feature map to extract deep features at low computational cost. The LTB is composed of a series of Efficient Transformers (ET), which have a small GPU memory footprint thanks to the specially designed Efficient Multi-Head Attention (EMHA). Extensive experiments show that ESRT achieves competitive results at low computational cost. Compared with the original Transformer, which occupies 16,057M of GPU memory, ESRT occupies only 4,191M. All code is available at https://github.com/luissen/ESRT.
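The memory saving described above comes from restricting attention to segments of the token sequence rather than computing one full N x N score matrix. The following is a minimal sketch of that segment-wise attention idea, not the authors' exact EMHA implementation; the function name, the `splits` parameter, and the single-head formulation are illustrative assumptions (see the linked repository for the real code):

```python
import torch
import torch.nn.functional as F

def segmented_attention(q, k, v, splits=4):
    """Hypothetical sketch: split Q, K, V along the token axis into
    `splits` segments and attend within each segment independently.

    A full attention over N tokens builds an N x N score matrix; with
    segmentation, each of the `splits` blocks is only (N/splits) square,
    which shrinks the peak memory of the attention map by a factor of
    roughly `splits`.
    """
    outs = []
    for qs, ks, vs in zip(q.chunk(splits, dim=1),
                          k.chunk(splits, dim=1),
                          v.chunk(splits, dim=1)):
        # Scaled dot-product attention within one segment only.
        scores = qs @ ks.transpose(-2, -1) / (qs.shape[-1] ** 0.5)
        outs.append(F.softmax(scores, dim=-1) @ vs)
    # Re-assemble the segments into the original token order.
    return torch.cat(outs, dim=1)

x = torch.randn(2, 64, 32)          # (batch, tokens, channels)
y = segmented_attention(x, x, x)    # same shape as the input
```

The trade-off is that tokens in different segments cannot attend to each other within one such layer, which is acceptable for super-resolution because most useful correlations are local.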
DOI: 10.48550/arxiv.2108.11084