SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization
Format: Article
Language: English
Abstract: Transformers have become foundational architectures for both natural language
and computer vision tasks. However, their high computational cost makes them quite
challenging to deploy on resource-constrained devices. This paper investigates the
computational bottleneck modules of efficient transformers, i.e., normalization layers
and attention modules. LayerNorm is commonly used in transformer architectures but is
not computationally friendly because it must compute statistics during inference.
However, replacing LayerNorm with the more efficient BatchNorm in transformers often
leads to inferior performance or even training collapse. To address this problem, we
propose a novel method named PRepBN that progressively replaces LayerNorm with
re-parameterized BatchNorm during training.
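As a rough sketch of this progressive replacement, one can blend the two normalizers with a coefficient that decays from 1 (pure LayerNorm) to 0 (pure BatchNorm) over training. The module name, the linear decay schedule, and the omission of the extra re-parameterization step are assumptions here, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ProgressiveLNtoBN(nn.Module):
    """Minimal sketch: blend LayerNorm and BatchNorm with a decaying weight.

    Assumes a linear decay of `gamma` from 1 to 0 over `total_steps`; the exact
    schedule and the re-parameterized BatchNorm used by PRepBN may differ.
    """

    def __init__(self, dim: int, total_steps: int):
        super().__init__()
        self.ln = nn.LayerNorm(dim)
        self.bn = nn.BatchNorm1d(dim)          # normalizes over the channel dimension
        self.total_steps = total_steps
        self.register_buffer("step", torch.zeros((), dtype=torch.long))

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (B, N, C)
        gamma = max(0.0, 1.0 - self.step.item() / self.total_steps)
        bn_out = self.bn(x.transpose(1, 2)).transpose(1, 2)  # BatchNorm1d expects (B, C, N)
        out = gamma * self.ln(x) + (1.0 - gamma) * bn_out
        if self.training:
            self.step += 1                      # advance the schedule once per training call
        return out
```

Once the blending weight reaches zero the LayerNorm branch can be dropped entirely, leaving only BatchNorm, which uses fixed running statistics at inference and can be folded into adjacent linear layers; this is presumably where the reported latency gain comes from.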
Moreover, we propose a simplified linear attention (SLA) module that is simple yet
effective in achieving strong performance. Extensive experiments on image
classification as well as object detection demonstrate the effectiveness of our
proposed method. For example, our SLAB-Swin obtains $83.6\%$ top-1 accuracy on
ImageNet-1K with $16.2$ms latency, which is $2.4$ms lower than that of Flatten-Swin
while being $0.1\%$ more accurate. We also evaluate our method on language modeling
and obtain comparable performance with lower latency. Code is publicly available at
https://github.com/xinghaochen/SLAB and
https://github.com/mindspore-lab/models/tree/master/research/huawei-noah/SLAB.
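To illustrate what a linear attention module buys over softmax attention, below is a generic ReLU-kernel linear attention sketch whose cost grows linearly in the token count N (roughly O(N d^2) instead of O(N^2 d)). The class name, the ReLU feature map, and the omission of any extra components are assumptions, so this should be read as an approximation of the SLA idea rather than its exact definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplifiedLinearAttentionSketch(nn.Module):
    """Generic ReLU-kernel linear attention; an approximation of the SLA idea."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N, C)
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)               # each: (B, H, N, d)
        q, k = F.relu(q), F.relu(k)                        # non-negative kernel feature map
        kv = torch.einsum("bhnd,bhne->bhde", k, v)         # sum_j k_j v_j^T, O(N d^2)
        z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6)
        out = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
        return self.proj(out.transpose(1, 2).reshape(B, N, C))


# Example: a batch of 2 images, 196 tokens, 96 channels, 3 heads (Swin-T-like sizes).
y = SimplifiedLinearAttentionSketch(dim=96, num_heads=3)(torch.randn(2, 196, 96))
```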
DOI: 10.48550/arxiv.2405.11582