Towards Lightweight Transformer Via Group-Wise Transformation for Vision-and-Language Tasks

Despite its exciting performance, the Transformer is criticized for its excessive parameter count and computation cost. However, compressing the Transformer remains an open problem due to the internal complexity of its layer designs, i.e., Multi-Head Attention (MHA) and Feed-Forward Network (FFN). To address...
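To illustrate the parameter savings the title alludes to, here is a minimal sketch (not the authors' implementation) of a group-wise linear transformation: a d-dimensional feature is split into g groups, each transformed by an independent (d/g) × (d/g) weight, cutting parameters by a factor of g relative to a dense d × d layer. The dimensions d = 768 and g = 8 are illustrative assumptions.

```python
# Illustrative sketch (assumed, not the paper's exact method): compare the
# parameter count of a dense linear layer against a group-wise one.

def dense_params(d: int) -> int:
    # A dense d x d linear map has d * d weights (bias ignored).
    return d * d

def groupwise_params(d: int, g: int) -> int:
    # Split the d channels into g groups; each group gets its own
    # (d/g) x (d/g) weight, so the total is g * (d/g)^2 = d*d / g.
    assert d % g == 0, "d must be divisible by g"
    return g * (d // g) ** 2

d, g = 768, 8  # hypothetical hidden size and group count
print(dense_params(d))                              # 589824
print(groupwise_params(d, g))                       # 73728
print(dense_params(d) // groupwise_params(d, g))    # 8
```

The reduction factor equals the number of groups g, at the cost of removing cross-group interactions, which grouped designs typically restore with a cheap mixing step.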

Detailed description

Bibliographic details
Published in: IEEE Transactions on Image Processing, 2022, Vol. 31, p. 3386-3398
Authors: Luo, Gen; Zhou, Yiyi; Sun, Xiaoshuai; Wang, Yan; Cao, Liujuan; Wu, Yongjian; Huang, Feiyue; Ji, Rongrong
Format: Article
Language: English