Optimized Spatial Architecture Mapping Flow for Transformer Accelerators
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Recent innovations in Transformer-based large language models have significantly advanced the field of general-purpose neural language understanding and generation. With billions of trainable parameters, deploying these large models relies on high-performance hardware accelerators to efficiently deliver the required computation. Spatial architectures, such as TPUs, offer a promising solution for accelerating computation-intensive workloads. However, the design process for existing spatial architectures is predominantly manual, and it often involves time-consuming redesigns for new applications and new problem dimensions, which greatly limits the development of optimally designed accelerators for Transformer models. To address these challenges, we propose SAMT (Spatial Architecture Mapping for Transformers), a comprehensive framework that optimizes the dataflow mapping of Transformer inference workloads onto spatial accelerators. We propose dynamic operator fusion schemes for Transformer models and co-search the optimal dataflow mapping strategies for spatial accelerators. We demonstrate the effectiveness of SAMT in improving the performance of spatial accelerators for Transformer models: compared to traditional spatial accelerator designs, SAMT reduces inference latency by 12% to 91% and energy consumption by 3% to 23% for the evaluated Transformer models across edge, mobile, and cloud settings.
DOI: 10.48550/arxiv.2410.07407
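
To make the co-search idea from the abstract concrete, the following is a minimal, illustrative sketch of how a joint search over operator fusion schemes and dataflow mappings could be structured: it enumerates a few hypothetical fusion schemes for a Transformer attention block and a few tile-shape mappings, scores each combination with a toy latency/energy model, and keeps the best. All operator names, tile sizes, and cost constants below are assumptions made up for illustration; they are not taken from the SAMT paper, which defines its own fusion schemes, mapping space, and cost models.

```python
# Illustrative sketch only: brute-force co-search over hypothetical operator
# fusion schemes and dataflow mappings, scored by a toy cost model.
from itertools import product

# Hypothetical fusion schemes for an attention block: each scheme lists the
# groups of operators executed as one fused kernel (fewer groups = fewer
# intermediate tensors spilled to DRAM).
FUSION_SCHEMES = {
    "unfused":        [["qk_matmul"], ["softmax"], ["pv_matmul"]],
    "fuse_softmax":   [["qk_matmul", "softmax"], ["pv_matmul"]],
    "fuse_attention": [["qk_matmul", "softmax", "pv_matmul"]],
}

# Hypothetical dataflow mappings: tile shapes assigned to the spatial array.
MAPPINGS = [
    {"tile_m": 16, "tile_n": 16},
    {"tile_m": 32, "tile_n": 4},
    {"tile_m": 8,  "tile_n": 8},
]

def toy_cost(scheme, mapping, seq_len=2048, d_model=1024, num_pes=256):
    """Toy latency/energy model: the tile shape sets PE utilization, and each
    unfused group spills an intermediate tensor to DRAM. Purely illustrative."""
    groups = FUSION_SCHEMES[scheme]
    macs = 2 * seq_len * seq_len * d_model                 # rough MAC count
    util = min(mapping["tile_m"] * mapping["tile_n"], num_pes) / num_pes
    compute_cycles = macs / (num_pes * util)
    dram_bytes = len(groups) * seq_len * seq_len * 2       # fp16 spills per group
    mem_cycles = dram_bytes / 64                           # toy bandwidth: 64 B/cycle
    latency = compute_cycles + mem_cycles
    energy = macs * 1.0 + dram_bytes * 100.0               # DRAM access >> one MAC
    return latency, energy

def co_search():
    """Exhaustively co-search fusion scheme and mapping, minimizing latency."""
    best = None
    for scheme, mapping in product(FUSION_SCHEMES, MAPPINGS):
        latency, energy = toy_cost(scheme, mapping)
        if best is None or (latency, energy) < (best[0], best[1]):
            best = (latency, energy, scheme, mapping)
    return best

if __name__ == "__main__":
    latency, energy, scheme, mapping = co_search()
    print(f"best scheme={scheme}, mapping={mapping}, "
          f"latency={latency:.0f} cycles, energy={energy:.2e} units")
```

In a real framework the fusion candidates, mapping space, and cost model would be derived from the Transformer workload and the hardware description rather than fixed constants, and the search would typically be pruned or heuristic rather than exhaustive.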