Answer Fast: Accelerating BERT on the Tensor Streaming Processor
Saved in:

Main authors: |  |
Format: | Article |
Language: | eng |
Keywords: |  |
Online access: | Order full text |
Summary: | Transformers have become a predominant machine learning workload: they are
not only the de facto standard for natural language processing tasks, but they
are also being deployed in other domains such as vision and speech recognition.
Many transformer-based applications are real-time systems such as
machine translation and web search. These real-time systems often come with
strict end-to-end inference latency requirements. Unfortunately, while the
majority of the transformer computation comes from matrix multiplications,
transformers also include several non-linear components that tend to become the
bottleneck during inference. In this work, we accelerate the inference of
BERT models on the Tensor Streaming Processor. By carefully fusing all the
non-linear components with the matrix multiplication components, we are able to
efficiently utilize the on-chip matrix multiplication units, resulting in a
deterministic tail latency of 130 $\mu$s for a batch-1 inference through
BERT-base, which is 6x faster than the current state of the art. |
DOI: | 10.48550/arxiv.2206.11062 |
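The summary's central idea is fusing the non-linear components of BERT (e.g. GELU in the feed-forward block) with the adjacent matrix multiplications so the activation does not run as a separate, bottlenecking pass. The sketch below is only a minimal NumPy illustration of that generic fusion pattern, not the paper's TSP implementation; the tile size, shapes, and function names are assumptions chosen for illustration.

```python
# Illustrative sketch only: fuses a non-linear activation (GELU) into the same
# tiled pass as the first feed-forward matmul, so the intermediate activation
# is never materialized and re-read as a separate full-tensor step.
# Shapes follow BERT-base (hidden 768, FFN 3072); this is NOT Groq's TSP kernel.
import numpy as np

HIDDEN, FFN = 768, 3072  # BERT-base feed-forward dimensions

def gelu(x: np.ndarray) -> np.ndarray:
    # tanh approximation of GELU, as used in BERT
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def ffn_unfused(x, w1, b1, w2, b2):
    # Two separate stages: the full intermediate tensor `h` is materialized,
    # then a second pass applies the non-linearity before the next matmul.
    h = x @ w1 + b1          # matmul 1
    h = gelu(h)              # standalone non-linear pass (the bottleneck stage)
    return h @ w2 + b2       # matmul 2

def ffn_fused(x, w1, b1, w2, b2, tile=256):
    # "Fused" version: the activation is applied tile-by-tile right after each
    # slice of the first matmul, so the non-linearity is interleaved with the
    # matmul schedule instead of running as its own full pass.
    out = np.zeros((x.shape[0], w2.shape[1]), dtype=x.dtype)
    for j in range(0, FFN, tile):
        h_tile = gelu(x @ w1[:, j:j+tile] + b1[j:j+tile])  # matmul + activation per tile
        out += h_tile @ w2[j:j+tile, :]                     # accumulate into matmul 2
    return out + b2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x  = rng.standard_normal((1, HIDDEN)).astype(np.float32)        # batch-1 token
    w1 = (0.02 * rng.standard_normal((HIDDEN, FFN))).astype(np.float32)
    b1 = np.zeros(FFN, dtype=np.float32)
    w2 = (0.02 * rng.standard_normal((FFN, HIDDEN))).astype(np.float32)
    b2 = np.zeros(HIDDEN, dtype=np.float32)
    # Both variants compute the same feed-forward result.
    assert np.allclose(ffn_unfused(x, w1, b1, w2, b2),
                       ffn_fused(x, w1, b1, w2, b2), atol=1e-3)
```

On hardware with dedicated matrix units, the benefit of this arrangement is that the non-linearity never becomes a serial, memory-bound stage of its own; in the NumPy sketch the two variants are merely numerically equivalent, which is the property the assertion checks.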