Computation vs. Communication Scaling for Future Transformers on Future Hardware
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Scaling neural network models has delivered dramatic quality gains across ML problems. However, this scaling has increased the reliance on efficient distributed training techniques. Accordingly, as with other distributed computing scenarios, it is important to understand how compute and communication will scale relative to one another as models scale and hardware evolves. A careful study that answers this question can better guide the design of future systems that can efficiently train future large models. To that end, this work provides a comprehensive multi-axial (algorithmic, empirical, hardware evolution) analysis of compute vs. communication (Comp-vs.-Comm) scaling for future Transformer models on future hardware. First, our algorithmic analysis shows that compute generally enjoys an edge over communication as models scale. However, since memory capacity scales more slowly than compute, these trends are being stressed. Next, we quantify this edge by empirically studying how Comp-vs.-Comm scales for future models on future hardware. To avoid profiling numerous Transformer models across many setups, we extract execution regions and project costs using operator models. This allows a spectrum (hundreds) of future model/hardware scenarios to be accurately studied (…)
DOI: 10.48550/arxiv.2302.02825
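
The abstract describes projecting compute and communication costs with operator models rather than profiling every model/hardware combination. As a rough, self-contained illustration of the kind of Comp-vs.-Comm comparison involved (not the paper's actual operator models or data), the Python sketch below estimates per-layer GEMM time against tensor-parallel all-reduce time using simple roofline-style assumptions; all hardware peaks, model sizes, and function names are hypothetical placeholders.

```python
# Illustrative sketch only: roofline-style estimate of compute vs. tensor-parallel
# communication time for one Transformer layer. Hardware numbers are hypothetical.

def transformer_layer_flops(hidden: int, seq: int, batch: int) -> float:
    """Approximate forward-pass FLOPs of one layer's GEMMs
    (attention projections + 4x-hidden MLP); attention-score FLOPs omitted."""
    attn = 4 * 2 * hidden * hidden          # QKV + output projections per token
    mlp = 2 * 2 * hidden * 4 * hidden       # two MLP GEMMs with 4x expansion per token
    return batch * seq * (attn + mlp)

def tensor_parallel_bytes(hidden: int, seq: int, batch: int,
                          tp: int, bytes_per_elem: int = 2) -> float:
    """Approximate bytes a tensor-parallel layer moves in its two activation
    all-reduces (ring all-reduce moves 2*(tp-1)/tp of the data per GPU)."""
    activations = batch * seq * hidden * bytes_per_elem
    return 2 * activations * 2 * (tp - 1) / tp

def comp_vs_comm(hidden, seq, batch, tp, peak_flops, link_bw):
    """Return (compute_seconds, communication_seconds) for one layer,
    assuming perfect utilization of peak compute and link bandwidth."""
    t_comp = transformer_layer_flops(hidden, seq, batch) / tp / peak_flops
    t_comm = tensor_parallel_bytes(hidden, seq, batch, tp) / link_bw
    return t_comp, t_comm

if __name__ == "__main__":
    PEAK_FLOPS = 300e12   # hypothetical 300 TFLOP/s per accelerator
    LINK_BW = 300e9       # hypothetical 300 GB/s per-GPU link bandwidth
    for hidden in (4096, 8192, 16384):  # scale the model width
        t_comp, t_comm = comp_vs_comm(hidden=hidden, seq=2048, batch=1, tp=8,
                                      peak_flops=PEAK_FLOPS, link_bw=LINK_BW)
        print(f"hidden={hidden:6d}  compute={t_comp*1e3:7.2f} ms  "
              f"comm={t_comm*1e3:7.2f} ms  ratio={t_comp/t_comm:5.2f}")
```

Under these simplified assumptions the per-token compute term grows with the square of the hidden size while the communicated activation volume grows only linearly, which mirrors the algorithmic edge of compute over communication that the abstract describes.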