Fast and Robust Distributed Learning in High Dimension
Format: Article
Language: English
Abstract: Could a gradient aggregation rule (GAR) for distributed machine learning be both robust and fast? This paper answers in the affirmative through multi-Bulyan. Given $n$ workers, $f$ of which are arbitrarily malicious (Byzantine) and $m=n-f$ of which are not, we prove that multi-Bulyan can ensure a strong form of Byzantine resilience, as well as an $\frac{m}{n}$ slowdown compared to averaging, the fastest (but non-Byzantine-resilient) rule for distributed machine learning. When $m \approx n$ (almost all workers are correct), multi-Bulyan reaches the speed of averaging. We also prove that multi-Bulyan's cost in local computation is $O(d)$ (like averaging), an important feature for ML where $d$ commonly reaches $10^9$, while robust alternatives have at least quadratic cost in $d$. Our theoretical findings are complemented with an experimental evaluation which, in addition to supporting the linear $O(d)$ complexity argument, conveys the fact that multi-Bulyan's parallelisability further adds to its efficiency.
DOI: 10.48550/arxiv.1905.04374
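The abstract contrasts plain averaging, the fastest but non-robust GAR, with a robust rule whose local cost remains linear in the dimension $d$. Below is a minimal Python sketch, assuming NumPy and a simplified coordinate-wise trimmed rule standing in for the paper's exact multi-Bulyan; the names average and trimmed_aggregate and the toy data are illustrative assumptions, not from the paper. It only illustrates why a coordinate-wise rule keeps the per-worker cost $O(d)$ while tolerating a few corrupted gradients.

import numpy as np

def average(gradients):
    # Plain averaging: the fastest GAR, but not Byzantine resilient.
    return np.mean(gradients, axis=0)

def trimmed_aggregate(gradients, f):
    # Simplified coordinate-wise trimmed rule (NOT the paper's exact
    # multi-Bulyan): per coordinate, drop the f smallest and f largest
    # values across workers, then average the rest.  The work done per
    # coordinate does not depend on d, so the total cost is linear in d.
    g = np.sort(np.asarray(gradients), axis=0)  # sort each coordinate across workers
    n = g.shape[0]
    assert n > 2 * f, "need more than 2f workers to trim f values on each side"
    return g[f:n - f].mean(axis=0)

# Toy usage: n = 7 workers, d = 5 coordinates, f = 2 Byzantine workers.
rng = np.random.default_rng(0)
grads = rng.normal(size=(7, 5))
grads[:2] += 100.0                  # two corrupted (Byzantine) gradients
print(average(grads))               # dragged far from the honest mean
print(trimmed_aggregate(grads, 2))  # stays close to the honest gradients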