Distributed Extra-gradient with Optimal Complexity and Communication Guarantees
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local stochastic dual vectors. This setting includes a broad range of important problems, from distributed convex minimization to min-max optimization and games. Extra-gradient, the de facto algorithm for monotone VI problems, was not designed to be communication-efficient. To address this, we propose quantized generalized extra-gradient (Q-GenX), an unbiased and adaptive compression method tailored to solving VIs. We provide an adaptive step-size rule that adapts to the noise profile at hand, achieving a fast rate of ${\mathcal O}(1/T)$ under relative noise and an order-optimal ${\mathcal O}(1/\sqrt{T})$ under absolute noise, and we show that distributed training accelerates convergence. Finally, we validate our theoretical results with real-world experiments, training generative adversarial networks on multiple GPUs.
DOI: 10.48550/arxiv.2308.09187
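The abstract describes extra-gradient updates driven by compressed (quantized) operator feedback. The sketch below illustrates that idea on a toy bilinear saddle-point VI with an unbiased stochastic-rounding quantizer. The problem instance, the quantizer, and the fixed step size are all illustrative assumptions; they are not the paper's actual Q-GenX compressor or its adaptive step-size rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

def operator(z):
    # Monotone operator of the toy bilinear saddle problem min_x max_y x^T A y:
    # F(x, y) = (A y, -A^T x). Illustrative instance, not from the paper.
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

def quantize(v, levels=16):
    # Unbiased stochastic rounding onto a uniform grid, so E[quantize(v)] = v.
    # Hypothetical stand-in for the paper's adaptive Q-GenX compressor.
    scale = np.max(np.abs(v)) + 1e-12
    u = v / scale * levels
    low = np.floor(u)
    low += rng.random(v.shape) < (u - low)  # round up w.p. the fractional part
    return low / levels * scale

z = rng.standard_normal(2 * n)
eta = 0.05  # fixed step size for illustration; the paper uses an adaptive rule

for _ in range(2000):
    g = quantize(operator(z))                  # extrapolation ("look-ahead") step
    z_half = z - eta * g
    z = z - eta * quantize(operator(z_half))   # update using look-ahead feedback

print("residual ||F(z)||:", np.linalg.norm(operator(z)))
```

Note that stochastic rounding on a grid scaled by the vector's magnitude produces error proportional to that magnitude, i.e., relative noise, which is the regime in which the abstract reports the fast ${\mathcal O}(1/T)$ rate.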