Iterated Vector Fields and Conservatism, with Applications to Federated Learning
Saved in:
Main authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | We study whether iterated vector fields (vector fields composed with
themselves) are conservative. We give explicit examples of vector fields for
which this self-composition preserves conservatism. Notably, this includes
gradient vector fields of loss functions associated with some generalized
linear models. As we show, characterizing the set of vector fields satisfying
this condition leads to non-trivial geometric questions. In the context of
federated learning, we show that when clients have loss functions whose
gradients satisfy this condition, federated averaging is equivalent to gradient
descent on a surrogate loss function. We leverage this to derive novel
convergence results for federated learning. By contrast, we demonstrate that
when the client losses violate this property, federated averaging can yield
behavior which is fundamentally distinct from centralized optimization.
Finally, we discuss theoretical and practical questions our analytical
framework raises for federated learning. |
---|---|
DOI: | 10.48550/arxiv.2109.03973 |
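
As a hedged illustration of the terminology in the abstract above (standard definitions; the symbols $V$, $f$, and $k$ are our notation, not necessarily the paper's), a vector field is conservative when it is the gradient of a scalar potential, and the paper asks when every self-composition of such a field remains conservative:

```latex
% Sketch of the definitions, assuming the standard notions of conservative
% vector fields and k-fold composition; notation is ours.
A vector field $V : \mathbb{R}^n \to \mathbb{R}^n$ is \emph{conservative} if
$V = \nabla f$ for some potential $f : \mathbb{R}^n \to \mathbb{R}$.
Its $k$-fold iterate is
\[
  V^{(k)} \;=\; \underbrace{V \circ V \circ \cdots \circ V}_{k \text{ times}},
\]
and the question studied is: for which $V$ is $V^{(k)}$ conservative for every $k \geq 1$?
```

The federated averaging algorithm mentioned in the abstract alternates local gradient steps on each client with server-side averaging. The following is a minimal, hypothetical sketch of one round in NumPy; the function names (`fedavg_round`, `client_grads`) and the quadratic client losses are illustrative assumptions, not the paper's code, and the sketch only shows the FedAvg mechanics, not the claimed equivalence with gradient descent on a surrogate loss.

```python
# Minimal sketch of federated averaging (FedAvg), assuming full client
# participation and plain local gradient descent. Names are hypothetical.
import numpy as np

def fedavg_round(w, client_grads, k, lr):
    """One FedAvg round: each client runs k local gradient steps from the
    shared iterate w, then the server averages the resulting local models."""
    local_models = []
    for grad in client_grads:              # grad: callable returning a client's gradient
        w_local = w.copy()
        for _ in range(k):                 # k local gradient-descent steps
            w_local -= lr * grad(w_local)
        local_models.append(w_local)
    return np.mean(local_models, axis=0)   # server averaging step

# Example: quadratic client losses f_i(w) = 0.5 * ||A_i w - b_i||^2, whose
# gradients are affine (a simple case related to the generalized linear
# models mentioned in the abstract).
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    A, b = rng.normal(size=(5, 2)), rng.normal(size=5)
    clients.append(lambda w, A=A, b=b: A.T @ (A @ w - b))

w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients, k=5, lr=0.05)
print("FedAvg iterate after 100 rounds:", w)
```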