One-Point Gradient-Free Methods for Composite Optimization with Applications to Distributed Optimization
Format: Article
Language: English
Abstract: This work is devoted to solving the composite optimization problem with a mixture oracle: for the smooth part of the problem we have access to the gradient, and for the non-smooth part only to a one-point zero-order oracle. For this setup, we present a new method based on the sliding algorithm. Our method makes it possible to separate the oracle complexities and to compute the gradient of one of the functions as rarely as possible. The paper also demonstrates the applicability of the new method to problems of distributed optimization and federated learning. Experimental results confirm the theory.
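To make the "one-point zero-order oracle" concrete: a standard way to build a gradient estimate from a single function evaluation per query is to evaluate the function at a randomly perturbed point and scale the result by the perturbation direction. The sketch below is a minimal illustration of that generic estimator, not the paper's sliding method; the function `f`, the smoothing radius `tau`, and all names are illustrative assumptions.

```python
import numpy as np

def one_point_grad_estimate(f, x, tau, rng):
    """One-point zeroth-order gradient estimate.

    Uses a single function evaluation per call:
        g = (d / tau) * f(x + tau * e) * e,
    where e is drawn uniformly from the unit sphere. This is an
    unbiased estimate of the gradient of a smoothed version of f.
    """
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)  # uniform direction on the unit sphere
    return (d / tau) * f(x + tau * e) * e

# Illustration: average many one-point estimates for f(x) = ||x||^2 / 2,
# whose true gradient at x is x itself. A single estimate is very noisy,
# so we average to see that the mean approaches the true gradient.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x)
x = np.array([1.0, -2.0, 0.5])
g = np.mean(
    [one_point_grad_estimate(f, x, tau=0.1, rng=rng) for _ in range(200_000)],
    axis=0,
)
# g is approximately equal to x (up to Monte Carlo noise)
```

The high variance of this estimator (each sample is scaled by `d / tau`) is exactly why separating oracle complexities matters: calling the cheap exact gradient rarely while querying the noisy zero-order oracle often is the trade-off the abstract describes.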
DOI: 10.48550/arxiv.2107.05951