Learning Variational Inequalities from Data: Fast Generalization Rates under Strong Monotonicity
Format: Article
Language: English
Abstract: Variational inequalities (VIs) are a broad class of optimization problems encompassing machine learning problems ranging from standard convex minimization to more complex scenarios like min-max optimization and computing the equilibria of multi-player games. In convex optimization, strong convexity allows for fast statistical learning rates requiring only $\Theta(1/\epsilon)$ stochastic first-order oracle calls to find an $\epsilon$-optimal solution, rather than the standard $\Theta(1/\epsilon^2)$ calls. In this paper, we explain how one can similarly obtain fast $\Theta(1/\epsilon)$ rates for learning VIs that satisfy strong monotonicity, a generalization of strong convexity. Specifically, we demonstrate that standard stability-based generalization arguments for convex minimization extend directly to VIs when the domain admits a small covering, or when the operator is integrable and suboptimality is measured by potential functions, such as when finding equilibria in multi-player games.
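As a brief gloss on the terminology (standard definitions added for illustration; the notation $F$, $\mathcal{K}$, and $\mu$ is assumed here, not taken from the abstract): given an operator $F$ on a convex set $\mathcal{K} \subseteq \mathbb{R}^d$, the VI problem is to

$$
% Standard VI formulation; notation assumed for illustration, not from the source.
\text{find } u^* \in \mathcal{K} \quad \text{such that} \quad \langle F(u^*),\, u - u^* \rangle \ge 0 \;\; \text{for all } u \in \mathcal{K},
$$

and $F$ is $\mu$-strongly monotone if

$$
% Strong monotonicity; with F = \nabla f this is exactly \mu-strong convexity of f.
\langle F(u) - F(v),\, u - v \rangle \ge \mu\, \|u - v\|^2 \quad \text{for all } u, v \in \mathcal{K}.
$$

When $F = \nabla f$ for a differentiable convex $f$, the second condition reduces to $\mu$-strong convexity and the VI solution is the minimizer of $f$, which is the sense in which strong monotonicity generalizes strong convexity.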
DOI: 10.48550/arxiv.2410.20649