Minimax Optimization with Smooth Algorithmic Adversaries
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: This paper considers minimax optimization $\min_x \max_y f(x, y)$ in the challenging setting where $f$ can be both nonconvex in $x$ and nonconcave in $y$. Though such optimization problems arise in many machine learning paradigms, including training generative adversarial networks (GANs) and adversarially robust models, many fundamental issues remain in theory, such as the absence of efficiently computable optimality notions and the cyclic or diverging behavior of existing algorithms. Our framework sprouts from the practical consideration that, under a computational budget, the max-player cannot fully maximize $f(x,\cdot)$, since nonconcave maximization is NP-hard in general. We therefore propose a new algorithm for the min-player to play against the smooth algorithms deployed by the adversary (i.e., the max-player) instead of against full maximization. Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles) and to find an appropriate "stationary point" in a polynomial number of iterations. Our framework covers practical settings where the smooth algorithms deployed by the adversary are multi-step stochastic gradient ascent and its accelerated version. We further provide complementary experiments that confirm our theoretical findings and demonstrate the effectiveness of the proposed approach in practice.
DOI: 10.48550/arxiv.2106.01488
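
To make the abstract's idea concrete: if the adversary's response $A(x)$ is produced by a smooth algorithm such as $k$-step gradient ascent, then the composition $g(x) = f(x, A(x))$ is itself differentiable, and the min-player can descend on it directly. The JAX sketch below illustrates this under simplifying assumptions; the payoff `f`, step sizes, and iteration counts are illustrative placeholders, not the paper's construction, and the paper's actual algorithm includes additional machinery beyond plain descent on $g$.

```python
import jax
import jax.numpy as jnp

# Illustrative nonconvex-nonconcave payoff standing in for f(x, y);
# not taken from the paper.
def f(x, y):
    return jnp.sin(x) * jnp.cos(y) + 0.1 * x * y

def adversary_response(x, y0, eta=0.1, k=10):
    """k steps of gradient ascent on f(x, .): the smooth algorithm a
    budget-limited max-player deploys instead of full maximization."""
    y = y0
    for _ in range(k):
        y = y + eta * jax.grad(f, argnums=1)(x, y)
    return y

# Because the adversary's response is a smooth function of x, the
# composition g(x) = f(x, A(x)) is differentiable; the min-player
# differentiates through the k ascent steps via autodiff.
def g(x, y0):
    return f(x, adversary_response(x, y0))

x, y0 = 1.0, 0.0
for _ in range(200):
    x = x - 0.05 * jax.grad(g)(x, y0)
print("approximate stationary point of g:", x)
```

Playing against the algorithm rather than the argmax is what makes each step well defined: the inner loop always terminates after $k$ steps, so $g$ is computable and smooth even though $\max_y f(x, y)$ is NP-hard to evaluate in general.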