MADA: Meta-Adaptive Optimizers through hyper-gradient Descent
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Following the introduction of Adam, several novel adaptive optimizers for deep learning have been proposed. These optimizers typically excel in some tasks but may not outperform Adam uniformly across all tasks. In this work, we introduce Meta-Adaptive Optimizers (MADA), a unified optimizer framework that can generalize several known optimizers and dynamically learn the most suitable one during training. The key idea in MADA is to parameterize the space of optimizers and dynamically search through it using hyper-gradient descent during training. We empirically compare MADA to other popular optimizers on vision and language tasks, and find that MADA consistently outperforms Adam and other popular optimizers, and is robust against sub-optimally tuned hyper-parameters. MADA achieves a greater validation performance improvement over Adam compared to other popular optimizers during GPT-2 training and fine-tuning. We also propose AVGrad, a modification of AMSGrad that replaces the maximum operator with averaging, which is more suitable for hyper-gradient optimization. Finally, we provide a convergence analysis to show that parameterized interpolations of optimizers can improve their error bounds (up to constants), hinting at an advantage for meta-optimizers.
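The core mechanism described in the abstract, parameterizing a space of optimizers and adjusting that parameterization with hyper-gradient descent, can be illustrated with a small sketch. The code below is not the paper's MADA algorithm: it interpolates between just two candidate updates (an Adam-style step and a momentum step) with a single coefficient, and all names (`c`, `hyper_lr`, the toy quadratic loss) are illustrative assumptions.

```python
# Minimal sketch (not the paper's MADA): interpolate between two candidate
# update directions with a coefficient c, and adapt c by hyper-gradient descent.
import numpy as np

def grad(w):
    # Gradient of a toy quadratic loss f(w) = 0.5 * ||w||^2.
    return w

rng = np.random.default_rng(0)
w = rng.normal(size=5)

lr, hyper_lr = 0.1, 0.01
beta1, beta2, eps = 0.9, 0.999, 1e-8
m = np.zeros_like(w)          # first moment (shared by both candidates)
v = np.zeros_like(w)          # second moment (used by the Adam-style candidate)
c = 0.5                       # interpolation coefficient, learned online
prev_diff = np.zeros_like(w)  # d(previous update)/dc, used for the hyper-gradient

for t in range(1, 201):
    g = grad(w)

    # Hyper-gradient of the current loss w.r.t. c through the previous step:
    # w_t = w_{t-1} - lr * (c * u_adam + (1 - c) * u_sgd), so
    # dL/dc ≈ g_t . d(w_t)/dc = g_t . (-lr * (u_adam - u_sgd)) from the last step.
    c -= hyper_lr * np.dot(g, prev_diff)
    c = float(np.clip(c, 0.0, 1.0))

    # Two candidate updates: an Adam-style step and a momentum-SGD step.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    u_adam = (m / (1 - beta1**t)) / (np.sqrt(v / (1 - beta2**t)) + eps)
    u_sgd = m

    # Interpolated update and bookkeeping for the next hyper-gradient.
    w = w - lr * (c * u_adam + (1 - c) * u_sgd)
    prev_diff = -lr * (u_adam - u_sgd)

print("final loss:", 0.5 * float(np.dot(w, w)), "learned c:", round(c, 3))
```

In MADA itself the parameterization reportedly spans several known optimizers and more than one coefficient, but the mechanics sketched here are the same: the interpolation coefficients are treated as extra differentiable quantities and updated alongside the model weights.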
DOI: 10.48550/arxiv.2401.08893
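The abstract also introduces AVGrad as AMSGrad with the maximum over past second-moment estimates replaced by averaging. The record does not give the exact formula, so the running-mean accumulator below is an assumption meant only to contrast the two rules; the function names are illustrative.

```python
# Contrast between AMSGrad's max accumulator and an averaging variant in the
# spirit of the AVGrad description; the running mean is my reading of
# "replaces the maximum operator with averaging", not the paper's exact formula.
import numpy as np

def second_moment_amsgrad(v_hat_prev, v_t):
    # AMSGrad keeps the element-wise maximum of all past second-moment
    # estimates, so the effective step size is monotonically non-increasing.
    return np.maximum(v_hat_prev, v_t)

def second_moment_avgrad(v_bar_prev, v_t, t):
    # Averaging variant: running mean of the second-moment estimates,
    # updated incrementally so only the previous mean needs to be stored.
    return v_bar_prev + (v_t - v_bar_prev) / t

v_hat = np.zeros(3)
v_bar = np.zeros(3)
for t, v_t in enumerate([np.array([1.0, 4.0, 0.5]),
                         np.array([0.5, 1.0, 2.0])], start=1):
    v_hat = second_moment_amsgrad(v_hat, v_t)
    v_bar = second_moment_avgrad(v_bar, v_t, t)

print(v_hat)  # element-wise max of the two estimates: 1, 4, 2
print(v_bar)  # running mean of the two estimates: 0.75, 2.5, 1.25
```

A plausible reason averaging suits hyper-gradient optimization better, as the abstract claims, is that a mean depends smoothly on every past estimate, whereas the maximum passes gradient information only through whichever term is currently maximal.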