Randomized Sharpness-Aware Training for Boosting Computational Efficiency in Deep Learning
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: By driving models to converge to flat minima, sharpness-aware learning algorithms (such as SAM) have been shown to achieve state-of-the-art performance. However, these algorithms generally incur one extra forward-backward propagation per training iteration, which substantially increases the computational cost, especially for large-scale models. To this end, we propose a simple yet efficient training scheme, called Randomized Sharpness-Aware Training (RST). At each iteration, an RST optimizer performs a Bernoulli trial to choose randomly between the base algorithm (SGD) and the sharpness-aware algorithm (SAM), with a probability set by a predefined scheduling function. Because base steps are mixed in, the total number of propagation pairs can be greatly reduced. We also give a theoretical analysis of the convergence of RST. We then empirically study the computation cost and effect of various types of scheduling functions, and provide guidance on choosing appropriate scheduling functions. Furthermore, we extend RST to a general framework (G-RST), in which the degree of sharpness regularization can be adjusted freely for any scheduling function. We show that G-RST can outperform SAM in most cases while saving 50% of the extra computation cost.
DOI: 10.48550/arxiv.2203.09962
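As a rough illustration of the scheme described in the abstract, the following Python sketch performs a Bernoulli trial at each iteration to decide between a plain base step and a sharpness-aware step, with the per-iteration probability supplied by a scheduling function. This is not the authors' reference implementation; the names `rst_train`, `sgd_step`, `sam_step`, and `schedule` are hypothetical placeholders.

```python
import random

def rst_train(model, batches, sgd_step, sam_step, schedule):
    """One pass of randomized sharpness-aware training (illustrative sketch).

    sgd_step(model, batch): one base update, costing one forward-backward pair.
    sam_step(model, batch): one sharpness-aware update, costing two pairs.
    schedule(t): probability in [0, 1] of taking a SAM step at iteration t.
    """
    for t, batch in enumerate(batches):
        p = schedule(t)              # probability from the scheduling function
        if random.random() < p:      # Bernoulli trial with success probability p
            sam_step(model, batch)   # sharpness-aware update (extra propagation pair)
        else:
            sgd_step(model, batch)   # plain base update

# Example: a constant schedule p = 0.5 takes a SAM step on roughly half of the
# iterations, so about half of the extra forward-backward cost is avoided.
constant_schedule = lambda t: 0.5
```

With a constant schedule of 0.5, roughly half of the iterations fall back to the cheaper base step, which matches the reported saving of 50% of the extra computation cost relative to always running SAM.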