Towards Efficient and Scalable Sharpness-Aware Minimization
Main Authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Recently, Sharpness-Aware Minimization (SAM), which connects the geometry of
the loss landscape to generalization, has demonstrated significant performance
boosts when training large-scale models such as vision transformers. However, the
update rule of SAM requires two sequential (non-parallelizable) gradient
computations at each step, which can double the computational overhead. In this
paper, we propose a novel algorithm, LookSAM, that only periodically computes the
inner gradient ascent, significantly reducing the additional training cost of SAM.
The empirical results illustrate that LookSAM achieves accuracy gains similar to
those of SAM while being tremendously faster: it enjoys computational complexity
comparable to that of first-order optimizers such as SGD or Adam. To further
evaluate the performance and scalability of LookSAM, we incorporate a layer-wise
modification and perform experiments in the large-batch training scenario, which is
more prone to converging to sharp local minima. We are the first to successfully
scale up the batch size when training Vision Transformers (ViTs). With a 64k batch
size, we are able to train ViTs from scratch in minutes while maintaining
competitive performance. |
DOI: | 10.48550/arxiv.2203.02714 |
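
The summary explains why SAM is costly (two sequential gradient computations per update) and the core idea behind LookSAM (recomputing the inner gradient ascent only periodically and reusing it otherwise). The following is a minimal sketch of that contrast on a toy quadratic loss in plain NumPy. The toy loss, the hyperparameter values, and the simple "reuse the stale perturbation" rule are illustrative assumptions chosen for brevity, not the authors' reference implementation, which uses a more refined reuse rule.

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T A w with analytic gradient A w (illustrative assumption).
A = np.diag([10.0, 1.0])  # ill-conditioned 2-D problem


def loss(w):
    return 0.5 * w @ A @ w


def grad(w):
    return A @ w


RHO, LR, K = 0.05, 0.1, 5  # ascent radius, learning rate, refresh period (illustrative values)


def sam_step(w):
    """Vanilla SAM update: two sequential, non-parallelizable gradient evaluations."""
    g = grad(w)                                   # 1st gradient: inner ascent direction
    eps = RHO * g / (np.linalg.norm(g) + 1e-12)   # perturb weights towards higher loss
    return w - LR * grad(w + eps)                 # 2nd gradient at the perturbed point


def looksam_step(w, state, t):
    """LookSAM-style update (sketch): refresh the inner ascent only every K steps
    and reuse the stale perturbation otherwise, so most steps cost one gradient."""
    if t % K == 0:
        g = grad(w)
        state["eps"] = RHO * g / (np.linalg.norm(g) + 1e-12)
    return w - LR * grad(w + state["eps"])        # single gradient on non-refresh steps


w_sam = w_look = np.array([1.0, 1.0])
state = {"eps": np.zeros(2)}
for t in range(100):
    w_sam = sam_step(w_sam)
    w_look = looksam_step(w_look, state, t)
print(f"final loss  SAM: {loss(w_sam):.2e}   LookSAM: {loss(w_look):.2e}")
```

With the refresh period K = 5 used above, four out of every five LookSAM steps need only a single gradient evaluation, which is the source of the near-first-order cost described in the summary.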