Training With Data Dependent Dynamic Learning Rates
Main authors: | , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Recently, many first- and second-order variants of SGD have been proposed to
facilitate training of Deep Neural Networks (DNNs). A common limitation of
these works stems from the fact that they use the same learning rate across all
instances present in the dataset. This setting is widely adopted under the
assumption that loss functions for each instance are similar in nature, and
hence, a common learning rate can be used. In this work, we relax this
assumption and propose an optimization framework which accounts for differences
in loss function characteristics across instances. More specifically, our
optimizer learns a dynamic learning rate for each instance present in the
dataset. Learning a dynamic learning rate for each instance allows our
optimization framework to focus on different modes of the training data during
optimization. When applied to an image classification task, across different
CNN architectures, learning dynamic learning rates leads to consistent gains
over standard optimizers. When applied to a dataset containing corrupt
instances, our framework reduces the learning rates on noisy instances and
improves over the state of the art. Finally, we show that our optimization
framework can be used for personalization of a machine learning model towards a
known targeted data distribution. |
---|---|
DOI: | 10.48550/arxiv.2105.13464 |
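
The abstract describes per-instance learning rates only at a high level, so the sketch below is a minimal illustration of the general idea under stated assumptions, not the paper's algorithm. It keeps one learning rate per training instance and scales each gradient step by that instance's rate; the rate-update rule shown (shrinking an instance's rate when its loss stays high, as a crude stand-in for down-weighting corrupt examples) is a hypothetical placeholder, whereas the paper learns the rates dynamically from data.

```python
# Minimal sketch (not the paper's method): per-instance learning rates for
# SGD on a toy linear least-squares problem with a few corrupted labels.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data; instances 0..4 get heavy label noise ("corrupt").
n, d = 100, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)
y[:5] += 5.0 * rng.normal(size=5)      # corrupted instances

w = np.zeros(d)
eta = np.full(n, 0.1)                  # one learning rate per instance

for epoch in range(50):
    for i in rng.permutation(n):
        pred = X[i] @ w
        loss_i = 0.5 * (pred - y[i]) ** 2
        grad_i = (pred - y[i]) * X[i]
        # Instance-specific step: gradient scaled by this instance's rate.
        w -= eta[i] * grad_i
        # Placeholder rate update (hypothetical): shrink the rate for
        # instances whose loss stays high; the paper instead learns eta_i.
        eta[i] *= 0.99 if loss_i > 1.0 else 1.0

print("learned w:", np.round(w, 2))
print("mean rate on corrupt instances:", eta[:5].mean())
print("mean rate on clean instances:", eta[5:].mean())
```

With this placeholder rule, the corrupted instances tend to keep higher losses and therefore end up with smaller rates than the clean ones, loosely mirroring the noisy-data behaviour the abstract describes.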