Interpolatron: Interpolation or Extrapolation Schemes to Accelerate Optimization for Deep Neural Networks
Format: | Article |
Language: | English |
Abstract: | In this paper we explore acceleration techniques for large-scale nonconvex
optimization problems, with a special focus on deep neural networks. The
extrapolation scheme is a classical approach for accelerating stochastic
gradient descent in convex optimization, but it typically does not work well
for nonconvex optimization. Alternatively, we propose an interpolation
scheme to accelerate nonconvex optimization and call the method Interpolatron.
We explain the motivation behind Interpolatron and conduct a thorough empirical
analysis. Empirical results on very deep DNNs (e.g., 98-layer and 200-layer
ResNets) on CIFAR-10 and ImageNet show that Interpolatron can converge
much faster than state-of-the-art methods such as SGD with momentum and
Adam. Furthermore, Anderson's acceleration, in which the mixing coefficients are
computed by least-squares estimation, can also be used to improve
performance. Both Interpolatron and Anderson's acceleration are easy to
implement and tune. We also show that Interpolatron has a linear convergence
rate under certain regularity assumptions. |
DOI: | 10.48550/arxiv.1805.06753 |
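The abstract mentions Anderson's acceleration with mixing coefficients obtained by least-squares estimation. The paper's exact Interpolatron/Anderson variant is not specified in this record, so the following is only a minimal, generic sketch of Anderson acceleration applied to a gradient-descent fixed-point map; the window size `m`, the toy quadratic problem, and the names `anderson_accelerate` and `g` are illustrative assumptions, not the paper's implementation.

```python
# Generic Anderson acceleration sketch (not the paper's Interpolatron method).
# The fixed-point map g(x) = x - lr * grad(x) is a plain gradient-descent step;
# the least-squares mixing of past residuals is what the abstract alludes to.
import numpy as np

def anderson_accelerate(g, x0, m=5, iters=50):
    """Accelerate the fixed-point iteration x <- g(x) with window size m."""
    x = x0
    xs, gs = [], []          # histories of iterates and of g(iterates)
    for _ in range(iters):
        gx = g(x)
        xs.append(x)
        gs.append(gx)
        if len(xs) > m + 1:  # keep at most m+1 past points
            xs.pop(0); gs.pop(0)
        if len(xs) == 1:
            x = gx           # plain fixed-point step on the first iteration
            continue
        # Residuals f_i = g(x_i) - x_i, mixed via least-squares coefficients
        F = np.array([gi - xi for xi, gi in zip(xs, gs)])   # shape (k, d)
        dF = np.diff(F, axis=0).T                           # residual differences, (d, k-1)
        gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)  # least-squares mixing weights
        dG = np.diff(np.array(gs), axis=0).T                # differences of g-values, (d, k-1)
        x = gs[-1] - dG @ gamma                             # mixed (accelerated) iterate
    return x

# Usage on a toy quadratic: minimize 0.5 * x^T A x - b^T x
A = np.diag(np.linspace(1.0, 50.0, 20))
b = np.ones(20)
grad = lambda x: A @ x - b
g = lambda x: x - 0.02 * grad(x)      # gradient-descent fixed-point map
x_star = anderson_accelerate(g, np.zeros(20))
print(np.linalg.norm(grad(x_star)))   # residual norm; acceleration should drive this toward 0
```

Solving the small least-squares problem over residual differences is the standard way to obtain the mixing weights; only a window of `m` past iterates is kept, so the per-step overhead stays negligible compared with a gradient evaluation.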