Avoiding Spurious Local Minima in Deep Quadratic Networks
Format: Article
Language: English
Abstract: Despite their practical success, a theoretical understanding of the loss landscape of neural networks has proven challenging due to the high-dimensional, non-convex, and highly nonlinear structure of such models. In this paper, we characterize the training landscape of the mean squared error loss for neural networks with quadratic activation functions. We prove the existence of spurious local minima and saddle points which can be escaped easily with probability one when the number of neurons is greater than or equal to the input dimension and the norm of the training samples is used as a regressor. We prove that deep overparameterized neural networks with quadratic activations benefit from similarly nice landscape properties. Our theoretical results are independent of the data distribution and fill the existing gap in theory for two-layer quadratic neural networks. Finally, we empirically demonstrate convergence to a global minimum for these problems.
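To make the abstract's setting concrete, below is a minimal sketch, not the authors' code, of a two-layer network f(x) = Σ_j v_j (w_jᵀx)² with quadratic activations trained on the MSE loss. Appending ‖x‖² to each input is one plausible reading of "the norm of the training samples is used as a regressor"; the data, targets, and names (`X_aug`, `W`, `v`) are illustrative assumptions.

```python
# A minimal sketch (assumed construction, not the paper's reference code):
# two-layer network with quadratic activations sigma(z) = z^2, MSE loss,
# hidden width m >= input dimension d as the abstract requires.
import torch

torch.manual_seed(0)
d, m, n = 5, 6, 200                    # input dim, hidden width (m >= d), samples
X = torch.randn(n, d)
y = torch.randn(n)                     # synthetic targets for illustration only

# Append the squared sample norm as an extra regressor (assumed reading).
X_aug = torch.cat([X, (X ** 2).sum(dim=1, keepdim=True)], dim=1)

W = (0.1 * torch.randn(d + 1, m)).requires_grad_()  # first-layer weights
v = (0.1 * torch.randn(m)).requires_grad_()         # second-layer weights

opt = torch.optim.Adam([W, v], lr=1e-2)
for step in range(1000):
    opt.zero_grad()
    pred = ((X_aug @ W) ** 2) @ v      # quadratic activation: z -> z^2
    loss = torch.mean((pred - y) ** 2) # mean squared error
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```

On realizable targets, the abstract's result suggests that such a run reaches a global minimum when m ≥ d; the random targets here merely exercise the training loop.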
DOI: 10.48550/arxiv.2001.00098