Towards Understanding the Importance of Noise in Training Neural Networks
Saved in:
Main Authors: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Summary: | Extensive empirical evidence has corroborated that noise plays a crucial role in the effective and efficient training of neural networks. The theory behind this, however, is still largely unknown. This paper studies the fundamental problem through training a simple two-layer convolutional neural network model. Although training such a network requires solving a nonconvex optimization problem with a spurious local optimum and a global optimum, we prove that perturbed gradient descent and perturbed mini-batch stochastic gradient algorithms, in conjunction with noise annealing, are guaranteed to converge to a global optimum in polynomial time from arbitrary initialization. This implies that the noise enables the algorithms to efficiently escape from the spurious local optimum. Numerical experiments are provided to support our theory. |
---|---|
DOI: | 10.48550/arxiv.1909.03172 |
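
The summary describes perturbed gradient descent with noise annealing escaping a spurious local optimum. Below is a minimal sketch of that idea, assuming a hypothetical one-dimensional nonconvex objective in place of the paper's two-layer convolutional network; the toy `loss`, the step size `lr`, and the geometric noise schedule are illustrative assumptions, not the authors' construction.

```python
import numpy as np

# Minimal sketch of perturbed gradient descent with noise annealing on a
# hypothetical 1-D nonconvex objective. The objective, step size, and
# noise schedule below are illustrative assumptions, not the paper's model.

def loss(w):
    # Toy objective with a spurious local minimum near w ~ -1.6
    # and a global minimum near w ~ 1.8.
    return 0.05 * w**4 - 0.3 * w**2 - 0.1 * w

def grad(w):
    return 0.2 * w**3 - 0.6 * w - 0.1

def perturbed_gd(w0, steps=3000, lr=0.05, sigma0=0.5, anneal=0.998):
    w, sigma = w0, sigma0
    rng = np.random.default_rng(0)
    for _ in range(steps):
        # Perturbed step: a gradient step plus injected Gaussian noise.
        w = w - lr * grad(w) + sigma * rng.standard_normal()
        sigma *= anneal  # noise annealing: the perturbation decays over time
    return w

# Initialized inside the spurious basin, the iterate typically escapes while
# the noise is large and settles near the global minimum (~1.8) as the
# noise level decays toward zero.
print(perturbed_gd(w0=-1.6))
```

The geometric decay plays the role of noise annealing: large early perturbations allow escape from the spurious basin, while the vanishing noise lets the iterate converge near the global optimum.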