Understanding and Improving Layer Normalization
Saved in:
Main Authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Layer normalization (LayerNorm) is a technique to normalize the distributions of intermediate layers. It enables smoother gradients, faster training, and better generalization accuracy. However, it is still unclear where its effectiveness stems from. In this paper, our main contribution is to take a step further in understanding LayerNorm. Many previous studies believe that the success of LayerNorm comes from forward normalization. Unlike them, we find that the derivatives of the mean and variance are more important than forward normalization, since they re-center and re-scale backward gradients. Furthermore, we find that the parameters of LayerNorm, including the bias and gain, increase the risk of over-fitting and do not work in most cases. Experiments show that a simple version of LayerNorm (LayerNorm-simple) without the bias and gain outperforms LayerNorm on four datasets. It obtains state-of-the-art performance on En-Vi machine translation. To address the over-fitting problem, we propose a new normalization method, Adaptive Normalization (AdaNorm), which replaces the bias and gain with a new transformation function. Experiments show that AdaNorm achieves better results than LayerNorm on seven out of eight datasets. |
DOI: | 10.48550/arxiv.1911.07013 |
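As a rough illustration of the distinction the abstract draws between LayerNorm and LayerNorm-simple, the NumPy sketch below normalizes each hidden vector to zero mean and unit variance, once with the learned bias and gain and once without them. The function names, epsilon constant, and toy shapes are illustrative assumptions and are not taken from the paper's implementation.

```python
import numpy as np

def layer_norm(x, gain, bias, eps=1e-6):
    # Standard LayerNorm: normalize each hidden vector to zero mean and
    # unit variance, then apply the learned gain and bias parameters.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gain * (x - mean) / np.sqrt(var + eps) + bias

def layer_norm_simple(x, eps=1e-6):
    # LayerNorm-simple, as described in the abstract: the same
    # normalization with the bias and gain removed entirely.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# Toy usage on a batch of hidden vectors (batch_size x hidden_size).
x = np.random.randn(4, 8)
gain, bias = np.ones(8), np.zeros(8)
print(layer_norm(x, gain, bias).std(axis=-1))  # roughly 1 per row
print(layer_norm_simple(x).std(axis=-1))       # roughly 1 per row
```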