Growing Cosine Unit: A Novel Oscillatory Activation Function That Can Speedup Training and Reduce Parameters in Convolutional Neural Networks
Format: | Article |
Language: | English |
Abstract: | Convolutional neural networks have been successful in solving many socially
important and economically significant problems. This ability to learn complex
high-dimensional functions hierarchically can be attributed to the use of
nonlinear activation functions. A key discovery that made training deep
networks feasible was the adoption of the Rectified Linear Unit (ReLU)
activation function to alleviate the vanishing gradient problem caused by using
saturating activation functions. Since then, many improved variants of the ReLU
activation have been proposed. However, a majority of activation functions used
today are non-oscillatory and monotonically increasing due to their biological
plausibility. This paper demonstrates that oscillatory activation functions can
improve gradient flow and reduce network size. Two theorems on limits of
non-oscillatory activation functions are presented. A new oscillatory
activation function called the Growing Cosine Unit (GCU), defined as $C(z) = z\cos z$,
that outperforms Sigmoids, Swish, Mish and ReLU on a variety of architectures
and benchmarks is presented. The GCU activation has multiple zeros enabling
single GCU neurons to have multiple hyperplanes in the decision boundary. This
allows single GCU neurons to learn the XOR function without feature
engineering. Experimental results indicate that replacing the activation
function in the convolution layers with the GCU activation function
significantly improves performance on CIFAR-10, CIFAR-100 and Imagenette. |
DOI: | 10.48550/arxiv.2108.12943 |
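The abstract's most striking claim is that, because $C(z) = z\cos z$ crosses zero more than once (at $z = 0$ and at $z = \pi/2 + k\pi$), a single GCU neuron has several parallel hyperplanes in its decision boundary and can therefore represent XOR without feature engineering. The sketch below is not taken from the paper; the weights and bias are hand-picked for illustration rather than learned, and it simply checks this claim numerically.

```python
# Minimal sketch (not the authors' code): the GCU activation C(z) = z*cos(z)
# and a single "GCU neuron" that separates XOR. The weights and bias are
# hand-picked assumptions for illustration, not learned parameters.
import numpy as np

def gcu(z):
    """Growing Cosine Unit: C(z) = z * cos(z)."""
    return z * np.cos(z)

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# One neuron: z = w.x + b, prediction = 1 if C(z) > 0 else 0
w = np.array([1.0, 1.0])   # hand-picked weights (assumption)
b = -0.3                   # hand-picked bias (assumption)

z = X @ w + b
pred = (gcu(z) > 0).astype(int)

print("pre-activations:", z)       # [-0.3  0.7  0.7  1.7]
print("GCU outputs:    ", gcu(z))  # signs: -, +, +, -
print("predictions:    ", pred)    # [0 1 1 0] -> matches XOR targets y
```

With these hand-picked values, the four pre-activations fall into adjacent intervals where $z\cos z$ alternates in sign, so thresholding the output of one neuron at zero reproduces the XOR truth table; a monotonic activation such as ReLU or a sigmoid cannot do this with a single neuron.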