Learning Neural Activations
Format: Article
Language: English
Abstract: An artificial neuron is modelled as a weighted summation followed by an activation function which determines its output. A wide variety of activation functions such as rectified linear units (ReLU), leaky-ReLU, Swish, Mish, etc. have been explored in the literature. In this short paper, we explore what happens when the activation function of each neuron in an artificial neural network is learned natively from data alone. This is achieved by modelling the activation function of each neuron as a small neural network whose weights are shared by all neurons in the original network. We list our primary findings in the conclusions section. The code for our analysis is available at: https://github.com/amina01/Learning-Neural-Activations
DOI: 10.48550/arxiv.1912.12187
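
As a rough illustration of the idea described in the abstract, here is a minimal PyTorch sketch of an activation function modelled as a small neural network whose parameters are learned from data and shared by every neuron that uses it. This is not the authors' implementation (see the linked repository for that); the hidden width of 4 and the inner tanh nonlinearity are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LearnedActivation(nn.Module):
    """Activation modelled as a tiny MLP applied elementwise.

    A single instance shares its weights across all neurons it is
    applied to, so one activation shape is learned from data alone.
    Hidden width 4 and tanh are assumptions, not values from the paper.
    """
    def __init__(self, hidden: int = 4):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(1, hidden),
            nn.Tanh(),  # smooth inner nonlinearity (assumption)
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Feed each scalar pre-activation through the shared MLP,
        # then restore the original tensor shape.
        return self.f(x.reshape(-1, 1)).reshape(x.shape)

# Usage: drop the shared learned activation into an ordinary network.
act = LearnedActivation()
net = nn.Sequential(nn.Linear(8, 16), act, nn.Linear(16, 1))

x = torch.randn(32, 8)
y = net(x)          # activation parameters train jointly with the weights
print(y.shape)      # torch.Size([32, 1])
```

Reusing the same `act` module in several layers keeps a single set of activation parameters, matching the weight sharing described in the abstract; instantiating separate modules would instead give each layer its own learned activation.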