Neural Path Features and Neural Path Kernel: Understanding the role of gates in deep learning
Saved in:
Main authors: ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Rectified linear unit (ReLU) activations can also be thought of as 'gates', which either pass or stop their pre-activation input when they are 'on' (when the pre-activation input is positive) or 'off' (when the pre-activation input is negative), respectively. A deep neural network (DNN) with ReLU activations has many gates, and the on/off status of each gate changes across input examples as well as network weights. For a given input example, only a subset of the gates is 'active', i.e., on, and the sub-network of weights connected to these active gates is responsible for producing the output. At randomised initialisation, the active sub-network corresponding to a given input example is random. During training, as the weights are learnt, the active sub-networks are also learnt, and potentially hold very valuable information. In this paper, we analytically characterise the role of active sub-networks in deep learning. To this end, we encode the on/off state of the gates for a given input in a novel 'neural path feature' (NPF), and the weights of the DNN are encoded in a novel 'neural path value' (NPV). Further, we show that the output of the network is indeed the inner product of the NPF and NPV. The main result of the paper shows that the 'neural path kernel' associated with the NPF is a fundamental quantity that characterises the information stored in the gates of a DNN. We show via experiments (on MNIST and CIFAR-10) that in standard DNNs with ReLU activations, NPFs are learnt during training and such learning is key for generalisation. Furthermore, NPFs and NPVs can be learnt in two separate networks, and such learning also generalises well in experiments.
DOI: 10.48550/arxiv.2006.10529
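The abstract's central identity, that the network output equals the inner product of the neural path features (NPF) and the neural path values (NPV), can be checked directly on a small example. The sketch below is not taken from the paper's code; the layer sizes, the bias-free architecture, and the helper names `forward` and `npf_npv` are illustrative assumptions. It enumerates all input-to-output paths of a tiny ReLU network, assigns each path its weight product (NPV) and its gated input feature (NPF), and verifies that their inner product matches the ordinary forward pass.

```python
import itertools
import numpy as np

# Minimal sketch: a tiny bias-free fully connected ReLU network used to
# illustrate output = <NPF, NPV>. Sizes and function names are assumptions.
rng = np.random.default_rng(0)
d_in, d_h1, d_h2 = 3, 4, 4
W1 = rng.standard_normal((d_h1, d_in))
W2 = rng.standard_normal((d_h2, d_h1))
W3 = rng.standard_normal((1, d_h2))

def forward(x):
    """Standard forward pass; also returns the 0/1 gate states per layer."""
    a1 = W1 @ x
    g1 = (a1 > 0).astype(float)   # gates of hidden layer 1
    h1 = a1 * g1                  # ReLU output = pre-activation * gate
    a2 = W2 @ h1
    g2 = (a2 > 0).astype(float)   # gates of hidden layer 2
    h2 = a2 * g2
    return (W3 @ h2)[0], (g1, g2)

def npf_npv(x, gates):
    """Enumerate every input-to-output path (i0 -> i1 -> i2 -> output).

    NPV(path)    = product of the weights along the path (depends only on weights).
    NPF(x, path) = x[input node] * 1{all gates on the path are 'on'}.
    """
    g1, g2 = gates
    npf, npv = [], []
    for i0, i1, i2 in itertools.product(range(d_in), range(d_h1), range(d_h2)):
        npv.append(W1[i1, i0] * W2[i2, i1] * W3[0, i2])
        npf.append(x[i0] * g1[i1] * g2[i2])
    return np.array(npf), np.array(npv)

x = rng.standard_normal(d_in)
y, gates = forward(x)
npf, npv = npf_npv(x, gates)
print(y, npf @ npv)  # the two values agree up to floating point error
```

The path sum reproduces the usual layer-by-layer computation because, within each layer, multiplying the pre-activation by its 0/1 gate is exactly the ReLU output, so summing the gated weight products over all paths telescopes back into the forward pass.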