Half-Space Feature Learning in Neural Networks
Main Authors: | , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | There currently exist two extreme viewpoints on neural network feature
learning: (i) neural networks simply implement a kernel method (a la NTK) and
hence no features are learned; (ii) neural networks can represent (and hence
learn) intricate hierarchical features suitable for the data. We argue in this
paper that neither interpretation is likely to be correct, based on a novel
viewpoint. Neural networks can be viewed as a mixture of experts, where each
expert corresponds to a path through the network that selects one hidden unit
per layer. We use this alternate interpretation to motivate a model, called
the Deep Linearly Gated Network (DLGN), which sits midway between deep linear
networks and ReLU networks. Unlike deep linear networks, the DLGN is capable of
learning non-linear features (which are then linearly combined), and unlike
ReLU networks these features are ultimately simple: each feature is
effectively an indicator function of a region compactly described as an
intersection of half-spaces in the input space, one per layer. This
viewpoint allows for a comprehensive global visualization of features, unlike
the local visualizations of neurons based on saliency/activation/gradient
maps. We show that feature learning does occur in DLGNs, and that its
mechanism is the learning of half-spaces in the input space that contain
smooth regions of the target function. Due to the structure of DLGNs, the
neurons in later layers are fundamentally the same as those in earlier layers
(they all represent a half-space), yet the dynamics of gradient descent
impart a distinct clustering to the later-layer neurons. We hypothesize that
ReLU networks exhibit similar feature learning behaviour. |
---|---|
DOI: | 10.48550/arxiv.2404.04312 |
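
The DLGN described in the abstract admits a compact implementation: a deep linear gating path whose pre-activations define half-spaces of the input, and a value path whose linear units are switched on and off by those gates. Below is a minimal sketch, not the authors' exact architecture: it assumes a PyTorch implementation, illustrative layer widths and depth, and a soft sigmoid gate of sharpness `beta` in place of a hard 0/1 indicator.

```python
import torch
import torch.nn as nn

class DLGN(nn.Module):
    """Sketch of a Deep Linearly Gated Network (hypothetical sizes and gating)."""
    def __init__(self, in_dim=2, width=16, depth=4, out_dim=1, beta=10.0):
        super().__init__()
        # Gating path: composed *linear* layers, so the pre-activation of any
        # gate at any depth is a linear function of the input x, i.e. each
        # gate corresponds to a half-space of the input space.
        self.gates = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else width, width) for i in range(depth)]
        )
        # Value path: linear layers whose outputs are modulated by the gates.
        self.values = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else width, width) for i in range(depth)]
        )
        self.readout = nn.Linear(width, out_dim)
        self.beta = beta  # gate sharpness; large beta approximates a 0/1 indicator

    def forward(self, x):
        g, v = x, x
        for gate_layer, value_layer in zip(self.gates, self.values):
            g = gate_layer(g)                                  # stays linear in x
            v = value_layer(v) * torch.sigmoid(self.beta * g)  # gated linear value
        return self.readout(v)

# With hard gates, each path through the value network is active exactly on an
# intersection of `depth` half-spaces of the input, matching the "simple
# features" described in the abstract.
model = DLGN()
y = model(torch.randn(8, 2))  # output shape: (8, 1)
```

Because each gate is a fixed half-space of the input, the region on which any value path is active can be plotted directly in input space, which is the kind of global feature visualization the abstract contrasts with local saliency or activation maps.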