Sparse Super-Regular Networks
Format: Article
Language: English
Abstract: It has been argued by Thom and Palm that sparsely-connected neural networks (SCNs) show improved performance over fully-connected networks (FCNs). Super-regular networks (SRNs) are neural networks composed of stacked sparse layers forming (epsilon, delta)-super-regular pairs, with randomly permuted node order. Using the Blow-up Lemma, we prove that as a result of the individual super-regularity of each pair of layers, SRNs guarantee a number of properties that make them suitable replacements for FCNs for many tasks. These guarantees include edge uniformity across all large-enough subsets, minimum node in- and out-degree, input-output sensitivity, and the ability to embed pre-trained constructs. Indeed, SRNs have the capacity to act like FCNs, and eliminate the need for costly regularization schemes like Dropout. We show that SRNs perform similarly to X-Nets via readily reproducible experiments, and offer far greater guarantees and control over network structure.
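For context, the abstract's central definition comes from the Blow-up Lemma literature (Komlós, Sárközy, and Szemerédi). One standard formulation, which we assume matches the paper's usage, is the following: a bipartite graph $G = (A, B; E)$ is $(\varepsilon, \delta)$-super-regular if

$$e(X, Y) > \delta\,|X|\,|Y| \quad \text{for all } X \subseteq A,\; Y \subseteq B \text{ with } |X| > \varepsilon|A| \text{ and } |Y| > \varepsilon|B|,$$

and every vertex has a linear minimum degree:

$$\deg(a) > \delta\,|B| \ \text{for all } a \in A, \qquad \deg(b) > \delta\,|A| \ \text{for all } b \in B.$$

The abstract's "edge uniformity across all large-enough subsets" corresponds to the first condition, and its "minimum node in- and out-degree" to the second.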
DOI: 10.48550/arxiv.2201.01363
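To make the architecture concrete, below is a minimal sketch of the ingredients the abstract names: stacked sparse bipartite layers, a per-node minimum-degree floor, and randomly permuted node order. This is a hypothetical illustration under our own assumptions (NumPy, the names `sparse_permuted_mask` and `sparse_layer`, and the `density` and `min_degree_frac` parameters are all ours), not the authors' construction, and it does not verify the subset-density condition above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_permuted_mask(n_in, n_out, density=0.25, min_degree_frac=0.1):
    """Build a sparse binary connectivity mask with a minimum in-/out-degree,
    then randomly permute the output node order.

    Hypothetical illustration of the abstract's ingredients (sparse bipartite
    layers, degree floor, permuted node order); super-regularity itself is
    not checked here.
    """
    mask = (rng.random((n_in, n_out)) < density).astype(np.float64)

    # Enforce a minimum out-degree for every input node and a minimum
    # in-degree for every output node by adding random edges where needed.
    min_out = max(1, int(min_degree_frac * n_out))
    min_in = max(1, int(min_degree_frac * n_in))
    for i in range(n_in):
        deficit = min_out - int(mask[i].sum())
        if deficit > 0:
            off = np.flatnonzero(mask[i] == 0)
            mask[i, rng.choice(off, size=deficit, replace=False)] = 1.0
    for j in range(n_out):
        deficit = min_in - int(mask[:, j].sum())
        if deficit > 0:
            off = np.flatnonzero(mask[:, j] == 0)
            mask[rng.choice(off, size=deficit, replace=False), j] = 1.0

    # Randomly permute the output node order, as the abstract describes.
    return mask[:, rng.permutation(n_out)]

def sparse_layer(x, weights, mask):
    """Apply a masked (sparse) linear layer followed by a ReLU."""
    return np.maximum(0.0, x @ (weights * mask))

# Stack a few sparse layers, SRN-style.
sizes = [64, 64, 64, 10]
params = [(rng.standard_normal((a, b)) * np.sqrt(2.0 / a),
           sparse_permuted_mask(a, b)) for a, b in zip(sizes, sizes[1:])]
x = rng.standard_normal((8, sizes[0]))
for w, m in params:
    x = sparse_layer(x, w, m)
print(x.shape)  # (8, 10)
```

The degree-repair loops mirror the minimum-degree condition; the subset-density condition is not checked, since it quantifies over all large subsets, but it holds with high probability for dense-enough random masks, which is presumably what makes random construction attractive.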