Neural networks with linear threshold activations: structure and algorithms
Published in: Mathematical Programming, 2024-07, Vol. 206 (1-2), pp. 333-356
Main authors:
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: In this article we present new results on neural networks with linear threshold activation functions x ↦ 1{x > 0}. We precisely characterize the class of functions that are representable by such neural networks and show that 2 hidden layers are necessary and sufficient to represent any function representable in the class. This is a surprising result in light of recent exact representability investigations for neural networks using other popular activation functions such as rectified linear units (ReLU). We also give upper and lower bounds on the sizes of the neural networks required to represent any function in the class. Finally, we design an algorithm to solve the empirical risk minimization (ERM) problem to global optimality for these neural networks with a fixed architecture. The algorithm's running time is polynomial in the size of the data sample, if the input dimension and the size of the network architecture are considered fixed constants. The algorithm is unique in the sense that it works for any architecture with any number of layers, whereas previous polynomial-time globally optimal algorithms work only for restricted classes of architectures. Using these insights, we propose a new class of neural networks that we call shortcut linear threshold neural networks. To the best of our knowledge, this way of designing neural networks has not been explored before in the literature. We show that these neural networks have several desirable theoretical properties.
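To make the activation concrete, the following is a minimal NumPy sketch of the linear threshold activation x ↦ 1{x > 0} and a forward pass through a network with two hidden layers of threshold units and a linear output layer, matching the depth the abstract states is sufficient. The widths, random weights, and function names are illustrative assumptions, not taken from the paper or its algorithm.

```python
import numpy as np

# Linear threshold activation x -> 1{x > 0}, applied elementwise.
def threshold(x):
    return (x > 0).astype(float)

def forward(x, weights, biases):
    """Forward pass: threshold activations on hidden layers, linear output."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = threshold(h @ W + b)      # hidden layers: binary activation patterns
    W_out, b_out = weights[-1], biases[-1]
    return h @ W_out + b_out          # linear output layer

# Illustrative architecture: input dimension 2, two hidden layers of width 4,
# scalar output; weights are random for demonstration only.
rng = np.random.default_rng(0)
dims = [2, 4, 4, 1]
weights = [rng.standard_normal((m, n)) for m, n in zip(dims[:-1], dims[1:])]
biases = [rng.standard_normal(n) for n in dims[1:]]

x = np.array([[0.5, -1.2]])
print(forward(x, weights, biases))
```

Because every hidden layer outputs a binary vector, the network computes a piecewise constant function of its input; the abstract's representability and ERM results concern functions of this kind.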
ISSN: 0025-5610, 1436-4646
DOI: 10.1007/s10107-023-02016-5