Expressivity of ReLU-Networks under Convex Relaxations
Format: Article
Language: English
Summary: Convex relaxations are a key component of training and certifying provably safe neural networks. However, despite substantial progress, a wide and poorly understood accuracy gap to standard networks remains, raising the question of whether this is due to fundamental limitations of convex relaxations. Initial work investigating this question focused on the simple and widely used IBP relaxation. It revealed that some univariate, convex, continuous piecewise linear (CPWL) functions cannot be encoded by any ReLU network such that its IBP-analysis is precise. To explore whether this limitation is shared by more advanced convex relaxations, we conduct the first in-depth study on the expressive power of ReLU networks across all commonly used convex relaxations. We show that: (i) more advanced relaxations allow a larger class of univariate functions to be expressed as precisely analyzable ReLU networks, (ii) more precise relaxations can allow exponentially larger solution spaces of ReLU networks encoding the same functions, and (iii) even using the most precise single-neuron relaxations, it is impossible to construct precisely analyzable ReLU networks that express multivariate, convex, monotone CPWL functions.
DOI: 10.48550/arxiv.2311.04015
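
The summary above refers to interval bound propagation (IBP) and to whether its analysis of a ReLU network is precise. Below is a minimal sketch, not taken from the paper, of how IBP pushes an input interval through a small ReLU network; the network, its weights, and the target function |x| are hypothetical choices for illustration only. It shows how one natural ReLU encoding of a simple convex CPWL function yields bounds looser than the true output range.

```python
# Minimal IBP sketch (illustrative only, not the paper's construction).
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W @ x + b."""
    center = (hi + lo) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lo, hi):
    """Propagate the box [lo, hi] through the element-wise ReLU."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# A ReLU network computing f(x) = |x| = ReLU(x) + ReLU(-x)  (hypothetical encoding).
W1, b1 = np.array([[1.0], [-1.0]]), np.zeros(2)   # hidden layer
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)      # output layer

# Exact range of |x| on the input interval [-1, 1] is [0, 1].
lo, hi = np.array([-1.0]), np.array([1.0])
lo, hi = ibp_linear(lo, hi, W1, b1)
lo, hi = ibp_relu(lo, hi)
lo, hi = ibp_linear(lo, hi, W2, b2)
print(lo, hi)   # [0.] [2.] -- IBP over-approximates the true range [0, 1].
```

For this particular encoding, IBP reports the range [0, 2] for |x| on [-1, 1], while the exact range is [0, 1]. The paper's results concern whether other encodings and more precise convex relaxations can close such gaps, and for which classes of CPWL functions no precise encoding exists at all.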