Topological Expressivity of ReLU Neural Networks
Format: Article
Language: English
Abstract: We study the expressivity of ReLU neural networks in the setting of a binary classification problem from a topological perspective. Recently, empirical studies showed that neural networks operate by changing topology, transforming a topologically complicated data set into a topologically simpler one as it passes through the layers. This topological simplification has been measured by Betti numbers, which are algebraic invariants of a topological space. We use the same measure to establish lower and upper bounds on the topological simplification a ReLU neural network can achieve with a given architecture. We thereby contribute to a better understanding of the expressivity of ReLU neural networks in the context of binary classification problems by shedding light on their ability to capture the underlying topological structure of the data. In particular, the results show that deep ReLU neural networks are exponentially more powerful than shallow ones in terms of topological simplification. This provides a mathematically rigorous explanation of why deeper networks are better equipped to handle complex and topologically rich data sets.
DOI: 10.48550/arxiv.2310.11130
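
The abstract quantifies topological change with Betti numbers. As a loose illustration, not code from the paper, the following sketch uses the `ripser` persistent-homology package to estimate the Betti numbers of a synthetic circle data set before and after a single random ReLU layer; the layer width, the random weights, and the filtration scale 0.3 are all illustrative assumptions.

```python
import numpy as np
from ripser import ripser  # persistent homology (scikit-tda)

def betti_at_scale(points, scale, maxdim=1):
    # Persistence diagrams per homology dimension; an interval (b, d)
    # contributes to the Betti number at `scale` iff b <= scale < d.
    dgms = ripser(points, maxdim=maxdim)["dgms"]
    return [int(np.sum((d[:, 0] <= scale) & (d[:, 1] > scale))) for d in dgms]

rng = np.random.default_rng(0)

# A noisy circle: at a suitable scale its Betti numbers are roughly
# b0 = 1 (one connected component) and b1 = 1 (one loop).
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X += 0.05 * rng.normal(size=X.shape)

# One ReLU layer with random (untrained, illustrative) weights:
# x -> max(0, xW + b).
W = rng.normal(size=(2, 8))
b = rng.normal(size=8)
X_out = np.maximum(0.0, X @ W + b)

print("Betti numbers before the layer:", betti_at_scale(X, scale=0.3))
print("Betti numbers after the layer: ", betti_at_scale(X_out, scale=0.3))
```

A random layer may or may not simplify the topology; the paper's lower and upper bounds concern how much simplification a ReLU network of a given architecture can achieve at best.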