An Empirical Analysis of the Advantages of Finite- vs. Infinite-Width Bayesian Neural Networks
Format: Article
Language: eng
Online access: Order full text
Abstract: Comparing Bayesian neural networks (BNNs) with different widths is challenging because, as the width increases, multiple model properties change simultaneously, and inference in the finite-width case is intractable. In this work, we empirically compare finite- and infinite-width BNNs and provide quantitative and qualitative explanations for their performance difference. We find that when the model is mis-specified, increasing width can hurt BNN performance. In these cases, we provide evidence that finite-width BNNs generalize better, partially due to properties of their frequency spectrum that allow them to adapt under model mismatch.
DOI: 10.48550/arxiv.2211.09184