A Theory of Usable Information Under Computational Constraints

Format: Article
Language: English
DOI: 10.48550/arxiv.2002.10689

Summary: We propose a new framework for reasoning about information in complex systems. Our foundation is a variational extension of Shannon's information theory that takes into account the modeling power and computational constraints of the observer. The resulting \emph{predictive $\mathcal{V}$-information} encompasses mutual information and other notions of informativeness, such as the coefficient of determination. Unlike Shannon's mutual information, and in violation of the data processing inequality, $\mathcal{V}$-information can be created through computation. This is consistent with deep neural networks extracting hierarchies of progressively more informative features in representation learning. Additionally, we show that by incorporating computational constraints, $\mathcal{V}$-information can be reliably estimated from data, even in high dimensions, with PAC-style guarantees. Empirically, we demonstrate that predictive $\mathcal{V}$-information is more effective than mutual information for structure learning and fair representation learning.
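
For orientation, the central quantity can be written out. The following is a sketch of the definitions, reconstructed from the paper rather than quoted verbatim: $\mathcal{V}$ is a \emph{predictive family} of allowed models $f$, and $f[x]$ is the distribution over $Y$ that $f$ predicts after observing side information $x$. The predictive $\mathcal{V}$-entropy and $\mathcal{V}$-information are then

$$
H_{\mathcal{V}}(Y \mid X) = \inf_{f \in \mathcal{V}} \mathbb{E}_{x,\, y \sim X,\, Y}\big[-\log f[x](y)\big],
\qquad
I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X),
$$

where $H_{\mathcal{V}}(Y \mid \varnothing)$ takes the same infimum over predictors that ignore the input. When $\mathcal{V}$ is unrestricted, $I_{\mathcal{V}}$ reduces to Shannon's mutual information; under restricted families (e.g., linear models with a Gaussian likelihood) it recovers informativeness measures such as the coefficient of determination, which is the sense in which the abstract says it "encompasses" both. Because a computationally bounded observer may be unable to decode information that a computed feature makes accessible, applying a function to $X$ can raise $I_{\mathcal{V}}(X \to Y)$, the advertised violation of the data processing inequality.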
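The claim about reliable high-dimensional estimation has a direct computational reading: because the infimum runs over a constrained family, estimating $I_{\mathcal{V}}$ amounts to fitting two models and comparing held-out log losses. Below is a minimal sketch, assuming a discrete $Y$ and choosing multinomial logistic regression as a stand-in for $\mathcal{V}$; the function name `estimate_v_information` and every modeling choice are illustrative assumptions, not code from the paper.

```python
# Hypothetical plug-in estimator for predictive V-information, with the
# predictive family V taken to be multinomial logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

def estimate_v_information(X, y, seed=0):
    """Estimate I_V(X -> Y) = H_V(Y | null) - H_V(Y | X) on held-out data."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, random_state=seed, stratify=y)
    classes = np.unique(y)

    # H_V(Y | X): held-out cross-entropy of the best predictor in V that sees X.
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    h_y_given_x = log_loss(y_te, model.predict_proba(X_te), labels=classes)

    # H_V(Y | null): the best predictor in V that ignores X is a constant
    # distribution, optimized by the empirical class frequencies of y_tr.
    freqs = np.array([(y_tr == c).mean() for c in classes])
    h_y_null = log_loss(y_te, np.tile(freqs, (len(y_te), 1)), labels=classes)

    # Nonnegative in expectation; small negative values indicate sampling noise.
    return h_y_null - h_y_given_x
```

As a sanity check, feeding in features X sampled independently of y should return a value near zero (slightly negative at worst, from sampling noise), while a y that is a learnable function of X pushes the estimate toward the marginal entropy of Y.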