Information Bottleneck Analysis by a Conditional Mutual Information Bound

Bibliographic Details
Published in: Entropy (Basel, Switzerland), 2021-07, Vol. 23 (8), p. 974
Main authors: Tezuka, Taro; Namekawa, Shizuma
Format: Article
Language: English
Online access: Full text
Description

Abstract: Task-nuisance decomposition describes why the information bottleneck loss I(z;x)−βI(z;y) is a suitable objective for supervised learning. The true category y is predicted for input x using latent variables z. When n is a nuisance independent of y, I(z;n) can be decreased by reducing I(z;x), since the latter upper-bounds the former. We extend this framework by demonstrating that the conditional mutual information I(z;x|y) provides an alternative upper bound for I(z;n). This bound is applicable even if z is not a sufficient representation of x, that is, even if I(z;y) ≠ I(x;y). We used mutual information neural estimation (MINE) to estimate I(z;x|y). Experiments demonstrated that I(z;x|y) is smaller than I(z;x) for layers closer to the input, matching the claim that the former is a tighter bound than the latter. Because of this difference, the information plane differs when I(z;x|y) is used instead of I(z;x).
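To make the MINE-based estimation mentioned in the abstract concrete, the following is a minimal sketch, not the authors' implementation. It assumes a discrete class label y and uses PyTorch; the Donsker-Varadhan bound is applied with "marginal" pairs produced by shuffling x only among samples of the same class, which is one standard way to target I(z;x|y) = KL(p(z,x,y) ∥ p(z|y)p(x|y)p(y)). All names (Critic, dv_lower_bound, shuffle_x_within_class), layer sizes, and optimizer settings are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Statistics network T(z, x) for the Donsker-Varadhan bound."""
    def __init__(self, z_dim, x_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, x):
        return self.net(torch.cat([z, x], dim=1)).squeeze(-1)

def shuffle_x_within_class(x, y):
    """Permute x only among samples sharing the same label, so that
    (z, x_shuffled) approximates draws from p(z|y) p(x|y)."""
    x_shuf = x.clone()
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        x_shuf[idx] = x[idx[torch.randperm(idx.numel())]]
    return x_shuf

def dv_lower_bound(critic, z, x, y):
    """E[T] - log E[exp(T)], with 'marginal' pairs shuffled within class;
    maximizing this over the critic approaches I(z; x | y)."""
    t_joint = critic(z, x).mean()
    t_marg = critic(z, shuffle_x_within_class(x, y))
    log_mean_exp = torch.logsumexp(t_marg, dim=0) - math.log(t_marg.numel())
    return t_joint - log_mean_exp

# Toy usage on synthetic data; in practice z would be the activations
# of a network layer computed for the inputs x with labels y.
n, x_dim, z_dim = 512, 32, 16
x = torch.randn(n, x_dim)
y = torch.randint(0, 10, (n,))
z = torch.randn(n, z_dim)

critic = Critic(z_dim, x_dim)
opt = torch.optim.Adam(critic.parameters(), lr=1e-4)
for step in range(500):
    opt.zero_grad()
    (-dv_lower_bound(critic, z, x, y)).backward()  # gradient ascent on the bound
    opt.step()
print("estimated I(z; x | y):", dv_lower_bound(critic, z, x, y).item())
```

The only change relative to unconditional MINE is the within-class shuffle; passing y to the critic as an extra input could tighten the bound further.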
ISSN: 1099-4300
DOI: 10.3390/e23080974