Measuring Disentanglement: A Review of Metrics

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-07, Vol. 35 (7), pp. 8747-8761
Authors: Carbonneau, Marc-Andre; Zaidi, Julian; Boilard, Jonathan; Gagnon, Ghyslain
Format: Article
Language: English
Description
Abstract: Learning to disentangle and represent factors of variation in data is an important problem in artificial intelligence. While many advances have been made in learning these representations, it is still unclear how to quantify disentanglement. Although several metrics exist, little is known about their implicit assumptions, what they truly measure, and their limits. Consequently, it is difficult to interpret results when comparing different representations. In this work, we survey supervised disentanglement metrics and analyze them thoroughly. We propose a new taxonomy in which every metric falls into one of three families: intervention-based, predictor-based, and information-based. We conduct extensive experiments that isolate properties of disentangled representations, allowing stratified comparison along several axes. From our experimental results and analysis, we provide insights into the relations between properties of disentangled representations. Finally, we share guidelines on how to measure disentanglement.
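To make concrete what a predictor-based metric of this kind computes, the following is a minimal Python sketch (not taken from the paper) in the style of the well-known DCI disentanglement score. It assumes a hypothetical `importance` matrix with one row per latent dimension and one column per ground-truth factor, obtained elsewhere from per-factor predictors, and scores how concentrated each latent's importance is on a single factor; the helper name `disentanglement_score` is illustrative only.

# Minimal sketch of a predictor-based (DCI-style) disentanglement score.
# Assumption: `importance` is a (num_latents x num_factors) matrix of
# non-negative weights, e.g. feature importances from per-factor regressors.
import numpy as np

def disentanglement_score(importance, eps=1e-12):
    """Return a score in [0, 1]; 1 means each latent captures a single factor."""
    importance = np.asarray(importance, dtype=float) + eps
    # Turn each latent's row of importances into a distribution over factors.
    probs = importance / importance.sum(axis=1, keepdims=True)
    # Per-latent score: 1 minus the entropy of that distribution, normalized
    # so that a uniform spread over all factors gives 0.
    num_factors = importance.shape[1]
    entropy = -(probs * np.log(probs)).sum(axis=1) / np.log(num_factors)
    per_latent = 1.0 - entropy
    # Weight each latent by its share of the total importance.
    weights = importance.sum(axis=1) / importance.sum()
    return float((weights * per_latent).sum())

if __name__ == "__main__":
    # Axis-aligned importances: each latent predicts exactly one factor.
    print(disentanglement_score([[1.0, 0.0], [0.0, 1.0]]))  # close to 1.0
    # Fully entangled importances: every latent predicts every factor equally.
    print(disentanglement_score([[0.5, 0.5], [0.5, 0.5]]))  # close to 0.0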
ISSN: 2162-237X
eISSN: 2162-2388
DOI: 10.1109/TNNLS.2022.3218982