Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning
Format: Article
Language: English
Abstract: A key goal of unsupervised representation learning is "inverting" a data
generating process to recover its latent properties. Existing work that
provably achieves this goal relies on strong assumptions about the relationships
between the latent variables (e.g., independence conditional on auxiliary
information). In this paper, we take a very different perspective on the
problem and ask, "Can we instead identify latent properties by leveraging
knowledge of the mechanisms that govern their evolution?" We provide a complete
characterization of the sources of non-identifiability as we vary knowledge
about a set of possible mechanisms. In particular, we prove that if we know the
exact mechanisms under which the latent properties evolve, then identification
can be achieved up to any equivariances that are shared by the underlying
mechanisms. We generalize this characterization to settings where we only know
some hypothesis class over possible mechanisms, as well as to settings where the
mechanisms are stochastic. We demonstrate the power of this mechanism-based
perspective by showing that we can leverage our results to generalize existing
identifiable representation learning results. These results suggest that by
exploiting inductive biases on mechanisms, it is possible to design a range of
new identifiable representation learning approaches.
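The central claim — that latents are identifiable only up to the equivariances shared by the known mechanisms — can be illustrated with a minimal sketch. This example is not from the paper: it assumes a toy translation mechanism m(z) = z + 1 and shows that any translation h(z) = z + c commutes with it (h ∘ m = m ∘ h), so an encoder recovering the latents only up to such a translation is indistinguishable from the true one given knowledge of m alone.

```python
import numpy as np

def mechanism(z):
    # Hypothetical known mechanism governing latent evolution: m(z) = z + 1.
    return z + 1.0

def symmetry(z, c=3.0):
    # A candidate equivariance: translation by a constant c.
    return z + c

z = np.linspace(-2.0, 2.0, 5)

# h ∘ m equals m ∘ h for every translation h, so the mechanism cannot
# distinguish the true latents from a translated copy of them.
assert np.allclose(symmetry(mechanism(z)), mechanism(symmetry(z)))
```

A non-shared symmetry, by contrast, is ruled out: scaling h(z) = 2z does not commute with m(z) = z + 1, so knowledge of m pins the latents down to within translations but not scalings.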
DOI: 10.48550/arxiv.2110.15796