Cooperative Distribution Alignment via JSD Upper Bound
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Unsupervised distribution alignment estimates a transformation that maps two
or more source distributions to a shared aligned distribution given only
samples from each distribution. This task has many applications including
generative modeling, unsupervised domain adaptation, and socially aware
learning. Most prior works use adversarial learning (i.e., min-max
optimization), which can be challenging to optimize and evaluate. A few recent
works explore non-adversarial flow-based (i.e., invertible) approaches, but
they lack a unified perspective and are limited in efficiently aligning
multiple distributions. Therefore, we propose to unify and generalize previous
flow-based approaches under a single non-adversarial framework, which we prove
is equivalent to minimizing an upper bound on the Jensen-Shannon Divergence
(JSD). Importantly, our problem reduces to a min-min, i.e., cooperative,
problem and can provide a natural evaluation metric for unsupervised
distribution alignment. We show empirical results on both simulated and
real-world datasets to demonstrate the benefits of our approach. Code is
available at https://github.com/inouye-lab/alignment-upper-bound.
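
Note: the abstract's central claim, that alignment reduces to minimizing an upper bound on the JSD, follows from a standard variational argument. The sketch below uses assumed notation (the symbols p_i, g_i, q_i, m are not taken verbatim from the paper): let p_1, ..., p_K be the source distributions, g_i invertible flows, and q_i = g_i # p_i the aligned pushforwards. Then for any auxiliary density m,

\[
\mathrm{JSD}(q_1,\dots,q_K)
  = \frac{1}{K}\sum_{i=1}^{K} \mathrm{KL}\big(q_i \,\big\|\, \bar{q}\big)
  \le \frac{1}{K}\sum_{i=1}^{K} \mathrm{KL}(q_i \,\|\, m),
\qquad
\bar{q} = \frac{1}{K}\sum_{j=1}^{K} q_j,
\]

since the mixture \(\bar{q}\) minimizes \(\sum_i \mathrm{KL}(q_i \,\|\, m)\) over densities \(m\), with equality at \(m = \bar{q}\). Jointly minimizing the right-hand side over the flows \(g_i\) and the shared model \(m\) is a min-min (hence cooperative) problem, and the achieved value of the bound itself can serve as the evaluation metric the abstract mentions.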
DOI: 10.48550/arxiv.2207.02286