Deep MMD Gradient Flow without adversarial training
Main authors: | , , |
---|---|
Format: | Article |
Language: | English |
Abstract: We propose a gradient flow procedure for generative modeling by transporting particles from an initial source distribution to a target distribution, where the gradient field on the particles is given by a noise-adaptive Wasserstein gradient of the Maximum Mean Discrepancy (MMD). The noise-adaptive MMD is trained on data distributions corrupted by increasing levels of noise, obtained via a forward diffusion process, as commonly used in denoising diffusion probabilistic models. The result is a generalization of MMD Gradient Flow, which we call Diffusion-MMD-Gradient Flow or DMMD. The divergence training procedure is related to discriminator training in Generative Adversarial Networks (GANs), but does not require adversarial training. We obtain competitive empirical performance in unconditional image generation on CIFAR10, MNIST, CELEB-A (64 x 64), and LSUN Church (64 x 64). Furthermore, we demonstrate the validity of the approach when the MMD is replaced by a lower bound on the KL divergence.
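To make the abstract's central objects concrete: the standard squared MMD between a source distribution $\mu$ and a target $\pi$ under a kernel $k$ is

$$\mathrm{MMD}^2(\mu, \pi) = \mathbb{E}_{x, x' \sim \mu}\, k(x, x') - 2\, \mathbb{E}_{x \sim \mu,\, y \sim \pi}\, k(x, y) + \mathbb{E}_{y, y' \sim \pi}\, k(y, y'),$$

and an MMD gradient flow moves each particle along the negative gradient of the witness function $f_{\mu,\pi}(x) = \mathbb{E}_{x' \sim \mu}\, k(x, x') - \mathbb{E}_{y \sim \pi}\, k(x, y)$. The Python sketch below illustrates only this underlying, non-adaptive flow with a fixed Gaussian kernel on empirical samples; the paper's learned, noise-adaptive MMD and its diffusion-based training are not reproduced here, and the kernel bandwidth, step size, and all function names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel_grad(x, y, sigma=2.0):
    """Gradient w.r.t. x of k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
    for every pair (x_i, y_j); returns an array of shape (n, m, d)."""
    diff = x[:, None, :] - y[None, :, :]                        # x_i - y_j
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))  # (n, m)
    return -(diff / sigma ** 2) * k[..., None]

def witness_grad(particles, target, sigma=2.0):
    """Velocity field of the MMD flow: gradient at each particle of
    f(x) = mean_j k(x, x_j) - mean_j k(x, y_j) over the empirical samples."""
    g_self = gaussian_kernel_grad(particles, particles, sigma).mean(axis=1)
    g_target = gaussian_kernel_grad(particles, target, sigma).mean(axis=1)
    return g_self - g_target

def mmd_gradient_flow(target, n_particles=256, n_steps=500,
                      step=0.5, sigma=2.0, seed=0):
    """Transport Gaussian-initialized particles toward the target samples
    via an Euler discretization of the MMD gradient flow."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_particles, target.shape[1]))
    for _ in range(n_steps):
        x -= step * witness_grad(x, target, sigma)
    return x

# Toy usage: flow particles onto a shifted 2-D Gaussian target.
target = np.random.default_rng(1).standard_normal((512, 2)) + 2.0
samples = mmd_gradient_flow(target)
print(samples.mean(axis=0))  # should drift toward the target mean near [2, 2]
```

As the abstract describes, DMMD replaces the fixed kernel in this sketch with a noise-adaptive MMD trained on data corrupted by a forward diffusion process, using a discriminator-style objective without adversarial training.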
DOI: 10.48550/arxiv.2405.06780