Practical and Asymptotically Exact Conditional Sampling in Diffusion Models
NeurIPS 2023
Saved in:
Main author(s):
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: NeurIPS 2023. Diffusion models have been successful on a range of conditional generation tasks, including molecular design and text-to-image generation. However, these achievements have primarily depended on task-specific conditional training or error-prone heuristic approximations. Ideally, a conditional generation method should provide exact samples for a broad range of conditional distributions without requiring task-specific training. To this end, we introduce the Twisted Diffusion Sampler, or TDS. TDS is a sequential Monte Carlo (SMC) algorithm that targets the conditional distributions of diffusion models by simulating a set of weighted particles. The main idea is to use twisting, an SMC technique with good computational efficiency, to incorporate heuristic approximations without compromising asymptotic exactness. We first find, in simulation and in conditional image generation tasks, that TDS provides a computational-statistical trade-off: it yields more accurate approximations with many particles, but improves empirically over heuristics with as few as two particles. We then turn to motif-scaffolding, a core task in protein design, using a TDS extension to Riemannian diffusion models. On benchmark test cases, TDS allows flexible conditioning criteria and often outperforms the state of the art.
DOI: | 10.48550/arxiv.2306.17775 |
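The abstract describes TDS as an SMC procedure that propagates a set of weighted particles, reweighting them with twisting functions so that heuristic approximations do not break asymptotic exactness. As a rough illustration of the generic twisted-SMC idea only (not the paper's actual algorithm, which operates on diffusion model trajectories), the toy sketch below conditions a standard normal prior on a noisy observation by annealing the likelihood into the particle weights step by step; all function names, the annealing schedule, and the jitter move are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def twisted_smc_toy(y, n_particles=256, n_steps=50, obs_var=0.1):
    """Toy SMC sketch: sample x ~ N(0, 1) conditioned on y = x + N(0, obs_var).

    The exact posterior is Gaussian, so the particle estimate can be
    checked against a closed form. This is an illustration of incremental
    ("twisted") reweighting plus resampling, not TDS itself.
    """
    x = rng.standard_normal(n_particles)  # particles drawn from the prior
    for t in range(1, n_steps + 1):
        # Incrementally anneal the observation likelihood: each step applies
        # only a fraction of the log-likelihood, so no single reweighting
        # collapses the particle set (the "twisting"-style heuristic here).
        beta_prev, beta = (t - 1) / n_steps, t / n_steps
        logw = -(beta - beta_prev) * (y - x) ** 2 / (2 * obs_var)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Multinomial resampling keeps particles concentrated on the target;
        # the approximation becomes exact as n_particles grows.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
        # Random-walk Metropolis jitter to rejuvenate duplicated particles,
        # targeting the current annealed density prior^1 * likelihood^beta.
        prop = x + 0.3 * rng.standard_normal(n_particles)
        log_acc = (-prop**2 / 2 - beta * (y - prop) ** 2 / (2 * obs_var)) \
                - (-x**2 / 2 - beta * (y - x) ** 2 / (2 * obs_var))
        accept = np.log(rng.random(n_particles)) < log_acc
        x = np.where(accept, prop, x)
    return x
```

For a prior N(0, 1) and likelihood N(x, 0.1), the exact posterior given y has mean 10y/11 and variance 1/11, which the particle cloud should approach as the number of particles increases.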