Controlled and Conditional Text to Image Generation with Diffusion Prior
Format: Article
Language: English
Abstract: Denoising Diffusion models have shown remarkable performance in generating diverse, high-quality images from text. Numerous techniques have been proposed on top of or in alignment with models like Stable Diffusion and Imagen that generate images directly from text. A lesser explored approach is DALLE-2's two-step process comprising a Diffusion Prior that generates a CLIP image embedding from text and a Diffusion Decoder that generates an image from a CLIP image embedding. We explore the capabilities of the Diffusion Prior and the advantages of an intermediate CLIP representation. We observe that the Diffusion Prior can be used in a memory- and compute-efficient way to constrain the generation to a specific domain without altering the larger Diffusion Decoder. Moreover, we show that the Diffusion Prior can be trained with additional conditional information, such as a color histogram, to further control the generation. We show quantitatively and qualitatively that the proposed approaches perform better than prompt engineering for domain-specific generation and than existing baselines for color-conditioned generation. We believe that our observations and results will instigate further research into the diffusion prior and uncover more of its capabilities.
DOI: 10.48550/arxiv.2302.11710
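
The abstract describes a two-step pipeline: a Diffusion Prior maps a text embedding to a CLIP image embedding (optionally with extra conditioning such as a color histogram), and a separate, larger Diffusion Decoder renders that embedding into pixels. The sketch below illustrates only that data flow under assumed interfaces; `ToyDiffusionPrior`, `ToyDiffusionDecoder`, the embedding and histogram sizes, and the single-step "sampling" are illustrative stand-ins, not the paper's implementation, which uses real CLIP features and iterative denoising.

```python
# Minimal sketch of the two-step (DALLE-2 style) pipeline: text embedding
# -> Diffusion Prior -> CLIP image embedding -> Diffusion Decoder -> image.
# All classes and dimensions here are hypothetical placeholders.
import torch
import torch.nn as nn

CLIP_DIM, HIST_BINS = 768, 64  # assumed CLIP embedding size / histogram bins


class ToyDiffusionPrior(nn.Module):
    """Stand-in for the Diffusion Prior: text embedding (+ optional color
    histogram) -> CLIP image embedding. A real prior runs an iterative
    denoising loop; a single MLP step keeps the sketch short."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CLIP_DIM + HIST_BINS, 1024), nn.GELU(),
            nn.Linear(1024, CLIP_DIM),
        )

    def sample(self, text_emb, color_hist=None):
        if color_hist is None:  # unconditional case: no histogram provided
            color_hist = torch.zeros(text_emb.shape[0], HIST_BINS)
        return self.net(torch.cat([text_emb, color_hist], dim=-1))


class ToyDiffusionDecoder(nn.Module):
    """Stand-in for the (unchanged, larger) Diffusion Decoder:
    CLIP image embedding -> image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Linear(CLIP_DIM, 3 * 64 * 64)

    def sample(self, img_emb):
        return self.net(img_emb).view(-1, 3, 64, 64)


@torch.no_grad()
def generate(text_emb, prior, decoder, color_hist=None):
    img_emb = prior.sample(text_emb, color_hist)  # step 1: prior samples a CLIP image embedding
    return decoder.sample(img_emb)                # step 2: decoder renders the embedding


# Usage: random tensors stand in for a real CLIP text embedding and a target palette.
text_emb = torch.randn(1, CLIP_DIM)
hist = torch.full((1, HIST_BINS), 1.0 / HIST_BINS)
image = generate(text_emb, ToyDiffusionPrior(), ToyDiffusionDecoder(), hist)
print(image.shape)  # torch.Size([1, 3, 64, 64])
```

Because only the comparatively small prior is retrained or further conditioned while the decoder stays fixed, domain- or color-constrained generation in this setup touches a fraction of the total parameters.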