Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Text-to-image generation is a significant domain in modern computer vision and has achieved substantial improvements through the evolution of generative architectures. Among these, diffusion-based models have demonstrated marked quality gains. These models are generally split into two categories: pixel-level and latent-level approaches. We present Kandinsky, a novel exploration of latent diffusion architecture that combines the principles of image prior models with latent diffusion techniques. The image prior model is trained separately to map text embeddings to image embeddings of CLIP. Another distinct feature of the proposed model is the modified MoVQ implementation, which serves as the image autoencoder component. Overall, the designed model contains 3.3B parameters. We also deployed a user-friendly demo system that supports diverse generative modes such as text-to-image generation, image fusion, text and image fusion, image variation generation, and text-guided inpainting/outpainting. Additionally, we released the source code and checkpoints for the Kandinsky models. Experimental evaluations demonstrate an FID score of 8.03 on the COCO-30K dataset, marking our model as the top open-source performer in terms of measurable image generation quality.
DOI: 10.48550/arxiv.2310.03502
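
The summary describes a two-stage design: a separately trained diffusion prior first maps the prompt's CLIP text embedding to a CLIP image embedding, and a latent diffusion model then generates in the latent space of the modified MoVQ autoencoder, conditioned on that embedding. Below is a minimal sketch of that flow, assuming the Hugging Face diffusers port of the released Kandinsky 2.1 checkpoints; the authors' own repository exposes a different API, and the pipeline names, model ids, and settings here are diffusers conventions that may vary across library versions.

```python
# Two-stage Kandinsky inference via the Hugging Face diffusers port
# (pipeline names and model ids are diffusers conventions, not the paper's):
# stage 1 predicts a CLIP image embedding from the prompt (the image prior),
# stage 2 runs latent diffusion conditioned on it and decodes with MoVQ.
import torch
from diffusers import KandinskyPriorPipeline, KandinskyPipeline

pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a lighthouse at dawn"

# Stage 1: the separately trained image prior maps the CLIP text
# embedding of the prompt to a predicted CLIP image embedding.
image_embeds, negative_image_embeds = pipe_prior(prompt).to_tuple()

# Stage 2: the latent diffusion model denoises in the autoencoder's
# latent space conditioned on that embedding, and the (modified MoVQ)
# decoder maps the final latents back to pixels.
image = pipe(
    prompt,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=512, width=512, num_inference_steps=50,
).images[0]
image.save("lighthouse.png")
```

Keeping the prior and the decoder as separate stages is what enables the other demo modes the summary lists: image fusion and image variation can mix or resample CLIP image embeddings before handing them to the same latent diffusion decoder.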