Stable Walk: An interactive environment for exploring Stable Diffusion outputs
Main Authors: | , |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | |
Online Access: | Full text |
Summary: | The past year has seen rapid advances in text-to-image models. Several models were released, and services were made available that let users generate images. These have become popular because, without special training, the models can generate images from a simple text prompt. However, the parameter space of these models goes beyond the text prompt, and skilled users can fine-tune the output of the models using these parameters. In this work we present ongoing work on a tool for exploring the parameter space of Stable Diffusion. The aim of the tool is to make it possible to explore this parameter space visually. In particular, we present a novel way of exploring the text embedding space by allowing users to combine several prompts. |
ISSN: | 1613-0073 |
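The abstract does not spell out how several prompts are combined, so the following is only a minimal sketch of one plausible approach, not the paper's actual method: linearly interpolating the CLIP text embeddings of two prompts and passing the blend to a Stable Diffusion pipeline via the Hugging Face diffusers library. The model ID, example prompts, and mixing weight are illustrative assumptions, and passing `prompt_embeds` directly requires a reasonably recent diffusers release.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model choice; the paper does not specify which checkpoint is used.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(prompt: str) -> torch.Tensor:
    """Encode a prompt with the pipeline's CLIP text encoder."""
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]  # last hidden state, shape (1, 77, 768)

# Hypothetical prompts; blend them by interpolating in the text embedding space.
e1 = embed("a watercolor painting of a mountain landscape")
e2 = embed("a futuristic city at night, neon lights")
alpha = 0.5  # assumed mixing weight between the two prompts
blended = torch.lerp(e1, e2, alpha)

# Generate an image from the blended embedding instead of a single text prompt.
image = pipe(prompt_embeds=blended, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("blended.png")
```

Sweeping `alpha` from 0 to 1 while keeping the random seed fixed would yield a sequence of images that drifts from one prompt toward the other, which is the kind of visual exploration of the embedding space the abstract describes.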