TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation
Format: Article
Language: English
Abstract: Despite significant advancements in customizing text-to-image and video generation models, generating images and videos that effectively integrate multiple personalized concepts remains a challenging task. To address this, we present TweedieMix, a novel method for composing customized diffusion models during the inference phase. By analyzing the properties of reverse diffusion sampling, our approach divides the sampling process into two stages. During the initial steps, we apply a multiple object-aware sampling technique to ensure the inclusion of the desired target objects. In the later steps, we blend the appearances of the custom concepts in the denoised image space using Tweedie's formula. Our results demonstrate that TweedieMix can generate multiple personalized concepts with higher fidelity than existing methods. Moreover, our framework can be effortlessly extended to image-to-video diffusion models, enabling the generation of videos that feature multiple personalized concepts. Results and source code are available on our anonymous project page.
DOI: 10.48550/arxiv.2410.05591
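
As a rough illustration of the late-stage blending idea described in the abstract, the PyTorch sketch below applies Tweedie's formula to obtain a clean-image estimate per concept and mixes the estimates before re-noising. It is a minimal sketch under stated assumptions, not the authors' implementation: the helper names (tweedie_x0_estimate, blended_late_step), the per-concept spatial masks, and the DDIM-style re-noising step are all hypothetical choices for illustration.

import torch

def tweedie_x0_estimate(x_t, eps_pred, a_bar_t):
    # Tweedie's formula for an epsilon-prediction diffusion model:
    # E[x0 | x_t] = (x_t - sqrt(1 - a_bar_t) * eps_pred) / sqrt(a_bar_t)
    return (x_t - torch.sqrt(1.0 - a_bar_t) * eps_pred) / torch.sqrt(a_bar_t)

def blended_late_step(x_t, t, a_bar, concept_models, masks):
    # a_bar: 1-D tensor of cumulative alphas; concept_models: one noise
    # predictor per custom concept; masks: per-concept spatial masks that
    # sum to 1 (hypothetical inputs, e.g. from the earlier object-aware stage).
    x0_blend = torch.zeros_like(x_t)
    for model, mask in zip(concept_models, masks):
        eps = model(x_t, t)
        x0_blend = x0_blend + mask * tweedie_x0_estimate(x_t, eps, a_bar[t])
    # Map the blended clean-image estimate back to the previous noise level
    # with a deterministic DDIM-style update (eta = 0).
    eps_blend = (x_t - torch.sqrt(a_bar[t]) * x0_blend) / torch.sqrt(1.0 - a_bar[t])
    return torch.sqrt(a_bar[t - 1]) * x0_blend + torch.sqrt(1.0 - a_bar[t - 1]) * eps_blend

Blending in the denoised (x0) space rather than in the noisy latent keeps each concept's appearance coherent, since Tweedie's formula gives the posterior-mean clean image at every step.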