DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization
Saved in:
Main Authors:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Abstract: The objective of text-to-image (T2I) personalization is to customize a diffusion model to a user-provided reference concept, generating diverse images of the concept aligned with the target prompts. Conventional methods representing the reference concepts using unique text embeddings often fail to accurately mimic the appearance of the reference. To address this, one solution may be explicitly conditioning the reference images into the target denoising process, known as key-value replacement. However, prior works are constrained to local editing since they disrupt the structure path of the pre-trained T2I model. To overcome this, we propose a novel plug-in method, called DreamMatcher, which reformulates T2I personalization as semantic matching. Specifically, DreamMatcher replaces the target values with reference values aligned by semantic matching, while leaving the structure path unchanged to preserve the versatile capability of pre-trained T2I models for generating diverse structures. We also introduce a semantic-consistent masking strategy to isolate the personalized concept from irrelevant regions introduced by the target prompts. Compatible with existing T2I models, DreamMatcher shows significant improvements in complex scenarios. Intensive analyses demonstrate the effectiveness of our approach.
DOI: 10.48550/arxiv.2402.09812
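
The abstract above describes the core mechanism at a high level: keep the structure path (queries and keys) of the pre-trained self-attention untouched, and replace only the value path with reference values that have been aligned to the target layout via semantic matching, optionally restricted to the concept region by a mask. The following is a minimal PyTorch-style sketch of that idea, not the authors' implementation; the function name, tensor shapes, the cosine-similarity matching, and the mask handling are illustrative assumptions.

```python
# Hypothetical sketch of appearance-matching self-attention as described in the
# abstract; shapes and the matching procedure are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def appearance_matching_self_attention(q_tgt, k_tgt, v_tgt, v_ref,
                                        f_tgt, f_ref, mask=None):
    """Self-attention where the structure path (queries/keys) stays unchanged
    and the value path uses reference values aligned by semantic matching.

    q_tgt, k_tgt, v_tgt, v_ref: (N, d) attention inputs at one layer
    f_tgt, f_ref:               (N, c) features used for dense semantic matching
    mask:                       (N,)   optional mask isolating the concept region
    """
    # 1) Dense semantic matching: cosine similarity between target and
    #    reference features; pick the best reference match per target token.
    sim = F.normalize(f_tgt, dim=-1) @ F.normalize(f_ref, dim=-1).T   # (N, N)
    match_idx = sim.argmax(dim=-1)                                    # (N,)

    # 2) Warp reference values into the target layout ("aligned reference values").
    v_aligned = v_ref[match_idx]                                      # (N, d)

    # 3) Blend only inside the concept region so that background content
    #    generated by the target prompt is left untouched.
    if mask is not None:
        v_new = torch.where(mask.unsqueeze(-1).bool(), v_aligned, v_tgt)
    else:
        v_new = v_aligned

    # 4) Standard attention with the ORIGINAL queries/keys (structure path)
    #    and the replaced values (appearance path).
    attn = torch.softmax(q_tgt @ k_tgt.T / q_tgt.shape[-1] ** 0.5, dim=-1)
    return attn @ v_new


# Toy usage with random tensors (N = 16 tokens, d = 8, c = 4):
# out = appearance_matching_self_attention(
#     torch.randn(16, 8), torch.randn(16, 8), torch.randn(16, 8),
#     torch.randn(16, 8), torch.randn(16, 4), torch.randn(16, 4),
#     mask=torch.randint(0, 2, (16,)))
```

Because the queries and keys are left as-is, the layout produced by the pre-trained T2I model for the target prompt is preserved, which is what distinguishes this formulation from plain key-value replacement that also overwrites the structure path.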