Magic Insert: Style-Aware Drag-and-Drop
Saved in:
Main Authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: We present Magic Insert, a method for dragging-and-dropping subjects from a user-provided image into a target image of a different style in a physically plausible manner while matching the style of the target image. This work formalizes the problem of style-aware drag-and-drop and presents a method for tackling it by addressing two sub-problems: style-aware personalization and realistic object insertion in stylized images. For style-aware personalization, our method first fine-tunes a pretrained text-to-image diffusion model using LoRA and learned text tokens on the subject image, and then infuses it with a CLIP representation of the target style. For object insertion, we use Bootstrapped Domain Adaption to adapt a domain-specific photorealistic object insertion model to the domain of diverse artistic styles. Overall, the method significantly outperforms traditional approaches such as inpainting. Finally, we present a dataset, SubjectPlop, to facilitate evaluation and future progress in this area. Project page: https://magicinsert.github.io/
DOI: 10.48550/arxiv.2407.02489
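To make the two sub-problems named in the abstract more concrete, the sketches below illustrate them in plain PyTorch. They are rough interpretations based only on the abstract, not the authors' implementation; every name in them (LoRALinear, subject_token, style_proj, stylized_conditioning) is a hypothetical stand-in. The first sketch shows the style-aware personalization ingredients: a LoRA adapter around a frozen pretrained layer, a learned subject token, and a projection that injects a CLIP embedding of the target style into the text conditioning.

```python
# Conceptual sketch of the style-aware personalization recipe from the abstract:
# LoRA adapters plus a learned text token are fit on the subject image, and a
# CLIP embedding of the target style is injected into the conditioning.
# Names and shapes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a low-rank trainable update (LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # keep pretrained weights frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # start as an identity update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# A learned token embedding for the subject, trained jointly with the LoRA
# weights on denoising losses over the subject image.
token_dim = 768                              # CLIP text embedding width (assumed)
subject_token = nn.Parameter(torch.randn(1, token_dim) * 0.02)

# At generation time, a CLIP embedding of the target image's style is projected
# and appended to the conditioning, so the personalized subject is rendered in
# that style (the "style infusion" step described in the abstract).
style_proj = nn.Linear(token_dim, token_dim)

def stylized_conditioning(text_embeds: torch.Tensor,
                          clip_style_embed: torch.Tensor) -> torch.Tensor:
    """Append the learned subject token and a projected style embedding."""
    style = style_proj(clip_style_embed).unsqueeze(1)               # (B, 1, D)
    subj = subject_token.expand(text_embeds.shape[0], -1).unsqueeze(1)
    return torch.cat([text_embeds, subj, style], dim=1)
```

The second sketch outlines the Bootstrapped Domain Adaption loop suggested by the abstract: a photorealistic insertion model labels stylized scenes with its own outputs, the confident results are kept, and the model is fine-tuned on them. Here insertion_model, quality_score, and finetune are placeholder callables under assumed interfaces, not a real API.

```python
# Rough sketch of bootstrapped domain adaptation: the insertion model generates
# composites on stylized backgrounds, high-quality results are kept as
# pseudo-labels, and the model is fine-tuned on them for the next round.
from typing import Callable, List, Tuple

def bootstrap_domain_adaptation(
    insertion_model: Callable,                   # (background, subject) -> composite
    quality_score: Callable,                     # composite -> float in [0, 1]
    finetune: Callable,                          # (model, pairs) -> adapted model
    stylized_backgrounds: List,
    subjects: List,
    rounds: int = 3,
    keep_threshold: float = 0.8,
):
    """Iteratively self-label stylized data and fine-tune the insertion model."""
    model = insertion_model
    for _ in range(rounds):
        accepted: List[Tuple] = []
        for bg in stylized_backgrounds:
            for subj in subjects:
                composite = model(bg, subj)
                if quality_score(composite) >= keep_threshold:
                    # Keep (input, output) pairs the current model handled well.
                    accepted.append(((bg, subj), composite))
        if not accepted:
            break                                 # nothing confident enough to learn from
        model = finetune(model, accepted)         # adapt toward the stylized domain
    return model
```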