LIPE: Learning Personalized Identity Prior for Non-rigid Image Editing
Saved in:

Main authors: , , , ,
Format: Article
Language: eng
Keywords:
Online access: Order full text
Summary: Although recent years have witnessed significant advancements in
image editing thanks to the remarkable progress of text-to-image diffusion
models, non-rigid image editing remains complex and challenging. Existing
methods often fail to achieve consistent results due to the absence of unique
identity characteristics. Learning a personalized identity prior can therefore
help maintain consistency in the edited results. In this paper, we explore a
novel task: learning a personalized identity prior for text-based non-rigid
image editing. To address the difficulties of jointly learning the prior and
editing the image, we present LIPE, a two-stage framework that customizes the
generative model using a limited set of images of the same subject and then
employs the model with the learned prior for non-rigid image editing.
Experimental results demonstrate the advantages of our approach over leading
prior methods across various editing scenarios, both qualitatively and
quantitatively.
DOI: 10.48550/arxiv.2406.17236