Coherent and Multi-modality Image Inpainting via Latent Space Optimization
Format: Article
Language: English
Abstract: With the advancements in denoising diffusion probabilistic models (DDPMs), image inpainting has evolved significantly, from merely filling in information based on nearby regions to generating content conditioned on various prompts such as text, exemplar images, and sketches. However, existing methods, such as model fine-tuning and simple concatenation of latent vectors, often result in generation failures due to overfitting and inconsistency between the inpainted region and the background. In this paper, we argue that current large diffusion models are sufficiently powerful to generate realistic images without further tuning. Hence, we introduce PILOT (inPainting via Latent OpTimization), an optimization approach grounded in a novel semantic centralization and background preservation loss. Our method searches the latent space for codes that generate inpainted regions with high fidelity to user-provided prompts while maintaining coherence with the background. Furthermore, we propose a strategy to balance optimization cost and image quality, significantly improving generation efficiency. Our method integrates seamlessly with any pre-trained model, including ControlNet and DreamBooth, making it suitable for deployment in multi-modal editing tools. Our qualitative and quantitative evaluations demonstrate that PILOT outperforms existing approaches, generating more coherent, diverse, and faithful inpainted regions in response to the provided prompts.
DOI: 10.48550/arxiv.2407.08019
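
To make the core idea concrete, below is a minimal sketch of latent-space optimization for inpainting under a background preservation term. It is illustrative only and not PILOT's actual implementation: `decode` (a pre-trained latent-to-image decoder) and `prompt_alignment` (a differentiable prompt-fidelity score, e.g. a CLIP-style similarity) are hypothetical stand-ins, and the paper's semantic centralization loss and cost/quality balancing strategy are not reproduced here.

```python
# Minimal sketch: optimize a latent code so the decoded image matches the
# prompt inside the mask while preserving the original background outside it.
# `decode` and `prompt_alignment` are hypothetical placeholders, not PILOT's API.
import torch

def optimize_latent(z_init, image, mask, decode, prompt_alignment,
                    steps=200, lr=0.05, bg_weight=10.0):
    """Search the latent space for a code whose decoded image is faithful to
    the prompt inside the mask and identical to `image` outside it.

    z_init:           starting latent tensor
    image:            original image, shape (B, C, H, W)
    mask:             1 inside the region to inpaint, 0 elsewhere
    decode:           differentiable latent -> image function (frozen decoder)
    prompt_alignment: differentiable prompt-fidelity score (higher = better)
    """
    z = z_init.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        out = decode(z)
        # Background preservation: penalize any deviation outside the mask.
        bg_loss = ((1 - mask) * (out - image)).pow(2).mean()
        # Prompt fidelity: reward alignment of the inpainted region.
        fid_loss = -prompt_alignment(out * mask)
        loss = bg_weight * bg_loss + fid_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```

The design point the abstract emphasizes is visible in this sketch: the pre-trained decoder stays frozen and only the latent code is updated, so coherence between the inpainted region and the untouched background is obtained without fine-tuning the model.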