LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model
Format: | Article |
Language: | English |
Abstract: | Despite the success of diffusion-based generative models at producing
high-quality images from arbitrary text prompts, prior works generate the entire
image directly and cannot provide object-wise manipulation. To support wider
real-world applications such as professional graphic design and digital artistry,
images are frequently created and manipulated in multiple layers to offer greater
flexibility and control. In this paper, we therefore propose a layer-collaborative
diffusion model, named LayerDiff, specifically designed for text-guided,
multi-layered, composable image synthesis. The composable image consists of a
background layer, a set of foreground layers, and an associated mask layer for
each foreground element. To enable this, LayerDiff introduces a layer-based
generation paradigm incorporating multiple layer-collaborative attention modules
to capture inter-layer patterns. Specifically, an inter-layer attention module
encourages information exchange and learning between layers, while a text-guided
intra-layer attention module incorporates layer-specific prompts to direct the
generation of each layer's content. A layer-specific prompt-enhancement module
better captures detailed textual cues from the global prompt. Additionally, a
self-mask guidance sampling strategy further unleashes the model's ability to
generate multi-layered images. We also present a pipeline that integrates existing
perceptual and generative models to produce a large dataset of high-quality,
text-prompted, multi-layered images. Extensive experiments demonstrate that
LayerDiff generates high-quality multi-layered images with performance comparable
to conventional whole-image generation methods. Moreover, LayerDiff enables a
broader range of controllable generative applications, including layer-specific
image editing and style transfer. |
DOI: | 10.48550/arxiv.2403.11929 |
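
The abstract describes two attention mechanisms: an inter-layer attention module that lets the background, foreground, and mask layers exchange information, and a text-guided intra-layer attention module that injects a layer-specific prompt into each layer. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that idea, and all module names, tensor shapes, layer counts, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code) of a layer-collaborative
# attention block: inter-layer attention across layers, then text-guided
# intra-layer cross-attention with per-layer prompts.
import torch
import torch.nn as nn


class LayerCollaborativeAttention(nn.Module):
    def __init__(self, dim: int = 320, num_heads: int = 8, text_dim: int = 768):
        super().__init__()
        # Inter-layer attention: tokens at the same spatial location attend
        # across the L layers so the layers can coordinate their content.
        self.inter_layer_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Intra-layer attention: each layer's tokens attend to the embedding
        # of its own layer-specific text prompt.
        self.intra_layer_attn = nn.MultiheadAttention(
            dim, num_heads, kdim=text_dim, vdim=text_dim, batch_first=True
        )
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, layer_feats: torch.Tensor, layer_prompts: torch.Tensor) -> torch.Tensor:
        # layer_feats:   (B, L, HW, C)  latent features of L image/mask layers
        # layer_prompts: (B, L, T, D)   text embeddings of L layer-specific prompts
        B, L, HW, C = layer_feats.shape

        # Inter-layer attention: attend across layers for each spatial token.
        x = layer_feats.permute(0, 2, 1, 3).reshape(B * HW, L, C)
        h = self.norm1(x)
        h, _ = self.inter_layer_attn(h, h, h)
        x = (x + h).reshape(B, HW, L, C).permute(0, 2, 1, 3)   # back to (B, L, HW, C)

        # Intra-layer attention: each layer attends to its own prompt tokens.
        q = self.norm2(x).reshape(B * L, HW, C)
        ctx = layer_prompts.reshape(B * L, -1, layer_prompts.shape[-1])
        h, _ = self.intra_layer_attn(q, ctx, ctx)
        return x + h.reshape(B, L, HW, C)


if __name__ == "__main__":
    block = LayerCollaborativeAttention()
    feats = torch.randn(2, 4, 64 * 64, 320)   # e.g. background + 3 foreground layers
    prompts = torch.randn(2, 4, 77, 768)      # per-layer CLIP-style text embeddings
    print(block(feats, prompts).shape)        # torch.Size([2, 4, 4096, 320])
```

In this reading, such a block would replace or augment the attention blocks inside a diffusion U-Net so that all layers are denoised jointly; how the real LayerDiff wires these modules, and how the layer-specific prompt-enhancement and self-mask guidance sampling are realized, is detailed in the paper itself.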