Improving Long-Text Alignment for Text-to-Image Diffusion Models
Format: Article
Language: English
Abstract: The rapid advancement of text-to-image (T2I) diffusion models has enabled
them to generate unprecedented results from given texts. However, as text
inputs become longer, existing encoding methods like CLIP face limitations, and
aligning the generated images with long texts becomes challenging. To tackle
these issues, we propose LongAlign, which includes a segment-level encoding
method for processing long texts and a decomposed preference optimization
method for effective alignment training. For segment-level encoding, long texts
are divided into multiple segments and processed separately. This method
overcomes the maximum input length limits of pretrained encoding models. For
preference optimization, we provide decomposed CLIP-based preference models to
fine-tune diffusion models. Specifically, to utilize CLIP-based preference
models for T2I alignment, we delve into their scoring mechanisms and find that
the preference scores can be decomposed into two components: a text-relevant
part that measures T2I alignment and a text-irrelevant part that assesses other
visual aspects of human preference. Additionally, we find that the
text-irrelevant part contributes to a common overfitting problem during
fine-tuning. To address this, we propose a reweighting strategy that assigns
different weights to these two components, thereby reducing overfitting and
enhancing alignment. After fine-tuning $512 \times 512$ Stable Diffusion (SD)
v1.5 for about 20 hours using our method, the fine-tuned SD outperforms
stronger foundation models in T2I alignment, such as PixArt-$\alpha$ and
Kandinsky v2.2. The code is available at
https://github.com/luping-liu/LongAlign.
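
The segment-level encoding described in the abstract splits a long prompt so that each piece fits within the text encoder's input limit (77 tokens for the CLIP encoder used by SD v1.5) and encodes the pieces separately. The abstract does not specify how segments are chosen or how their embeddings are merged, so the following is a minimal sketch assuming sentence-level splitting and concatenation of per-token hidden states; the `encode_long_prompt` helper is an illustrative name, not the paper's API.

```python
# Minimal sketch of segment-level encoding for long prompts, assuming a
# CLIP text encoder with a 77-token limit (as in SD v1.5). Sentence-level
# splitting and sequence-dimension concatenation are assumptions; the
# paper may segment and merge differently.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_prompt(prompt: str, max_len: int = 77) -> torch.Tensor:
    """Encode a long prompt by splitting it into segments that each fit
    the encoder's maximum input length, then concatenating the per-token
    hidden states along the sequence dimension."""
    # Split into sentence-level segments (a simple heuristic).
    segments = [s.strip() + "." for s in prompt.split(".") if s.strip()]
    hidden_states = []
    for seg in segments:
        tokens = tokenizer(
            seg, truncation=True, max_length=max_len,
            padding="max_length", return_tensors="pt",
        )
        with torch.no_grad():
            out = text_encoder(**tokens)
        hidden_states.append(out.last_hidden_state)  # (1, 77, 768)
    # Concatenate segment embeddings so cross-attention can condition on
    # the full long text: shape (1, 77 * n_segments, 768).
    return torch.cat(hidden_states, dim=1)
```

This sidesteps the encoder's maximum input length because each segment is encoded independently; the diffusion model's cross-attention then attends over the concatenated sequence.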
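For the decomposed preference optimization, the abstract states only that the CLIP-based preference score splits into a text-relevant part (measuring T2I alignment) and a text-irrelevant part (other visual aspects), and that reweighting the two reduces overfitting. One plausible, hypothetical realization projects the image embedding onto the prompt direction and treats the orthogonal remainder as the text-irrelevant component; `decomposed_preference_score`, `w_rel`, and `w_irr` below are illustrative names and values, not the paper's exact formulation.

```python
# Hedged sketch of a decomposed CLIP-style preference score. The
# projection-based decomposition is an assumption made for illustration.
import torch
import torch.nn.functional as F

def decomposed_preference_score(
    img_emb: torch.Tensor,   # (d,) image embedding from a CLIP-based preference model
    txt_emb: torch.Tensor,   # (d,) text embedding of the prompt
    w_rel: float = 1.0,      # weight on the text-relevant component
    w_irr: float = 0.1,      # down-weighted text-irrelevant component
) -> torch.Tensor:
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    # Text-relevant part: component of the image embedding along the
    # prompt direction (measures T2I alignment).
    s_rel = (img * txt).sum()
    # Text-irrelevant part: magnitude of the orthogonal remainder
    # (captures prompt-independent visual aspects).
    residual = img - s_rel * txt
    s_irr = residual.norm()
    # Reweighting: emphasize alignment, damp the component the abstract
    # identifies as the source of overfitting during fine-tuning.
    return w_rel * s_rel + w_irr * s_irr
```

Down-weighting `w_irr` relative to `w_rel` reflects the abstract's finding that the text-irrelevant component drives overfitting during preference fine-tuning, so the reweighted score steers optimization toward alignment.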
DOI: 10.48550/arxiv.2410.11817