Learning to Follow Object-Centric Image Editing Instructions Faithfully
Saved in:
Main Authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Natural language instructions are a powerful interface for editing the
outputs of text-to-image diffusion models. However, several challenges need to
be addressed: 1) underspecification (the need to model the implicit meaning of
instructions), 2) grounding (the need to localize where the edit has to be
performed), 3) faithfulness (the need to preserve the elements of the image not
affected by the edit instruction). Current approaches focusing on image editing
with natural language instructions rely on automatically generated paired data,
which, as shown in our investigation, is noisy and sometimes nonsensical,
exacerbating the above issues. Building on recent advances in segmentation,
Chain-of-Thought prompting, and visual question answering, we significantly
improve the quality of the paired data. In addition, we enhance the supervision
signal by highlighting parts of the image that need to be changed by the
instruction. The model fine-tuned on the improved data is capable of performing
fine-grained object-centric edits better than state-of-the-art baselines,
mitigating the problems outlined above, as shown by automatic and human
evaluations. Moreover, our model is capable of generalizing to domains unseen
during training, such as visual metaphors. |
---|---|
DOI: | 10.48550/arxiv.2310.19145 |