Object-Attribute Binding in Text-to-Image Generation: Evaluation and Control
Current diffusion models create photorealistic images given a text prompt as input, but struggle to correctly bind attributes mentioned in the text to the right objects in the image. This is evidenced by our novel image-graph alignment model called EPViT (Edge Prediction Vision Transformer) for the evaluation of image-text alignment. To alleviate the above problem, we propose focused cross-attention (FCA), which controls the visual attention maps via syntactic constraints found in the input sentence. Additionally, the syntax structure of the prompt helps to disentangle the multimodal CLIP embeddings that are commonly used in T2I generation. The resulting DisCLIP embeddings and FCA are easily integrated into state-of-the-art diffusion models without additional training of these models. We show substantial improvements in T2I generation, and especially in its attribute-object binding, on several datasets. Code and data will be made available upon acceptance.
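The abstract describes focused cross-attention (FCA) only at a high level, and the paper's implementation is not part of this record. The sketch below is a minimal, hypothetical illustration of the general idea of syntax-constrained cross-attention in a diffusion model's text-conditioning layers: each attribute token is only allowed to attend at the image locations where its syntactic head noun (e.g., obtained from a dependency parse of the prompt) attends most strongly. The function name `focused_cross_attention`, the `head_noun_of` mapping, and the hard top-k masking rule are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of syntax-constrained cross-attention (not the paper's FCA code).
# Assumption: an attribute token may only attend at pixels where its head noun,
# taken from a dependency parse of the prompt, attends most strongly.
import torch

def focused_cross_attention(q, k, v, head_noun_of, keep_ratio=0.3):
    """q: (pixels, d) image queries; k, v: (tokens, d) text keys/values.
    head_noun_of: {attribute token index: head noun token index}, hypothetical."""
    scale = q.shape[-1] ** -0.5
    logits = q @ k.t() * scale               # (pixels, tokens) attention logits
    probs = logits.softmax(dim=-1)           # per-pixel distribution over tokens

    mask = torch.zeros_like(logits, dtype=torch.bool)
    k_keep = max(1, int(keep_ratio * q.shape[0]))
    for attr_tok, noun_tok in head_noun_of.items():
        # Pixels where the head noun's attention map is strongest.
        top = probs[:, noun_tok].topk(k_keep).indices
        mask[:, attr_tok] = True             # block the attribute everywhere...
        mask[top, attr_tok] = False          # ...except inside the noun's region
    logits = logits.masked_fill(mask, float("-inf"))
    return logits.softmax(dim=-1) @ v        # (pixels, d) attended text features

# Toy usage: for "a red car", bind the attribute "red" (token 1) to "car" (token 2).
q, k, v = torch.randn(64, 32), torch.randn(4, 32), torch.randn(4, 32)
out = focused_cross_attention(q, k, v, head_noun_of={1: 2})
```

A hard top-k mask is just one way to impose such focus; a soft reweighting of the logits would serve the same purpose, and the same parse could in principle drive a DisCLIP-style decomposition of the prompt encoding into per-phrase embeddings.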
Saved in:

| Field | Value |
|---|---|
| Main authors | Trusca, Maria Mihaela; Nuyts, Wolf; Thomm, Jonathan; Honig, Robert; Hofmann, Thomas; Tuytelaars, Tinne; Moens, Marie-Francine |
| Format | Article |
| Language | English |
| Subjects | Computer Science - Computer Vision and Pattern Recognition |
| Online access | Order full text |
| Field | Value |
|---|---|
| creator | Trusca, Maria Mihaela; Nuyts, Wolf; Thomm, Jonathan; Honig, Robert; Hofmann, Thomas; Tuytelaars, Tinne; Moens, Marie-Francine |
| date | 2024-04-21 |
| doi | 10.48550/arxiv.2404.13766 |
| format | Article |
| fulltext | fulltext_linktorsrc |
| identifier | DOI: 10.48550/arxiv.2404.13766 |
| language | eng |
| recordid | cdi_arxiv_primary_2404_13766 |
| rights | http://creativecommons.org/licenses/by/4.0 |
| source | arXiv.org |
| subjects | Computer Science - Computer Vision and Pattern Recognition |
| title | Object-Attribute Binding in Text-to-Image Generation: Evaluation and Control |
| url | https://arxiv.org/abs/2404.13766 |