Localizing Object-level Shape Variations with Text-to-Image Diffusion Models
Text-to-image models give rise to workflows which often begin with an exploration step, where users sift through a large collection of generated images. The global nature of the text-to-image generation process prevents users from narrowing their exploration to a particular object in the image. In this paper, we present a technique to generate a collection of images that depicts variations in the shape of a specific object, enabling an object-level shape exploration process. Creating plausible variations is challenging as it requires control over the shape of the generated object while respecting its semantics. A particular challenge when generating object variations is accurately localizing the manipulation applied over the object's shape. We introduce a prompt-mixing technique that switches between prompts along the denoising process to attain a variety of shape choices. To localize the image-space operation, we present two techniques that use the self-attention layers in conjunction with the cross-attention layers. Moreover, we show that these localization techniques are general and effective beyond the scope of generating object variations. Extensive results and comparisons demonstrate the effectiveness of our method in generating object variations, and the competence of our localization techniques.
Saved in:
Published in: | arXiv.org 2023-08 |
---|---|
Main authors: | Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, Daniel Cohen-Or |
Format: | Article |
Language: | eng |
Subjects: | Image processing; Localization; Semantics; Switches |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Or Patashnik; Garibi, Daniel; Idan Azuri; Averbuch-Elor, Hadar; Cohen-Or, Daniel |
description | Text-to-image models give rise to workflows which often begin with an exploration step, where users sift through a large collection of generated images. The global nature of the text-to-image generation process prevents users from narrowing their exploration to a particular object in the image. In this paper, we present a technique to generate a collection of images that depicts variations in the shape of a specific object, enabling an object-level shape exploration process. Creating plausible variations is challenging as it requires control over the shape of the generated object while respecting its semantics. A particular challenge when generating object variations is accurately localizing the manipulation applied over the object's shape. We introduce a prompt-mixing technique that switches between prompts along the denoising process to attain a variety of shape choices. To localize the image-space operation, we present two techniques that use the self-attention layers in conjunction with the cross-attention layers. Moreover, we show that these localization techniques are general and effective beyond the scope of generating object variations. Extensive results and comparisons demonstrate the effectiveness of our method in generating object variations, and the competence of our localization techniques. |
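The abstract describes a prompt-mixing technique that switches between prompts along the denoising process. The following is a minimal, hypothetical sketch of such a switching schedule: early denoising steps (which determine coarse layout and shape) are conditioned on one prompt, and later steps switch to another. The function name, step counts, and prompt strings are illustrative assumptions, not the paper's actual parameters or implementation.

```python
# Hypothetical sketch of a prompt-switching schedule over T denoising steps.
# Steps before `switch_step` are conditioned on `prompt_a`; the remaining
# steps are conditioned on `prompt_b`. In a real diffusion pipeline, the
# returned prompt at each step would be encoded and fed to the denoiser
# as its text conditioning.

def prompt_schedule(T, switch_step, prompt_a, prompt_b):
    """Return the conditioning prompt for each of the T denoising steps."""
    return [prompt_a if t < switch_step else prompt_b for t in range(T)]

schedule = prompt_schedule(
    T=50,
    switch_step=15,
    prompt_a="a photo of a round object on a table",
    prompt_b="a photo of a teapot on a table",
)
print(schedule[0])   # early step: first prompt (coarse shape)
print(schedule[-1])  # late step: second prompt (object details)
```

Varying `prompt_a` (or the switch point) across generations would yield different shape choices for the same object, which is the exploration behavior the abstract describes; the paper's actual method additionally uses self- and cross-attention layers to localize the change to the target object.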
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2789002010 |
source | Free E-Journals |
subjects | Image processing; Localization; Semantics; Switches |
title | Localizing Object-level Shape Variations with Text-to-Image Diffusion Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-15T01%3A16%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Localizing%20Object-level%20Shape%20Variations%20with%20Text-to-Image%20Diffusion%20Models&rft.jtitle=arXiv.org&rft.au=Or%20Patashnik&rft.date=2023-08-13&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2789002010%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2789002010&rft_id=info:pmid/&rfr_iscdi=true |