DesignEdit: Multi-Layered Latent Decomposition and Fusion for Unified & Accurate Image Editing


Bibliographic Details
Published in: arXiv.org, 2024-03
Main authors: Jia, Yueru; Yuan, Yuhui; Cheng, Aosong; Wang, Chuke; Li, Ji; Jia, Huizhu; Zhang, Shanghang
Format: Article
Language: English
Online access: Full text
Abstract: Precise image editing has recently attracted increasing attention, especially given the remarkable success of text-to-image generation models. To unify various spatial-aware image editing abilities into one framework, we adopt the concept of layers from the design domain to manipulate objects flexibly with various operations. The key insight is to transform the spatial-aware image editing task into a combination of two sub-tasks: multi-layered latent decomposition and multi-layered latent fusion. First, we segment the latent representations of the source images into multiple layers, which include several object layers and one incomplete background layer that necessitates reliable inpainting. To avoid extra tuning, we further explore the inherent inpainting ability within the self-attention mechanism. We introduce a key-masking self-attention scheme that propagates the surrounding context information into the masked region while mitigating its impact on the regions outside the mask. Second, we propose an instruction-guided latent fusion that pastes the multi-layered latent representations onto a canvas latent. We also introduce an artifact suppression scheme in the latent space to enhance the inpainting quality. Due to the inherent modular advantages of such multi-layered representations, we can achieve accurate image editing, and we demonstrate that our approach consistently surpasses the latest spatial editing methods, including Self-Guidance and DiffEditor. Finally, we show that our approach is a unified framework supporting more than six different accurate image editing tasks.
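The key-masking self-attention scheme described in the abstract can be illustrated with a minimal single-head sketch. This is not the authors' implementation; the function name, the plain NumPy setup, and the use of a large negative logit bias are assumptions made for illustration of the idea: keys inside the masked region are suppressed, so context flows into the hole while the unreliable hole content cannot leak back out.

```python
import numpy as np

def key_masking_self_attention(q, k, v, hole_mask):
    """Illustrative single-head attention with key masking.

    q, k, v: (n, d) token features.
    hole_mask: (n,) boolean, True for tokens inside the masked
    (to-be-inpainted) region.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)           # (n, n) attention logits
    logits[:, hole_mask] = -1e9             # suppress hole keys for ALL queries
    # numerically stable softmax over the key axis
    logits -= logits.max(axis=-1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v                             # every token mixes context values only
```

With this masking, tokens inside the hole attend exclusively to surrounding context, and tokens outside the hole ignore the hole's (meaningless) content, matching the two goals the abstract states for the scheme.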
EISSN: 2331-8422
Subjects: Decomposition; Editing; Image processing; Multilayers; Pastes; Representations