DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing
Accurate and controllable image editing is a challenging task that has attracted significant attention recently. Notably, DragGAN is an interactive point-based image editing framework that achieves impressive editing results with pixel-level precision. However, due to its reliance on generative adversarial networks (GANs), its generality is limited by the capacity of pretrained GAN models. In this work, we extend this editing framework to diffusion models and propose a novel approach DragDiffusion. By harnessing large-scale pretrained diffusion models, we greatly enhance the applicability of interactive point-based editing on both real and diffusion-generated images. Our approach involves optimizing the diffusion latents to achieve precise spatial control. The supervision signal of this optimization process is from the diffusion model's UNet features, which are known to contain rich semantic and geometric information. Moreover, we introduce two additional techniques, namely LoRA fine-tuning and latent-MasaCtrl, to further preserve the identity of the original image. Lastly, we present a challenging benchmark dataset called DragBench -- the first benchmark to evaluate the performance of interactive point-based image editing methods. Experiments across a wide range of challenging cases (e.g., images with multiple objects, diverse object categories, various styles, etc.) demonstrate the versatility and generality of DragDiffusion. Code: https://github.com/Yujun-Shi/DragDiffusion.
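The core idea described in the abstract is a latent-optimization loop: the diffusion latent is updated by gradient descent so that UNet features at the user's handle points move toward the target points. Below is a minimal, self-contained PyTorch sketch of that idea, not the authors' implementation: `feature_net` is a toy stand-in for the pretrained Stable Diffusion UNet, and the names `feature_at`, `handle`, and `target` are illustrative assumptions. See the paper's repository for the actual method.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for the diffusion UNet feature extractor. The real method
# pulls intermediate features from a pretrained Stable Diffusion UNet;
# this tiny conv net only makes the loop runnable end to end (assumption).
feature_net = torch.nn.Sequential(
    torch.nn.Conv2d(4, 32, 3, padding=1),
    torch.nn.SiLU(),
    torch.nn.Conv2d(32, 32, 3, padding=1),
)

def feature_at(feat: torch.Tensor, point: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample a feature vector at a sub-pixel (y, x) location."""
    h, w = feat.shape[-2:]
    # grid_sample expects normalized (x, y) coordinates in [-1, 1].
    gx = 2.0 * point[1] / (w - 1) - 1.0
    gy = 2.0 * point[0] / (h - 1) - 1.0
    grid = torch.stack([gx, gy]).view(1, 1, 1, 2).to(feat.dtype)
    return F.grid_sample(feat, grid, align_corners=True).flatten()

latent = torch.randn(1, 4, 64, 64, requires_grad=True)  # diffusion latent z_t
handle = torch.tensor([32.0, 20.0])  # current (y, x) of the dragged point
target = torch.tensor([32.0, 40.0])  # where the user wants it to go

optimizer = torch.optim.Adam([latent], lr=1e-2)
for _ in range(80):
    feat = feature_net(latent)
    # Motion supervision: make the features one small step toward the target
    # match the (detached) features currently at the handle point.
    step = F.normalize(target - handle, dim=0)
    loss = F.l1_loss(feature_at(feat, handle + step),
                     feature_at(feat, handle).detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# The full method also re-tracks the handle point after each update and
# repeats until it reaches the target; that point-tracking step is omitted.
```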
Saved in:
Published in: | arXiv.org 2024-04 |
---|---|
Main authors: | Shi, Yujun; Xue, Chuhui; Jun Hao Liew; Pan, Jiachun; Hanshu Yan; Zhang, Wenqing; Tan, Vincent Y F; Bai, Song |
Format: | Article |
Language: | eng |
Subjects: | Controllability; Editing; Generative adversarial networks; Interactive control; Iterative methods |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Shi, Yujun; Xue, Chuhui; Jun Hao Liew; Pan, Jiachun; Hanshu Yan; Zhang, Wenqing; Tan, Vincent Y F; Bai, Song |
description | Accurate and controllable image editing is a challenging task that has attracted significant attention recently. Notably, DragGAN is an interactive point-based image editing framework that achieves impressive editing results with pixel-level precision. However, due to its reliance on generative adversarial networks (GANs), its generality is limited by the capacity of pretrained GAN models. In this work, we extend this editing framework to diffusion models and propose a novel approach DragDiffusion. By harnessing large-scale pretrained diffusion models, we greatly enhance the applicability of interactive point-based editing on both real and diffusion-generated images. Our approach involves optimizing the diffusion latents to achieve precise spatial control. The supervision signal of this optimization process is from the diffusion model's UNet features, which are known to contain rich semantic and geometric information. Moreover, we introduce two additional techniques, namely LoRA fine-tuning and latent-MasaCtrl, to further preserve the identity of the original image. Lastly, we present a challenging benchmark dataset called DragBench -- the first benchmark to evaluate the performance of interactive point-based image editing methods. Experiments across a wide range of challenging cases (e.g., images with multiple objects, diverse object categories, various styles, etc.) demonstrate the versatility and generality of DragDiffusion. Code: https://github.com/Yujun-Shi/DragDiffusion. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2830496620 |
source | Free E-Journals |
subjects | Controllability; Editing; Generative adversarial networks; Interactive control; Iterative methods |
title | DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-23T16%3A15%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=DragDiffusion:%20Harnessing%20Diffusion%20Models%20for%20Interactive%20Point-based%20Image%20Editing&rft.jtitle=arXiv.org&rft.au=Shi,%20Yujun&rft.date=2024-04-07&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2830496620%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2830496620&rft_id=info:pmid/&rfr_iscdi=true |