Noise-Free Score Distillation

Score Distillation Sampling (SDS) has emerged as the de facto approach for text-to-content generation in non-image domains. In this paper, we reexamine the SDS process and introduce a straightforward interpretation that demystifies the necessity for large Classifier-Free Guidance (CFG) scales, rooted in the distillation of an undesired noise term. Building upon our interpretation, we propose a novel Noise-Free Score Distillation (NFSD) process, which requires minimal modifications to the original SDS framework. Through this streamlined design, we achieve more effective distillation of pre-trained text-to-image diffusion models while using a nominal CFG scale. This strategic choice allows us to prevent the over-smoothing of results, ensuring that the generated data is both realistic and complies with the desired prompt. To demonstrate the efficacy of NFSD, we provide qualitative examples that compare NFSD and SDS, as well as several other methods.
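As a rough illustration of the contrast the abstract draws, the sketch below compares an SDS-style gradient (which subtracts the sampled noise and relies on a large CFG scale) with an NFSD-style gradient (which replaces the noise term with an estimated domain direction). This is not the authors' code: `denoise` is a deterministic stub standing in for a pretrained diffusion model's noise prediction, and the CFG scales, noising, and timestep threshold are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))  # fixed random weights for the stub

def denoise(x_t, prompt, t):
    """Stub noise prediction eps_phi(x_t; prompt, t): a fixed linear map
    plus a prompt-dependent offset. Prompt handles: "cond" (text prompt),
    "uncond" (empty prompt), "neg" (a negative prompt). NOT a real model."""
    offset = {"cond": 1.0, "uncond": 0.0, "neg": -1.0}[prompt]
    return x_t @ W + offset

def sds_grad(x, t, scale=100.0, sigma=0.1):
    """SDS-style update: (CFG-boosted prediction) - (sampled noise eps).
    The subtracted random eps is the "undesired noise term"; in practice
    a large CFG `scale` is needed to drown it out."""
    eps = rng.standard_normal(x.shape)
    x_t = x + sigma * eps  # simplified forward noising
    eps_cfg = denoise(x_t, "uncond", t) + scale * (
        denoise(x_t, "cond", t) - denoise(x_t, "uncond", t)
    )
    return eps_cfg - eps

def nfsd_grad(x, t, scale=7.5, sigma=0.1, t_threshold=200):
    """NFSD-style update: drop the random-noise term. Estimate a domain
    direction delta_D from the unconditional prediction (at late timesteps,
    corrected with a negative prompt), then add the scaled condition
    direction delta_C at a nominal CFG scale. The threshold value is an
    illustrative assumption."""
    eps = rng.standard_normal(x.shape)
    x_t = x + sigma * eps
    delta_c = denoise(x_t, "cond", t) - denoise(x_t, "uncond", t)
    if t < t_threshold:
        delta_d = denoise(x_t, "uncond", t)
    else:
        delta_d = denoise(x_t, "uncond", t) - denoise(x_t, "neg", t)
    return delta_d + scale * delta_c
```

With the stub, the NFSD update no longer carries the sampled `eps`, so a modest scale (here 7.5 instead of 100) suffices; in the real method both gradients would be weighted by a timestep-dependent w(t) and backpropagated into a 3D representation.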

Bibliographic Details
Main authors: Katzir, Oren; Patashnik, Or; Cohen-Or, Daniel; Lischinski, Dani
Format: Article
Language: English
Online access: Order full text
DOI: 10.48550/arxiv.2310.17590
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition