Classifier-Free Guidance is a Predictor-Corrector
We investigate the theoretical foundations of classifier-free guidance (CFG). CFG is the dominant method of conditional sampling for text-to-image diffusion models, yet unlike other aspects of diffusion, it remains on shaky theoretical footing. In this paper, we disprove common misconceptions by showing that CFG interacts differently with DDPM (Ho et al., 2020) and DDIM (Song et al., 2021), and that neither sampler with CFG generates the gamma-powered distribution $p(x|c)^\gamma p(x)^{1-\gamma}$. Then, we clarify the behavior of CFG by showing that it is a kind of predictor-corrector method (Song et al., 2020) that alternates between denoising and sharpening, which we call predictor-corrector guidance (PCG). We prove that in the SDE limit, CFG is actually equivalent to combining a DDIM predictor for the conditional distribution together with a Langevin dynamics corrector for a gamma-powered distribution (with a carefully chosen gamma). Our work thus provides a lens to theoretically understand CFG by embedding it in a broader design space of principled sampling methods.
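The abstract describes a sampling structure: a predictor step that denoises toward the conditional distribution, alternating with a Langevin corrector that sharpens toward a gamma-powered distribution. The sketch below is a minimal, hypothetical illustration of that structure, not the authors' implementation; `cond_score`, `uncond_score`, and `ddim_step` are assumed placeholders for a trained diffusion model's conditional/unconditional score functions and a DDIM update, and the step sizes are arbitrary.

```python
# Minimal sketch of predictor-corrector guidance (PCG) as summarized in the abstract.
# cond_score, uncond_score, and ddim_step are hypothetical placeholders standing in
# for a trained diffusion model; this is illustrative, not the authors' code.
import torch

def gamma_powered_score(x, t, gamma, cond_score, uncond_score):
    # Score of p_t(x|c)^gamma * p_t(x)^(1-gamma): the usual CFG-style
    # gamma-weighted combination of conditional and unconditional scores.
    return gamma * cond_score(x, t) + (1.0 - gamma) * uncond_score(x, t)

def pcg_sample(x, timesteps, gamma, cond_score, uncond_score, ddim_step,
               n_corrector=1, step_size=1e-3):
    # Alternate a denoising predictor step with sharpening corrector steps.
    for t in timesteps:
        # Predictor: one DDIM step for the *conditional* distribution.
        x = ddim_step(x, t, cond_score)
        # Corrector: Langevin dynamics targeting the gamma-powered
        # distribution at the current noise level.
        for _ in range(n_corrector):
            z = torch.randn_like(x)
            s = gamma_powered_score(x, t, gamma, cond_score, uncond_score)
            x = x + step_size * s + (2.0 * step_size) ** 0.5 * z
    return x
```

Per the abstract, standard CFG corresponds in the SDE limit to a scheme of this shape with a carefully chosen gamma for the corrector, rather than to exact sampling from the gamma-powered distribution.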
Saved in:
Main Authors: | Bradley, Arwen; Nakkiran, Preetum |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
Published: | 2024-08-16 |
Online Access: | https://arxiv.org/abs/2408.09000 |
creator | Bradley, Arwen; Nakkiran, Preetum
description | We investigate the theoretical foundations of classifier-free guidance (CFG).
CFG is the dominant method of conditional sampling for text-to-image diffusion
models, yet unlike other aspects of diffusion, it remains on shaky theoretical
footing. In this paper, we disprove common misconceptions, by showing that CFG
interacts differently with DDPM (Ho et al., 2020) and DDIM (Song et al., 2021),
and neither sampler with CFG generates the gamma-powered distribution
$p(x|c)^\gamma p(x)^{1-\gamma}$. Then, we clarify the behavior of CFG by
showing that it is a kind of predictor-corrector method (Song et al., 2020)
that alternates between denoising and sharpening, which we call
predictor-corrector guidance (PCG). We prove that in the SDE limit, CFG is
actually equivalent to combining a DDIM predictor for the conditional
distribution together with a Langevin dynamics corrector for a gamma-powered
distribution (with a carefully chosen gamma). Our work thus provides a lens to
theoretically understand CFG by embedding it in a broader design space of
principled sampling methods. |
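For context on the gamma-powered distribution named in the description, the standard CFG score combination (the common definition from the classifier-free guidance literature, stated here for reference rather than taken from this record) can be written as:

```latex
% Gamma-powered target and the standard CFG score combination (common definition,
% included for context; not a claim about the samplers' actual output distribution).
\[
  \tilde p_\gamma(x \mid c) \;\propto\; p(x \mid c)^{\gamma}\, p(x)^{1-\gamma},
  \qquad
  \tilde s_\gamma(x, c) \;=\; \gamma\, \nabla_x \log p(x \mid c)
  \;+\; (1-\gamma)\, \nabla_x \log p(x).
\]
```

The abstract's point is that plugging $\tilde s_\gamma$ into DDPM or DDIM does not, in general, produce samples from $\tilde p_\gamma$; instead, CFG behaves like the predictor-corrector scheme described above.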
doi | 10.48550/arxiv.2408.09000
format | Article |
identifier | DOI: 10.48550/arxiv.2408.09000 |
language | eng |
recordid | cdi_arxiv_primary_2408_09000 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
title | Classifier-Free Guidance is a Predictor-Corrector |
url | https://arxiv.org/abs/2408.09000