Adaptive Non-Uniform Timestep Sampling for Diffusion Model Training
As highly expressive generative models, diffusion models have demonstrated exceptional success across various domains, including image generation, natural language processing, and combinatorial optimization. However, as data distributions grow more complex, training these models to convergence becomes increasingly computationally intensive. While diffusion models are typically trained using uniform timestep sampling, our research shows that the variance in stochastic gradients varies significantly across timesteps, with high-variance timesteps becoming bottlenecks that hinder faster convergence. To address this issue, we introduce a non-uniform timestep sampling method that prioritizes these more critical timesteps. Our method tracks the impact of gradient updates on the objective for each timestep, adaptively selecting those most likely to minimize the objective effectively. Experimental results demonstrate that this approach not only accelerates the training process, but also leads to improved performance at convergence. Furthermore, our method shows robust performance across various datasets, scheduling strategies, and diffusion architectures, outperforming previously proposed timestep sampling and weighting heuristics that lack this degree of robustness.
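The abstract describes the method only at a high level. As a rough illustration, here is a minimal sketch of what an adaptive non-uniform timestep sampler along these lines could look like; the class name, the EMA-based scoring of per-timestep objective improvement, and the softmax selection rule are all assumptions inferred from the abstract, not the authors' published algorithm.

```python
# Minimal sketch of adaptive non-uniform timestep sampling, inferred from the
# abstract alone. The scoring rule (EMA of per-timestep loss reduction) and the
# softmax selection are illustrative assumptions, not the paper's algorithm.
import numpy as np

class AdaptiveTimestepSampler:
    def __init__(self, num_timesteps, ema_decay=0.9, temperature=1.0, seed=None):
        self.num_timesteps = num_timesteps
        self.ema_decay = ema_decay
        self.temperature = temperature
        self.rng = np.random.default_rng(seed)
        # Running estimate of how much training on each timestep helps the objective.
        self.scores = np.zeros(num_timesteps)

    def probabilities(self):
        # Softmax over scores: "critical" timesteps are drawn more often, but
        # every timestep keeps a nonzero probability of being sampled.
        logits = self.scores / self.temperature
        logits -= logits.max()  # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    def sample(self, batch_size):
        return self.rng.choice(self.num_timesteps, size=batch_size,
                               p=self.probabilities())

    def update(self, timesteps, loss_deltas):
        # loss_deltas[i] is a proxy for the objective improvement attributed to
        # the gradient update taken at timesteps[i].
        for t, delta in zip(timesteps, loss_deltas):
            self.scores[t] = (self.ema_decay * self.scores[t]
                              + (1.0 - self.ema_decay) * delta)
```

In a training loop one would draw timesteps with `sampler.sample(batch_size)`, train the diffusion model on those timesteps, and feed an estimate of each update's effect on the objective back through `sampler.update(...)`.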
Saved in:
Published in: | arXiv.org 2024-11 |
---|---|
Main Authors: | Kim, Myunsoo; Donghyeon Ki; Seong-Woong Shim; Byung-Jun, Lee |
Format: | Article |
Language: | eng |
Subjects: | Adaptive sampling; Combinatorial analysis; Convergence; Diffusion rate; Image processing; Natural language processing; Sampling methods; Speech recognition |
Online Access: | Full text |
container_title | arXiv.org |
---|---|
creator | Kim, Myunsoo; Donghyeon Ki; Seong-Woong Shim; Byung-Jun, Lee |
description | As highly expressive generative models, diffusion models have demonstrated exceptional success across various domains, including image generation, natural language processing, and combinatorial optimization. However, as data distributions grow more complex, training these models to convergence becomes increasingly computationally intensive. While diffusion models are typically trained using uniform timestep sampling, our research shows that the variance in stochastic gradients varies significantly across timesteps, with high-variance timesteps becoming bottlenecks that hinder faster convergence. To address this issue, we introduce a non-uniform timestep sampling method that prioritizes these more critical timesteps. Our method tracks the impact of gradient updates on the objective for each timestep, adaptively selecting those most likely to minimize the objective effectively. Experimental results demonstrate that this approach not only accelerates the training process, but also leads to improved performance at convergence. Furthermore, our method shows robust performance across various datasets, scheduling strategies, and diffusion architectures, outperforming previously proposed timestep sampling and weighting heuristics that lack this degree of robustness. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3129864686 |
source | Free E-Journals |
subjects | Adaptive sampling; Combinatorial analysis; Convergence; Diffusion rate; Image processing; Natural language processing; Sampling methods; Speech recognition |
title | Adaptive Non-Uniform Timestep Sampling for Diffusion Model Training |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T15%3A12%3A44IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Adaptive%20Non-Uniform%20Timestep%20Sampling%20for%20Diffusion%20Model%20Training&rft.jtitle=arXiv.org&rft.au=Kim,%20Myunsoo&rft.date=2024-11-15&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3129864686%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3129864686&rft_id=info:pmid/&rfr_iscdi=true |