Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better

Diffusion Models (DM) and Consistency Models (CM) are two types of popular generative models with good generation quality on various tasks. When training DM and CM, intermediate weight checkpoints are not fully utilized and only the last converged checkpoint is used. In this work, we find that high-quality model weights often lie in a basin which cannot be reached by SGD but can be obtained by proper checkpoint averaging. Based on these observations, we propose LCSC, a simple but effective and efficient method to enhance the performance of DM and CM by combining checkpoints along the training trajectory with coefficients deduced from evolutionary search. We demonstrate the value of LCSC through two use cases: \(\textbf{(a) Reducing training cost.}\) With LCSC, we only need to train DM/CM with fewer training iterations and/or smaller batch sizes to obtain sample quality comparable to the fully trained model. For example, LCSC achieves considerable training speedups for CM (23\(\times\) on CIFAR-10 and 15\(\times\) on ImageNet-64). \(\textbf{(b) Enhancing pre-trained models.}\) Assuming full training is already done, LCSC can further improve the generation quality or speed of the final converged models. For example, LCSC achieves better performance with 1 function evaluation (NFE) than the base model with 2 NFE on consistency distillation, and decreases the NFE of DM from 15 to 9 while maintaining the generation quality on CIFAR-10. Our code is available at https://github.com/imagination-research/LCSC.
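The combination the abstract describes — a weighted sum of saved checkpoints, with the coefficients chosen by an evolutionary search against a quality metric — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the mutation scheme, and the `fitness` callback (e.g. negative FID on a held-out set) are all assumptions.

```python
import random

def combine_checkpoints(checkpoints, coeffs):
    """Linearly combine checkpoint state dicts: w* = sum_i c_i * w_i."""
    combined = {}
    for name in checkpoints[0]:
        combined[name] = sum(c * ckpt[name] for c, ckpt in zip(coeffs, checkpoints))
    return combined

def evolutionary_search(checkpoints, fitness, pop_size=16, generations=50, sigma=0.1):
    """Toy evolutionary search over combination coefficients.

    `fitness` scores a combined model (higher is better); in LCSC's setting this
    would be a generation-quality metric evaluated on held-out samples.
    """
    n = len(checkpoints)
    # Start from uniform averaging (the classic checkpoint-average baseline).
    population = [[1.0 / n] * n for _ in range(pop_size)]
    for _ in range(generations):
        # Mutate each parent by adding Gaussian noise to its coefficients.
        children = [[c + random.gauss(0.0, sigma) for c in parent] for parent in population]
        # Elitist selection: keep the best pop_size of parents + children.
        scored = sorted(population + children,
                        key=lambda cs: fitness(combine_checkpoints(checkpoints, cs)),
                        reverse=True)
        population = scored[:pop_size]
    return population[0]
```

With real models, each entry of `checkpoints` would be a parameter dictionary (e.g. a PyTorch `state_dict`) saved along the training trajectory, and the search cost is dominated by evaluating `fitness` on each candidate combination.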

Published in: arXiv.org, 2024-04
Authors: Liu, Enshu; Zhu, Junyi; Lin, Zinan; Ning, Xuefei; Blaschko, Matthew B; Yekhanin, Sergey; Yan, Shengen; Dai, Guohao; Yang, Huazhong; Wang, Yu
Format: Article
Language: English
Online access: Full text
EISSN: 2331-8422
Subjects: Consistency; Convergence; Distillation