Score Mismatching for Generative Modeling

We propose a new score-based model with one-step sampling. Previously, score-based models were burdened with heavy computation due to iterative sampling. To replace the iterative process, we train a standalone generator to compress all the time steps, using the gradient backpropagated from the score network. To produce meaningful gradients for the generator, the score network is trained to simultaneously match the real data distribution and mismatch the fake data distribution. This model has the following advantages: 1) For sampling, it generates a fake image with only one forward step. 2) For training, it needs only 10 diffusion steps. 3) Compared with the consistency model, it is free of the ill-posed problem caused by the consistency loss. On the popular CIFAR-10 dataset, our model outperforms the Consistency Model and Denoising Score Matching, which demonstrates the potential of the framework. We further provide more examples on the MNIST and LSUN datasets. The code is available on GitHub.
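To make the two-player training scheme concrete, here is a minimal, hypothetical sketch using 1-D data and linear stand-ins for both networks. The names (`score_w`, `gen_w`, `mm_weight`) and the exact loss weighting are illustrative assumptions, not the authors' code: the score model regresses toward the true score of the real data (matching), is pushed to amplify its output at fake samples (mismatching), and the generator is updated with the gradient backpropagated through the frozen score model.

```python
# Toy sketch (assumed, not the paper's implementation): real data ~ N(0, 1),
# whose true score is s*(x) = -x, so a linear score model s(x) = score_w * x
# should learn score_w ≈ -1.
import numpy as np

rng = np.random.default_rng(0)

score_w = 0.0          # linear "score network": s(x) = score_w * x
gen_w = 1.0            # linear one-step "generator": x_fake = gen_w * z
lr, mm_weight = 0.02, 0.05

for step in range(300):
    x_real = rng.normal(size=256)
    z = rng.normal(size=256)
    x_fake = gen_w * z                     # one forward step, no iteration

    # Score network: MATCH the real score (regress s(x_real) toward -x_real)
    # and MISMATCH the fake distribution (amplify the score at fake samples
    # so it carries a useful gradient for the generator).
    grad_match = np.mean(2.0 * (score_w * x_real + x_real) * x_real)
    grad_mismatch = -2.0 * mm_weight * score_w * np.mean(x_fake ** 2)
    score_w -= lr * (grad_match + grad_mismatch)

    # Generator: gradient backpropagated THROUGH the (frozen) score network.
    # Driving the score at fake samples toward zero moves them to the data
    # mode; for this linear toy that shrinks gen_w toward the data mean (0).
    grad_gen = np.mean(2.0 * (score_w * x_fake) * score_w * z)
    gen_w -= lr * grad_gen
```

In this degenerate linear setting the generator collapses onto the data mode, which is precisely why the paper's mismatch term and nonlinear networks matter; the sketch only shows how the gradient flows from the score network into the one-step generator.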

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Ye, Senmao; Liu, Fei
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
creator Ye, Senmao; Liu, Fei
description We propose a new score-based model with one-step sampling. Previously, score-based models were burdened with heavy computation due to iterative sampling. To replace the iterative process, we train a standalone generator to compress all the time steps, using the gradient backpropagated from the score network. To produce meaningful gradients for the generator, the score network is trained to simultaneously match the real data distribution and mismatch the fake data distribution. This model has the following advantages: 1) For sampling, it generates a fake image with only one forward step. 2) For training, it needs only 10 diffusion steps. 3) Compared with the consistency model, it is free of the ill-posed problem caused by the consistency loss. On the popular CIFAR-10 dataset, our model outperforms the Consistency Model and Denoising Score Matching, which demonstrates the potential of the framework. We further provide more examples on the MNIST and LSUN datasets. The code is available on GitHub.
doi_str_mv 10.48550/arxiv.2309.11043
format Article
identifier DOI: 10.48550/arxiv.2309.11043
language eng
recordid cdi_arxiv_primary_2309_11043
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Score Mismatching for Generative Modeling