Scalable balanced training of conditional generative adversarial neural networks on image data
We propose a distributed approach to train deep convolutional conditional generative adversarial network (DC-CGAN) models. Our method reduces the imbalance between generator and discriminator by partitioning the training data according to data labels, and enhances scalability by performing a parallel training in which multiple generators are trained concurrently, each focusing on a single data label.
Saved in:
Published in: | The Journal of supercomputing 2021-11, Vol.77 (11), p.13358-13384 |
---|---|
Main authors: | Lupo Pasini, Massimiliano ; Gabbi, Vittorio ; Yin, Junqi ; Perotto, Simona ; Laanait, Nouamane |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 13384 |
---|---|
container_issue | 11 |
container_start_page | 13358 |
container_title | The Journal of supercomputing |
container_volume | 77 |
creator | Lupo Pasini, Massimiliano ; Gabbi, Vittorio ; Yin, Junqi ; Perotto, Simona ; Laanait, Nouamane |
description | We propose a distributed approach to train deep convolutional conditional generative adversarial network (DC-CGAN) models. Our method reduces the imbalance between generator and discriminator by partitioning the training data according to data labels, and enhances scalability by performing a parallel training in which multiple generators are trained concurrently, each focusing on a single data label. Performance is assessed in terms of inception score, Fréchet inception distance, and image quality on the MNIST, CIFAR10, CIFAR100, and ImageNet1k datasets, showing a significant improvement over state-of-the-art techniques for training DC-CGANs. Weak scaling is attained on all four datasets using up to 1000 processes and 2000 NVIDIA V100 GPUs on the OLCF supercomputer Summit. |
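The core idea in the description above — splitting the training set by class label so that each label's data feeds its own concurrently trained generator — can be sketched as follows. This is a minimal illustrative sketch based only on the abstract, not the authors' implementation; the `partition_by_label` helper and the toy data are assumptions.

```python
from collections import defaultdict

def partition_by_label(samples, labels):
    """Group training samples by class label. In the scheme described in the
    abstract, each label's partition would be assigned to its own
    generator/discriminator pair and trained in parallel (e.g. one process
    per label). Hypothetical helper; not the authors' code."""
    parts = defaultdict(list)
    for sample, label in zip(samples, labels):
        parts[label].append(sample)
    return dict(parts)

# Toy example: four images with two labels yield two per-label partitions,
# each of which would be consumed by one concurrently trained generator.
parts = partition_by_label(["img_a", "img_b", "img_c", "img_d"], [0, 1, 0, 1])
print(parts)  # {0: ['img_a', 'img_c'], 1: ['img_b', 'img_d']}
```

Partitioning this way keeps each discriminator's task homogeneous (one label's distribution), which is how the paper frames the reduction of the generator/discriminator imbalance.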
doi_str_mv | 10.1007/s11227-021-03808-2 |
format | Article |
publisher | New York: Springer US |
journal abbreviation | J Supercomput |
publication date | 2021-11-01 |
pages | 27 |
corporate contributor | Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF) |
rights | This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021 |
orcid | 0000-0002-4980-6924 ; 0000-0003-2994-7516 ; 0000-0003-3843-5520 |
peer reviewed | yes |
open access | free to read |
eissn | 1573-0484 |
proquest id | 2585945481 |
fulltext | fulltext |
identifier | ISSN: 0920-8542 |
ispartof | The Journal of supercomputing, 2021-11, Vol.77 (11), p.13358-13384 |
issn | 0920-8542 1573-0484 |
language | eng |
recordid | cdi_osti_scitechconnect_1783019 |
source | SpringerLink Journals |
subjects | Compilers ; Computer Science ; Computer vision ; Datasets ; Deep learning ; Generative adversarial neural networks ; Image quality ; Interpreters ; MATHEMATICS AND COMPUTING ; Neural networks ; Processor Architectures ; Programming Languages ; Supercomputing ; Training |
title | Scalable balanced training of conditional generative adversarial neural networks on image data |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T02%3A30%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_osti_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Scalable%20balanced%20training%20of%20conditional%20generative%20adversarial%20neural%20networks%20on%20image%20data&rft.jtitle=The%20Journal%20of%20supercomputing&rft.au=Lupo%20Pasini,%20Massimiliano&rft.aucorp=Oak%20Ridge%20National%20Laboratory%20(ORNL),%20Oak%20Ridge,%20TN%20(United%20States).%20Oak%20Ridge%20Leadership%20Computing%20Facility%20(OLCF)&rft.date=2021-11-01&rft.volume=77&rft.issue=11&rft.spage=13358&rft.epage=13384&rft.pages=13358-13384&rft.issn=0920-8542&rft.eissn=1573-0484&rft_id=info:doi/10.1007/s11227-021-03808-2&rft_dat=%3Cproquest_osti_%3E2585945481%3C/proquest_osti_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2585945481&rft_id=info:pmid/&rfr_iscdi=true |