Scalable and Practical Natural Gradient for Large-Scale Deep Learning

Large-scale distributed training of deep neural networks yields models with worse generalization performance because of the increase in the effective mini-batch size. Previous approaches attempt to address this problem by varying the learning rate and batch size over epochs and layers, or by ad hoc modifications of batch normalization. We propose scalable and practical natural gradient descent (SP-NGD), a principled approach for training models that attains generalization performance similar to models trained with first-order optimization methods, but with accelerated convergence. Furthermore, SP-NGD scales to large mini-batch sizes with negligible computational overhead compared to first-order methods. We evaluated SP-NGD on a benchmark task where highly optimized first-order methods are available as references: training a ResNet-50 model for image classification on ImageNet. We demonstrate convergence to a top-1 validation accuracy of 75.4 percent in 5.5 minutes using a mini-batch size of 32,768 with 1,024 GPUs, as well as an accuracy of 74.9 percent with an extremely large mini-batch size of 131,072 in 873 steps of SP-NGD.
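
At its core, natural gradient descent preconditions the mini-batch gradient with the inverse Fisher information matrix, updating parameters as θ ← θ − η (F + λI)⁻¹ ∇L(θ). The sketch below illustrates that update on a toy logistic-regression problem using an empirical Fisher built from per-example gradients; it is a minimal illustration only, not the paper's SP-NGD implementation, which relies on much cheaper Fisher approximations and distributed execution to remain practical at ResNet-50 scale. All variable names and the synthetic data are hypothetical.

```python
# Minimal sketch of a natural gradient descent (NGD) update on a toy
# logistic-regression problem.  The Fisher matrix is approximated by the
# empirical Fisher (average outer product of per-example gradients), which is
# only feasible for very small models; SP-NGD uses far more scalable
# approximations that are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic binary-classification data.
n, d = 512, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

w = np.zeros(d)
lr, damping = 1.0, 1e-3  # step size and Tikhonov damping added to the Fisher

for step in range(100):
    p = sigmoid(X @ w)                        # predicted probabilities
    per_example_grads = (p - y)[:, None] * X  # d(loss_i)/dw for each example
    grad = per_example_grads.mean(axis=0)     # mini-batch gradient

    # Empirical Fisher: average outer product of per-example gradients.
    fisher = per_example_grads.T @ per_example_grads / n

    # Natural gradient step: solve (F + damping*I) v = grad rather than
    # forming an explicit inverse.
    nat_grad = np.linalg.solve(fisher + damping * np.eye(d), grad)
    w -= lr * nat_grad

p = sigmoid(X @ w)
loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
print(f"cross-entropy after NGD: {loss:.4f}")
```

The damping term λI keeps the preconditioner well conditioned when the Fisher estimate is rank deficient, a standard practical safeguard for second-order-style updates.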

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence, 2022-01, Vol. 44 (1), p. 404-415
Main authors: Osawa, Kazuki; Tsuji, Yohei; Ueno, Yuichiro; Naruse, Akira; Foo, Chuan-Sheng; Yokota, Rio
Format: Article
Language: English
Online access: Full text
DOI: 10.1109/TPAMI.2020.3004354
ISSN: 0162-8828
EISSN: 1939-3539; 2160-9292
PMID: 32750792
Publisher: IEEE (United States)
Source: MEDLINE; IEEE Electronic Library (IEL)
Subjects: Accuracy; Algorithms; Artificial neural networks; Benchmarking; Computational modeling; Convergence; Data models; deep convolutional neural networks; Deep Learning; distributed deep learning; Image classification; Machine learning; Natural gradient descent; Neural networks; Neural Networks, Computer; Optimization; Servers; Stochastic processes; Training