Avoiding Communication in Primal and Dual Block Coordinate Descent Methods


Bibliographic Details

Published in: SIAM Journal on Scientific Computing, 2019-01, Vol. 41 (1), pp. C1-C27
Main Authors: Devarakonda, Aditya; Fountoulakis, Kimon; Demmel, James; Mahoney, Michael W.
Format: Article
Language: English
Subjects: MATHEMATICS AND COMPUTING
Online Access: Full text
Description

Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular for analyzing large machine learning datasets. However, existing implementations communicate at every iteration, which, on modern data center and supercomputing architectures, often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by reorganizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations can lead to primal and dual block coordinate descent methods that communicate only every $s$ iterations, where $s$ is a tuning parameter, instead of every iteration for the regularized least-squares problem. We show that the communication-avoiding variants reduce the number of synchronizations by a factor of $s$ on distributed-memory parallel machines without altering the convergence rate, and that they attain strong scaling speedups of up to $6.1\times$ over the "standard algorithm" on a Cray XC30 supercomputer.
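To make the $s$-step idea concrete, here is a minimal serial sketch, not the authors' implementation: it uses ridge-regularized least squares with block size 1, and all names (A, b, lam, s) and the sampling scheme are illustrative assumptions. It unrolls $s$ coordinate-descent updates so that every inner product with the shared residual is obtained from two small products computed up front; in a row-partitioned distributed run those two products would each be formed with one collective (e.g., an allreduce) per $s$ iterations, which is where the factor-of-$s$ reduction in synchronizations comes from.

```python
import numpy as np

# Sketch of s-step (communication-avoiding) coordinate descent for
#   min_x 0.5*||A x - b||^2 + 0.5*lam*||x||^2
# Standard coordinate descent touches the shared residual r at every
# iteration; here, s pre-chosen coordinates share one up-front Gram
# computation, followed by s purely local updates.

rng = np.random.default_rng(0)
m, n, s, lam = 200, 50, 8, 1e-2      # illustrative sizes and parameter
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = np.zeros(n)
r = A @ x - b                        # residual, kept in sync with x

for _ in range(10):                  # 10 outer rounds of s inner steps
    J = rng.choice(n, size=s, replace=False)  # s distinct coordinates
    AJ = A[:, J]
    # In a row-partitioned distributed run, G and g would each be
    # formed with a single allreduce: the one sync per s iterations.
    G = AJ.T @ AJ                    # s-by-s Gram matrix
    g = AJ.T @ r                     # inner products with the residual
    delta = np.zeros(s)
    for t in range(s):               # s local updates, no further syncs
        # A_{j_t}^T r_t unrolled via the Gram matrix: the residual after
        # t inner steps is r + AJ[:, :t] @ delta[:t].
        grad = g[t] + G[t, :t] @ delta[:t] + lam * x[J[t]]
        delta[t] = -grad / (G[t, t] + lam)
        x[J[t]] += delta[t]
    r += AJ @ delta                  # bring the residual up to date

print("objective:", 0.5 * np.sum(r**2) + 0.5 * lam * np.sum(x**2))
```

In exact arithmetic the unrolled inner updates coincide with $s$ steps of the standard method, consistent with the abstract's claim that the convergence rate is unchanged; only the synchronization count drops by a factor of $s$.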
DOI: 10.1137/17M1134433
ISSN: 1064-8275
EISSN: 1095-7197
Publisher: SIAM (United States)
Source: SIAM Journals Online