Numerical reproducibility for the parallel reduction on multi- and many-core architectures

Highlights:
• A parallel algorithm to compute correctly rounded floating-point sums
• Highly optimized implementations for modern CPUs, GPUs, and the Xeon Phi
• As fast as memory bandwidth allows for large sums with moderate dynamic range
• Scales well with the problem size and the resources used on a cluster of compute nodes

Detailed description

Saved in:
Bibliographic details
Published in: Parallel computing, 2015-11, Vol. 49, p. 83-97
Authors: Collange, Caroline; Defour, David; Graillat, Stef; Iakymchuk, Roman
Format: Article
Language: English
Subjects:
Online access: Full text
Description: On modern multi-core, many-core, and heterogeneous architectures, floating-point computations, especially reductions, may become non-deterministic and therefore non-reproducible, mainly due to the non-associativity of floating-point operations. We introduce an approach to compute the correctly rounded sum of large floating-point vectors accurately and efficiently, achieving deterministic results by construction. Our multi-level algorithm consists of two main stages: first, a filtering stage that relies on fast vectorized floating-point expansions; second, an accumulation stage based on superaccumulators in a high-radix carry-save representation. We present implementations on recent Intel desktop and server processors, on Intel Xeon Phi co-processors, and on both AMD and NVIDIA GPUs. We show that numerical reproducibility and bit-perfect accuracy can be achieved at no additional cost for large sums with dynamic ranges of up to 90 orders of magnitude, by leveraging arithmetic units that standard reduction algorithms leave underused.
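The two ideas in the abstract can be illustrated with a minimal sketch (this is not the paper's implementation): Knuth's TwoSum error-free transformation is the kind of building block used in the filtering stage's floating-point expansions, and an exact rational accumulator stands in here for the high-radix carry-save superaccumulator, yielding a correctly rounded sum that is independent of summation order.

```python
# Illustrative sketch only: Fraction plays the role of the paper's
# superaccumulator (exact accumulation, one final rounding).
from fractions import Fraction

def two_sum(a, b):
    """Knuth's error-free transformation: s + e == a + b exactly, with s = fl(a + b)."""
    s = a + b
    a_hat = s - b
    b_hat = s - a_hat
    return s, (a - a_hat) + (b - b_hat)

def reproducible_sum(values):
    """Correctly rounded sum of doubles, deterministic by construction."""
    acc = Fraction(0)
    for x in values:
        acc += Fraction(x)   # every finite double converts to a Fraction exactly
    return float(acc)        # a single, correctly rounded conversion

data = [1e16, 1.0, -1e16, 1.0]
print(sum(data))                     # naive left-to-right sum absorbs one 1.0: 1.0
print(reproducible_sum(data))        # 2.0
print(reproducible_sum(data[::-1]))  # 2.0: same result for any input order

s, e = two_sum(1e16, 1.0)            # s == 1e16, e == 1.0: the rounding error is recovered
```

The TwoSum step shows why a filtering stage can be cheap: the error term `e` captures exactly what the rounded sum `s` lost, so nothing needs to reach the (slower) exact accumulator until the expansion overflows.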
DOI: 10.1016/j.parco.2015.09.001
Publisher: Elsevier B.V.
Rights: © 2015 Elsevier B.V.; distributed under a Creative Commons Attribution 4.0 International License
Open access: full text available via HAL, https://hal-lirmm.ccsd.cnrs.fr/lirmm-01206348
ORCID: 0000-0003-2414-700X; 0000-0001-9923-2394
ISSN: 0167-8191
EISSN: 1872-7336
Source: Access via ScienceDirect (Elsevier)
Subjects:
Accuracy
Computer Arithmetic
Computer Science
Error-free transformations
Hardware Architecture
Long accumulator
Multi- and many-core architectures
Parallel floating-point summation
Reproducibility