A Metric for the Temporal Characterization of Parallel Programs


Published in: Journal of parallel and distributed computing, 1997-11, Vol. 46 (2), p. 113-124
Main authors: Rodriguez, Bernardo; Jordan, Harry; Alaghband, Gita
Format: Article
Language: English
Online access: Full text
Description: We consider the time-dependent demands for data movement that a parallel program makes on the architecture that executes it. The result is an architecture-independent metric that represents the temporal behavior of data-movement requirements. Programs are described as series of computations and data movements, and while message passing is not ruled out, we focus on explicit parallel programs using a fixed number of processes in a distributed shared-memory environment. Operations are assumed to be explicitly allocated to processors when the metric is applied, which might correspond to intermediate code in a parallelizing compiler. The metric is called the interprocess read (IR) temporal metric.

A key to developing an architecture-independent temporal metric is modeling program execution time in an architecture-independent way. This is possible because well-synchronized parallel programs make coordinated progress above a certain level of granularity. Our execution time characterization takes into account barrier synchronization and critical sections. We illustrate the metric using instruction count on simple code fragments and then from multiprocessor program traces (Splash benchmarks). Results of running the benchmarks on simulated network architectures show that the IR metric for the time scale of network response predicts performance better than whole program measures.
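The abstract describes counting interprocess reads (reads of data last written by a different process) over an architecture-independent time axis measured in instruction counts. The following is a speculative sketch of that idea, not the paper's actual definition: the trace format, the windowing scheme, and the `ir_profile` function are all illustrative assumptions.

```python
from collections import defaultdict

def ir_profile(trace, window):
    """Bucket interprocess reads into time windows of `window` instructions.

    `trace` is a list of (time, process, op, addr) tuples, where `time` is an
    architecture-independent instruction count and `op` is 'read' or 'write'.
    A read counts as an interprocess read when the location was last written
    by a different process. (Hypothetical trace format for illustration.)
    """
    last_writer = {}            # addr -> process that wrote it last
    buckets = defaultdict(int)  # window index -> interprocess-read count
    for time, proc, op, addr in sorted(trace):
        if op == 'write':
            last_writer[addr] = proc
        elif op == 'read':
            if addr in last_writer and last_writer[addr] != proc:
                buckets[time // window] += 1
    return dict(buckets)

# Toy trace: P0 writes x; P1 reads x (interprocess); P0 re-reads x (local);
# P1 writes y; P0 reads y (interprocess, in a later window).
trace = [
    (1, 0, 'write', 'x'),
    (3, 1, 'read', 'x'),
    (4, 0, 'read', 'x'),
    (12, 1, 'write', 'y'),
    (15, 0, 'read', 'y'),
]
print(ir_profile(trace, window=10))  # {0: 1, 1: 1}
```

Varying `window` corresponds to the paper's point about time scale: a window on the order of the network response time yields a profile that, per the abstract, predicts performance better than a single whole-program count.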
DOI: 10.1006/jpdc.1997.1379
ISSN: 0743-7315
EISSN: 1096-0848
Source: Elsevier ScienceDirect Journals
Subjects:
Applied sciences
Computer science; control theory; systems
Computer systems and distributed systems. User interface
Computer systems performance. Reliability
Exact sciences and technology
Software
Software engineering