A middleware approach for pipelining communications in clusters

The Pipelining Communications Middleware (PCM) approach provides a flexible, simple, high-performance mechanism to connect parallel programs running on high performance computers or clusters. This approach enables parallel programs to communicate and coordinate with each other to address larger problems than a single program can solve.

Full Description

Saved in:
Bibliographic Details
Published in: Cluster computing 2007-12, Vol.10 (4), p.409-424
Main authors: Fide, Sevin; Jenks, Stephen
Format: Article
Language: English
Subjects:
Online access: Full text
Description: The Pipelining Communications Middleware (PCM) approach provides a flexible, simple, high-performance mechanism to connect parallel programs running on high performance computers or clusters. This approach enables parallel programs to communicate and coordinate with each other to address larger problems than a single program can solve. The motivation behind the PCM approach grew out of using files as an intermediate transfer stage between processing by different programs. Our approach supersedes this practice by using streaming data set transfers as an “online” communication channel between simultaneously active parallel programs. Thus, the PCM approach addresses the issue of sending data from a parallel program to another parallel program without exposing details such as number of nodes allocated to the program, specific node identifiers, etc. This paper outlines and analyzes our proposed computation and communication model to provide efficient and convenient communications between parallel programs running on high performance computing systems or clusters. We also discuss the PCM challenges as well as current PCM implementations. Our approach achieves scalability, transparency, coordination, synchronization and flow control, and efficient programming. We experimented with data parallel applications to evaluate the performance of the PCM approach. Our experiment results show that the PCM approach achieves nearly ideal throughput that scales linearly with the underlying network medium speed. PCM performs well with small and large data transfers. Furthermore, our experiments show that network infrastructure plays the most significant role in the PCM performance.
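The core idea in the abstract — replacing file-based intermediate transfer with a streaming channel between two concurrently running programs — can be sketched minimally. This is an illustrative sketch using plain sockets, not the authors' PCM API; the names `stream_send`, `stream_recv`, and the chunk size are assumptions introduced here.

```python
# Sketch of a streaming "online" channel between a producer and a consumer
# running at the same time, in the spirit of PCM's replacement for
# file-based handoff. All names here are illustrative, not PCM's API.
import socket
import threading

CHUNK = 4096  # transfer granularity; a real middleware would tune this


def stream_send(conn: socket.socket, data: bytes) -> None:
    """Send `data` as a length-prefixed stream of fixed-size chunks."""
    conn.sendall(len(data).to_bytes(8, "big"))
    for i in range(0, len(data), CHUNK):
        conn.sendall(data[i:i + CHUNK])


def _recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly `n` bytes or raise if the peer closes early."""
    buf = bytearray()
    while len(buf) < n:
        part = conn.recv(n - len(buf))
        if not part:
            raise ConnectionError("peer closed early")
        buf += part
    return bytes(buf)


def stream_recv(conn: socket.socket) -> bytes:
    """Receive a length-prefixed stream; each chunk is available as it
    arrives, so a consumer could start processing before the transfer
    completes (the pipelining overlap the paper targets)."""
    size = int.from_bytes(_recv_exact(conn, 8), "big")
    received = bytearray()
    while len(received) < size:
        received += _recv_exact(conn, min(CHUNK, size - len(received)))
    return bytes(received)


def demo() -> bytes:
    """Run producer and consumer concurrently on localhost."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # any free port
    srv.listen(1)
    port = srv.getsockname()[1]
    payload = b"x" * 100_000

    def producer() -> None:
        c = socket.socket()
        c.connect(("127.0.0.1", port))
        stream_send(c, payload)
        c.close()

    t = threading.Thread(target=producer)
    t.start()
    conn, _ = srv.accept()
    out = stream_recv(conn)
    t.join()
    conn.close()
    srv.close()
    return out
```

Unlike a file handoff, neither side here names the other's node layout — the channel endpoint is the only shared detail, loosely mirroring the transparency the abstract claims for PCM.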
DOI: 10.1007/s10586-007-0026-7
Publisher: Springer Nature B.V, Dordrecht
ISSN: 1386-7857
EISSN: 1573-7543
Source: ProQuest Central UK/Ireland; SpringerLink Journals - AutoHoldings; ProQuest Central
Subjects: Clusters; Flow control; Middleware; Parallel programming; Performance evaluation; Pipelining (computers); Synchronism