Easing Message-Passing Parallel Programming Through a Data Balancing Service

The message passing model is now widely used for parallel computing, but it is still difficult to use with some applications. Explicit data distribution, or the explicit dynamic creation of parallel tasks, can require a complex algorithm. In this paper, in order to avoid explicit data distribution, we propose a programming approach based on a data load balancing service for MPI-C. Using a parallel version of the merge sort algorithm, we show how our service avoids explicit data distribution completely, easing parallel programming. Performance results are presented that compare our approach to a version of merge sort with explicit data distribution.
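
The abstract contrasts the proposed data balancing service with the conventional MPI-C style in which the programmer distributes data explicitly. The paper's service interface is not reproduced in this record; as a point of reference only, the following is a minimal sketch (not the authors' code) of a merge sort written in that explicit-distribution style, i.e., the hand-coded scatter and tree merge the service is meant to make unnecessary. The array size N and its even divisibility by the process count are illustrative assumptions.

/*
 * Illustrative sketch only (not code from the paper): parallel merge sort
 * with explicit data distribution in MPI-C. The root scatters the array,
 * each process sorts its chunk, and sorted runs are merged up a binary
 * tree with hand-coded sends and receives.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Merge sorted runs a[0..na) and b[0..nb) into out[0..na+nb). */
static void merge(const int *a, int na, const int *b, int nb, int *out) {
    int i = 0, j = 0, k = 0;
    while (i < na && j < nb) out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < na) out[k++] = a[i++];
    while (j < nb) out[k++] = b[j++];
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1 << 20;            /* total elements (assumed divisible by size) */
    int chunk = N / size;
    int *data = NULL;
    int *local = malloc(chunk * sizeof(int));

    if (rank == 0) {                  /* the root owns the whole array ...           */
        data = malloc(N * sizeof(int));
        for (int i = 0; i < N; i++) data[i] = rand();
    }
    /* ... and the programmer must distribute it explicitly.                          */
    MPI_Scatter(data, chunk, MPI_INT, local, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    qsort(local, chunk, sizeof(int), cmp_int);   /* sort the local chunk              */

    /* Explicit tree merge: at each step, the higher-ranked partner sends its run up. */
    int owned = chunk;
    for (int step = 1; step < size; step *= 2) {
        if (rank % (2 * step) != 0) {
            MPI_Send(local, owned, MPI_INT, rank - step, 0, MPI_COMM_WORLD);
            break;
        }
        if (rank + step < size) {
            MPI_Status st;
            int incoming;
            MPI_Probe(rank + step, 0, MPI_COMM_WORLD, &st);
            MPI_Get_count(&st, MPI_INT, &incoming);
            int *buf = malloc(incoming * sizeof(int));
            int *merged = malloc((owned + incoming) * sizeof(int));
            MPI_Recv(buf, incoming, MPI_INT, rank + step, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            merge(local, owned, buf, incoming, merged);
            free(buf); free(local);
            local = merged;
            owned += incoming;
        }
    }

    if (rank == 0) printf("sorted %d elements on %d processes\n", owned, size);

    free(local);
    free(data);
    MPI_Finalize();
    return 0;
}

With the data balancing service the paper proposes, the scatter and the hand-coded tree merge above would instead be handled by the service, which keeps each process's data list balanced at run time; the exact interface is defined in the paper itself.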

Bibliographic Details
Main Authors: Román-Alonso, Graciela; Castro-García, Miguel A.; Buenabad-Chávez, Jorge
Editors: Dongarra, Jack; Kranzlmüller, Dieter; Kacsuk, Péter
Format: Conference Proceeding
Language: English
Published in: Lecture notes in computer science, 2004, pp. 295-302
Publisher: Springer Berlin Heidelberg
DOI: 10.1007/978-3-540-30218-6_42
ISSN: 0302-9743 (EISSN: 1611-3349)
ISBN: 3540231633 / 9783540231639 (EISBN: 3540302182 / 9783540302186)
Source: Springer Books
Subjects: Applied sciences; Exact sciences and technology; Computer science, control theory, systems; Computer systems and distributed systems. User interface; Data List; List Element; Parallel Programming; Reduce Execution Time; Software; Workload State
Online Access: Full text