MPI backend for an automatic parallelizing compiler

Many naive parallel processing schemes were not as successful as many researchers expected, because of the heavy cost of communication and synchronization introduced by parallelization. In this paper, we identify the reasons for this poor performance and the compiler requirements for performance improvement. We found that parallelizing decisions should be driven by overhead information. We added this idea to the automatic parallelizing compiler SUIF: we replaced SUIF's original backend with our MPI-based backend and gave it the capability to validate parallelization decisions against overhead parameters. This backend converts shared memory-based parallel programs into distributed memory-based parallel programs with MPI function calls, while avoiding excessive parallelization, which causes performance degradation.
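The paper's central idea, validating each parallelization decision against overhead information, can be sketched as a simple cost model: distribute a loop only when the per-process work still outweighs the communication and synchronization cost. The function and parameter names below are illustrative assumptions, not taken from the paper:

```python
def should_parallelize(work_time, n_procs, comm_overhead, sync_overhead):
    """Decide whether distributing a loop across n_procs processes pays off.

    work_time:     sequential execution time of the loop (seconds)
    comm_overhead: cost of scattering inputs and gathering results (e.g. MPI calls)
    sync_overhead: cost of synchronization per execution

    All parameters are illustrative; the actual backend would derive its
    overhead parameters per loop and per target machine.
    """
    parallel_time = work_time / n_procs + comm_overhead + sync_overhead
    return parallel_time < work_time

# A large loop amortizes the communication overhead; a small one does not.
print(should_parallelize(10.0, 4, 1.0, 0.5))  # True
print(should_parallelize(0.1, 4, 1.0, 0.5))   # False
```

In this sketch, refusing to parallelize the small loop is exactly the "no excessive parallelization" behavior the abstract describes: the backend falls back to sequential code when the estimated MPI overhead would dominate.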

Bibliographic Details

Authors: Kwon, Daesuk; Han, Sangyong; Kim, Heunghwan
Format: Conference Proceeding
Language: English
Pages: 152-157
DOI: 10.1109/ISPAN.1999.778932
ISSN: 1087-4089
EISSN: 2375-527X
ISBN: 0769502318; 9780769502311
Part of: Proceedings / International Symposium on Parallel Architectures, Algorithms, and Networks (ISPAN), 1999, p. 152-157
Source: IEEE Electronic Library (IEL) Conference Proceedings
Subjects: Clocks; Computer science; Costs; Frequency; High performance computing; Linux; Microprocessors; Multiprocessing systems; Parallel processing; Parallel programming