Optimizing OpenMP programs on software distributed shared memory systems
This paper describes compiler techniques that can translate standard OpenMP applications into code for distributed computer systems. OpenMP has emerged as an important model and language extension for shared-memory parallel programming. However, despite OpenMP's success on these platforms, it is not currently being used on distributed systems.
Published in: | International journal of parallel programming 2003-06, Vol.31 (3), p.225-249 |
---|---|
Main authors: | Min, Seung-jai; Basumallik, Ayon; Eigenmann, Rudolf |
Format: | Article |
Language: | eng |
Subjects: | Software; Studies |
Online access: | Full text |
container_end_page | 249 |
---|---|
container_issue | 3 |
container_start_page | 225 |
container_title | International journal of parallel programming |
container_volume | 31 |
creator | Min, Seung-jai; Basumallik, Ayon; Eigenmann, Rudolf |
description | This paper describes compiler techniques that can translate standard OpenMP applications into code for distributed computer systems. OpenMP has emerged as an important model and language extension for shared-memory parallel programming. However, despite OpenMP's success on these platforms, it is not currently being used on distributed systems. The long-term goal of our project is to quantify the degree to which such a use is possible and develop supporting compiler techniques. Our present compiler techniques translate OpenMP programs into a form suitable for execution on a Software DSM system. We have implemented a compiler that performs this basic translation, and we have studied a number of hand optimizations that improve the baseline performance. Our approach complements related efforts that have proposed language extensions for efficient execution of OpenMP programs on distributed systems. Our results show that, while kernel benchmarks can show high efficiency of OpenMP programs on distributed systems, full applications need careful consideration of shared data access patterns. A naive translation (similar to OpenMP compilers for SMPs) leads to acceptable performance in very few applications only. However, additional optimizations, including access privatization, selective touch, and dynamic scheduling, result in a 31% average improvement on our benchmarks. [PUBLICATION ABSTRACT] (See the illustrative OpenMP sketch following this record.) |
doi_str_mv | 10.1023/A:1023090719310 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0885-7458 |
ispartof | International journal of parallel programming, 2003-06, Vol.31 (3), p.225-249 |
issn | 0885-7458 1573-7640 |
language | eng |
recordid | cdi_proquest_miscellaneous_27928968 |
source | Springer Nature - Complete Springer Journals |
subjects | Software; Studies |
title | Optimizing OpenMP programs on software distributed shared memory systems |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T12%3A29%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Optimizing%20OpenMP%20programs%20on%20software%20distributed%20shared%20memory%20systems&rft.jtitle=International%20journal%20of%20parallel%20programming&rft.au=Min,%20Seung-jai&rft.date=2003-06-01&rft.volume=31&rft.issue=3&rft.spage=225&rft.epage=249&rft.pages=225-249&rft.issn=0885-7458&rft.eissn=1573-7640&rft.coden=IJPPE5&rft_id=info:doi/10.1023/A:1023090719310&rft_dat=%3Cproquest%3E429166851%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=204295205&rft_id=info:pmid/&rfr_iscdi=true |
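
The optimizations named in the abstract can be illustrated with ordinary OpenMP. The fragment below is a minimal sketch, not code from the paper or from its Software DSM translation: the array, loop body, and chunk size are assumptions made for the example; the reduction clause stands in for access privatization of the accumulator, and schedule(dynamic, 1024) stands in for dynamic loop scheduling. Selective touch, which concerns where shared pages are first referenced on the software DSM, is not shown.

```c
/*
 * Illustrative sketch only (not taken from the paper): a parallel array sum
 * in standard OpenMP.  The reduction clause privatizes the accumulator, so
 * each thread updates a local copy instead of repeatedly writing shared data,
 * and schedule(dynamic, 1024) hands out iteration chunks on demand, one
 * common form of dynamic scheduling.  Array and chunk sizes are arbitrary.
 */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N];

int main(void)
{
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum) schedule(dynamic, 1024)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * (double)i;   /* each iteration writes a disjoint element  */
        sum += a[i];              /* accumulated into a per-thread private copy */
    }

    printf("sum = %f (max threads = %d)\n", sum, omp_get_max_threads());
    return 0;
}
```

Compile with an OpenMP-capable compiler, e.g. `gcc -fopenmp sum.c`. On a software DSM system, the translation described in the paper would additionally map accesses to the shared array onto the DSM runtime rather than relying on hardware shared memory.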