From array algebra to energy efficiency on GPUs: Data and hardware shapes with dimension-lifting to optimize memory-processor layouts

We present a new formulation for parallel matrix multiplication (MM) to outperform the standard row-column code design. This algorithm is formulated in the MoA formalism (A Mathematics of Arrays) and combines an array view of hardware (dimension-lifting), which extends indexing to physical memory/processing units, with a contiguous data layout derived from static transformations. This view of a hardware-software model is thus a bridging model in the sense of Valiant's BSP. OpenACC code was derived from the MoA expression's normal form, producing optimal block sizes from the static information of types and shapes. Experiments were run on Nvidia V100 GPUs and reveal energy consumption that is quadratic in N, i.e. linear in the number of matrix entries. More generally, this approach may be an ideal way to formulate, optimize, and map array algorithms to embedded hardware. This work builds upon recently published results of NREL scientists.
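The core idea of dimension-lifting described in the abstract — splitting an axis of size N into (P, N/P) so that the outer axis indexes processing units and the inner axis indexes contiguous memory on each unit — can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's MoA derivation or its OpenACC code; the names `lift`, `blocked_matmul`, and the parameter `p` are hypothetical.

```python
import numpy as np

def lift(a, p):
    """Dimension-lift: split the leading axis of size n into (p, n // p).

    The outer axis of length p models the processing units; the inner
    axis models each unit's contiguous slice of memory. Shapes are
    assumed statically known and divisible, as in the abstract.
    """
    n = a.shape[0]
    assert n % p == 0, "leading axis must be divisible by p"
    return a.reshape(p, n // p, *a.shape[1:])

def blocked_matmul(A, B, p):
    """Row-blocked matrix product over a lifted leading axis.

    Each of the p row blocks could be mapped to one processing unit
    (e.g. one OpenACC gang); here the blocks are computed sequentially.
    """
    A_lifted = lift(A, p)                     # shape (p, N // p, N)
    C_blocks = [blk @ B for blk in A_lifted]  # one local MM per block
    return np.concatenate(C_blocks, axis=0)   # reassemble shape (N, N)

# Small usage example: the blocked result matches the ordinary product.
N, p = 8, 4
A = np.arange(N * N, dtype=float).reshape(N, N)
B = np.eye(N) * 2.0
assert np.allclose(blocked_matmul(A, B, p), A @ B)
```

The reshape in `lift` is purely an indexing change over the same contiguous buffer, which is the point of the paper's layout argument: the block decomposition falls out of shape algebra rather than explicit tiling loops.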

Detailed Description

Saved in:
Bibliographic details
Author: Mullin, Lenore M. R
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Mullin, Lenore M. R
description We present a new formulation for parallel matrix multiplication (MM) to outperform the standard row-column code design. This algorithm is formulated in the MoA formalism (A Mathematics of Arrays) and combines an array view of hardware (dimension-lifting), which extends indexing to physical memory/processing units, with a contiguous data layout derived from static transformations. This view of a hardware-software model is thus a bridging model in the sense of Valiant's BSP. OpenACC code was derived from the MoA expression's normal form, producing optimal block sizes from the static information of types and shapes. Experiments were run on Nvidia V100 GPUs and reveal energy consumption that is quadratic in N, i.e. linear in the number of matrix entries. More generally, this approach may be an ideal way to formulate, optimize, and map array algorithms to embedded hardware. This work builds upon recently published results of NREL scientists.
doi_str_mv 10.48550/arxiv.2306.11148
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2306.11148
language eng
recordid cdi_arxiv_primary_2306_11148
source arXiv.org
subjects Computer Science - Distributed, Parallel, and Cluster Computing
Computer Science - Mathematical Software
title From array algebra to energy efficiency on GPUs: Data and hardware shapes with dimension-lifting to optimize memory-processor layouts