Performance and Power: Systematic Evaluation of AI Workloads on Accelerators with CARAML

The rapid advancement of machine learning (ML) technologies has driven the development of specialized hardware accelerators designed to facilitate more efficient model training. This paper introduces the CARAML benchmark suite, which is employed to assess performance and energy consumption during the training of transformer-based large language models and computer vision models on a range of hardware accelerators, including systems from NVIDIA, AMD, and Graphcore. CARAML provides a compact, automated, extensible, and reproducible framework for assessing the performance and energy of ML workloads across various novel hardware architectures. The design and implementation of CARAML, along with a custom power measurement tool called jpwr, are discussed in detail.
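The abstract describes measuring energy consumption during training. The paper's jpwr tool is not documented in this record, so as a purely hypothetical illustration of the underlying idea (integrating sampled power draw over time to estimate energy), a minimal sketch might look like the following; the `EnergyMeter` class and its `read_power` callback are assumptions, not jpwr's actual API:

```python
import threading
import time


class EnergyMeter:
    """Estimate energy (joules) by integrating power samples (watts) over time.

    `read_power` is any zero-argument callable returning the current draw in
    watts, e.g. a wrapper around a vendor power-query API. This is an
    illustrative sketch, not the interface of the jpwr tool from the paper.
    """

    def __init__(self, read_power, interval_s=0.02):
        self.read_power = read_power
        self.interval_s = interval_s
        self.energy_j = 0.0
        self._stop = threading.Event()
        self._thread = None

    def _sample(self):
        last_t = time.monotonic()
        last_p = self.read_power()
        while not self._stop.wait(self.interval_s):
            now = time.monotonic()
            p = self.read_power()
            # Trapezoidal rule: average of adjacent samples times elapsed time.
            self.energy_j += 0.5 * (p + last_p) * (now - last_t)
            last_t, last_p = now, p

    def __enter__(self):
        self._thread = threading.Thread(target=self._sample, daemon=True)
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()
        return False


# Usage: a constant 100 W stub stands in for a real power sensor.
with EnergyMeter(lambda: 100.0) as meter:
    time.sleep(0.2)  # stand-in for a training step
print(f"estimated energy: ~{meter.energy_j:.1f} J")
```

A context-manager interface keeps the measurement scoped to exactly the training region of interest, which is one plausible way such a tool could be wrapped around benchmark workloads.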

Detailed Description

Saved in:
Bibliographic Details
Main Authors: John, Chelsea Maria, Nassyr, Stepan, Penke, Carolin, Herten, Andreas
Format: Article
Language: English
Subjects:
Online Access: Order full text
creator John, Chelsea Maria; Nassyr, Stepan; Penke, Carolin; Herten, Andreas
description The rapid advancement of machine learning (ML) technologies has driven the development of specialized hardware accelerators designed to facilitate more efficient model training. This paper introduces the CARAML benchmark suite, which is employed to assess performance and energy consumption during the training of transformer-based large language models and computer vision models on a range of hardware accelerators, including systems from NVIDIA, AMD, and Graphcore. CARAML provides a compact, automated, extensible, and reproducible framework for assessing the performance and energy of ML workloads across various novel hardware architectures. The design and implementation of CARAML, along with a custom power measurement tool called jpwr, are discussed in detail.
doi 10.48550/arxiv.2409.12994
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2409.12994
language eng
recordid cdi_arxiv_primary_2409_12994
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Distributed, Parallel, and Cluster Computing
Computer Science - Hardware Architecture
Computer Science - Learning
Computer Science - Performance
title Performance and Power: Systematic Evaluation of AI Workloads on Accelerators with CARAML