Analyzing Machine Learning Workloads Using a Detailed GPU Simulator

Most deep neural networks deployed today are trained using GPUs via high-level frameworks such as TensorFlow and PyTorch. This paper describes changes we made to the GPGPU-Sim simulator to enable it to run PyTorch by running PTX kernels included in NVIDIA's cuDNN library. We use the resulting modified simulator, which has been made available publicly with this paper, to study some simple deep learning workloads. With our changes to GPGPU-Sim's functional simulation model, we find the GPGPU-Sim performance model, running a cuDNN-enabled implementation of LeNet for MNIST, reports results within 30% of real hardware. Using GPGPU-Sim's AerialVision performance analysis tool, we observe that cuDNN API calls contain many varying phases and appear to include potentially inefficient microarchitecture behaviour such as DRAM partition bank camping, at least when executed on GPGPU-Sim's current performance model.
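
The workload class named in the abstract is small and self-contained: a LeNet-style convolutional network for MNIST, run through PyTorch so that its convolution layers dispatch to cuDNN kernels, which the modified GPGPU-Sim executes as PTX. The sketch below is an illustrative, commonly used LeNet-5 variant in PyTorch; the exact model configuration and training setup used in the paper are not specified in this record.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative LeNet-style CNN for MNIST (28x28 grayscale inputs).
# The precise layer configuration from the paper is not given here;
# this is a common LeNet-5 variant whose convolutions are backed by
# cuDNN when run on a CUDA device.
class LeNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)  # 28x28 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)            # 14x14 -> 10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 10x10 -> 5x5
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

# One forward/backward step on random data. On a CUDA-capable target the
# convolution kernels come from cuDNN, i.e. the kind of kernels the
# modified simulator runs as PTX.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = LeNet().to(device)
x = torch.randn(64, 1, 28, 28, device=device)
y = torch.randint(0, 10, (64,), device=device)
loss = F.cross_entropy(model(x), y)
loss.backward()
```

Tracing even a single training step like this exercises the sequence of cuDNN API calls and kernel phases that the paper analyzes with AerialVision.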

Bibliographic Details
Main Authors: Lew, Jonathan; Shah, Deval; Pati, Suchita; Cattell, Shaylin; Zhang, Mengchi; Sandhupatla, Amruth; Ng, Christopher; Goli, Negar; Sinclair, Matthew D; Rogers, Timothy G; Aamodt, Tor
Format: Article
Language: English
Subjects: Computer Science - Distributed, Parallel, and Cluster Computing
DOI: 10.48550/arxiv.1811.08933
Published: 2018-11-18
Source: arXiv.org
Online Access: https://arxiv.org/abs/1811.08933