Sparse GPU Kernels for Deep Learning

Bibliographic details
Main authors: Gale, Trevor; Zaharia, Matei; Young, Cliff; Elsen, Erich
Format: Article
Language: eng
Subjects:
Online access: order full text
creator Gale, Trevor; Zaharia, Matei; Young, Cliff; Elsen, Erich
description Scientific workloads have traditionally exploited high levels of sparsity to accelerate computation and reduce memory requirements. While deep neural networks can be made sparse, achieving practical speedups on GPUs is difficult because these applications have relatively moderate levels of sparsity that are not sufficient for existing sparse kernels to outperform their dense counterparts. In this work, we study sparse matrices from deep learning applications and identify favorable properties that can be exploited to accelerate computation. Based on these insights, we develop high-performance GPU kernels for two sparse matrix operations widely applicable in neural networks: sparse matrix-dense matrix multiplication and sampled dense-dense matrix multiplication. Our kernels reach 27% of single-precision peak on Nvidia V100 GPUs. Using our kernels, we demonstrate sparse Transformer and MobileNet models that achieve 1.2-2.1x speedups and up to 12.8x memory savings without sacrificing accuracy.
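The description names two sparse operations. Below is a minimal pure-Python reference sketch of their semantics using the CSR (compressed sparse row) format; the function and argument names are illustrative, and this is CPU reference code for clarity, not the authors' GPU kernels.

```python
def spmm(indptr, indices, values, B):
    """Sparse matrix-dense matrix multiply: C = A @ B.
    A is an m x k sparse matrix in CSR form (indptr, indices, values);
    B is a dense k x n matrix as a list of rows."""
    m = len(indptr) - 1
    n = len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        # Iterate only over the nonzeros of row i of A.
        for p in range(indptr[i], indptr[i + 1]):
            k, v = indices[p], values[p]
            for j in range(n):
                C[i][j] += v * B[k][j]
    return C

def sddmm(indptr, indices, A, Bt):
    """Sampled dense-dense matrix multiply: compute (A @ B)[i, j],
    but only at positions (i, j) present in the CSR sparsity pattern.
    A is dense (rows A[i]); Bt holds the columns of B as rows (Bt[j]).
    Returns the sampled values in CSR nonzero order."""
    out = []
    for i in range(len(indptr) - 1):
        for p in range(indptr[i], indptr[i + 1]):
            j = indices[p]
            out.append(sum(a * b for a, b in zip(A[i], Bt[j])))
    return out
```

In a sparse Transformer, SDDMM computes attention scores only where the attention mask is nonzero, and SpMM applies those sparse scores to the dense value matrix; the CSR pattern plays the role of the mask.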
doi_str_mv 10.48550/arxiv.2006.10901
format Article
identifier DOI: 10.48550/arxiv.2006.10901
language eng
source arXiv.org
subjects Computer Science - Distributed, Parallel, and Cluster Computing
Computer Science - Learning
Statistics - Machine Learning
title Sparse GPU Kernels for Deep Learning
url https://arxiv.org/abs/2006.10901