MemFlow: Memory-Driven Data Scheduling With Datapath Co-Design in Accelerators for Large-Scale Inference Applications
The increasing importance of inference algorithms, such as neural networks (NNs), principal component analysis (PCA), and singular value decomposition (SVD), has led to the emergence of hardware accelerators to address power-performance tradeoffs in their implementation. Their large data sets...
Saved in:
Published in: | IEEE transactions on computer-aided design of integrated circuits and systems 2020-09, Vol.39 (9), p.1875-1888 |
---|---|
Main authors: | Nie, Qi; Malik, Sharad |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
container_end_page | 1888 |
---|---|
container_issue | 9 |
container_start_page | 1875 |
container_title | IEEE transactions on computer-aided design of integrated circuits and systems |
container_volume | 39 |
creator | Nie, Qi; Malik, Sharad |
description | The increasing importance of inference algorithms, such as neural networks (NNs), principal component analysis (PCA), and singular value decomposition (SVD), has led to the emergence of hardware accelerators to address power-performance tradeoffs in their implementation. Their large data sets make DRAM access the bottleneck for power and performance. Private SRAM scratch-pad memory is used to mitigate the DRAM access penalty but it is a limited resource in size and bandwidth. Thus, accelerator design is not just about computation, but also about how data flow is scheduled across the memory hierarchy, including DRAM, scratch-pad SRAM, and datapath registers. Current accelerator design tools automate the generation of customized datapaths to improve performance, but have limited support for reducing DRAM/SRAM accesses during the computation. In this paper, we propose a memory-driven accelerator design methodology for large-scale inference applications, to maximize data access in the datapath and SRAM. We demonstrate its efficacy using several key kernels from large-scale inference applications. |
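The abstract argues that scheduling data flow across DRAM, scratch-pad SRAM, and datapath registers matters as much as the datapath itself. A back-of-envelope traffic model for one kernel family it covers (dense matrix multiply) makes the point concrete; the tiling scheme and access counts below are a generic illustration under stated assumptions, not the paper's MemFlow methodology:

```python
# Back-of-envelope DRAM traffic model for C = A x B (n x n matmul).
# Counts word-granularity DRAM accesses; the tiling scheme and these
# function names are illustrative assumptions, not the MemFlow method.

def dram_accesses_naive(n):
    """Every operand read comes from DRAM; each C element is
    accumulated in a register and written back once."""
    return 2 * n**3 + n**2           # reads of A and B, plus writes of C

def dram_accesses_tiled(n, t):
    """t x t tiles staged in scratch-pad SRAM: A and B tiles are loaded
    once per tile-level multiply; each C tile is loaded and stored once
    per (i, j) tile position while k-tiles accumulate in SRAM."""
    assert n % t == 0
    nt = n // t                      # tiles per dimension
    ab_traffic = nt**3 * 2 * t * t   # one A tile + one B tile per tile-multiply
    c_traffic = nt**2 * 2 * t * t    # load + store each C tile once
    return ab_traffic + c_traffic    # = 2*n**3/t + 2*n**2

# Larger tiles (bounded by SRAM capacity) cut the A/B DRAM traffic by
# a factor of t:
print(dram_accesses_naive(64))       # 528384
print(dram_accesses_tiled(64, 8))    # 73728
```

With 8x8 tiles the model already shows a roughly 7x reduction in DRAM accesses for a 64x64 multiply, which is the kind of win that motivates co-designing the data schedule with the datapath rather than treating the memory hierarchy as an afterthought.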
doi_str_mv | 10.1109/TCAD.2019.2925377 |
format | Article |
publisher | New York: IEEE |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0278-0070 |
ispartof | IEEE transactions on computer-aided design of integrated circuits and systems, 2020-09, Vol.39 (9), p.1875-1888 |
issn | 0278-0070 1937-4151 |
language | eng |
recordid | cdi_ieee_primary_8747420 |
source | IEEE Electronic Library (IEL) |
subjects | Accelerator; Accelerators; Algorithms; Bandwidth; Co-design; Computation; Data paths; data scheduling; Dynamic random access memory; hardware/software co-design; Inference; Kernel; large-scale computing; memory utilization; Neural networks; Optimization; Performance enhancement; Pipeline processing; Principal components analysis; Random access memory; Registers; Scheduling; Singular value decomposition; Static random access memory |
title | MemFlow: Memory-Driven Data Scheduling With Datapath Co-Design in Accelerators for Large-Scale Inference Applications |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T12%3A17%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=MemFlow:%20Memory-Driven%20Data%20Scheduling%20With%20Datapath%20Co-Design%20in%20Accelerators%20for%20Large-Scale%20Inference%20Applications&rft.jtitle=IEEE%20transactions%20on%20computer-aided%20design%20of%20integrated%20circuits%20and%20systems&rft.au=Nie,%20Qi&rft.date=2020-09-01&rft.volume=39&rft.issue=9&rft.spage=1875&rft.epage=1888&rft.pages=1875-1888&rft.issn=0278-0070&rft.eissn=1937-4151&rft.coden=ITCSDI&rft_id=info:doi/10.1109/TCAD.2019.2925377&rft_dat=%3Cproquest_RIE%3E2436404905%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2436404905&rft_id=info:pmid/&rft_ieee_id=8747420&rfr_iscdi=true |