Using Dataflow to Optimize Energy Efficiency of Deep Neural Network Accelerators

The authors demonstrate the key role dataflows play in the optimization of energy efficiency for deep neural network (DNN) accelerators. By introducing a systematic approach to analyze the problem and a new dataflow, called Row-Stationary, which is up to 2.5 times more energy efficient than existing dataflows in processing a state-of-the-art DNN, this work provides guidelines for future DNN accelerator designs.
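As a rough illustration of the abstract's premise (not code or data from the paper), the sketch below models why dataflow choice matters: moving an operand from DRAM costs far more energy than reusing it from local storage, so a dataflow that maximizes reuse wins. The access-cost ratios and access counts are illustrative placeholders, not measurements from the article.

```python
# Toy energy model: why dataflow (data reuse) dominates accelerator energy.
# Cost ratios below are illustrative placeholders, not figures from the paper.
COST = {"DRAM": 200.0, "global_buffer": 6.0, "register_file": 1.0}

def energy(accesses):
    """Total energy for a dict of {memory_level: access_count}."""
    return sum(COST[level] * count for level, count in accesses.items())

# Same computation under two hypothetical dataflows:
# (a) no reuse: every operand is fetched from DRAM each time it is used
no_reuse = {"DRAM": 10_000, "register_file": 10_000}
# (b) reuse-oriented: operands staged once, then served from local storage
reuse = {"DRAM": 500, "global_buffer": 2_000, "register_file": 10_000}

print(energy(no_reuse) / energy(reuse))  # reuse-oriented dataflow wins
```

Under these assumed costs, the reuse-oriented schedule performs the same register-file work but cuts DRAM traffic, which dominates the total; this is the kind of trade-off a dataflow analysis makes explicit.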

Detailed description

Bibliographic details
Published in: IEEE Micro, 2017, Vol. 37 (3), p. 12-21
Authors: Yu-Hsin Chen; Emer, Joel; Sze, Vivienne
Format: Article
Language: English
Online access: order full text
DOI: 10.1109/MM.2017.54
ISSN: 0272-1732
eISSN: 1937-4143
Source: IEEE Electronic Library (IEL)
Subjects:
Accelerators
Computer architecture
dataflow
Deep learning
deep neural network
Energy consumption
Energy efficiency
Energy management
Neural networks
Optimization
Power efficiency
Program processors
Radio frequency
Random access memory
spatial architecture
State of the art