STIFT: A Spatio-Temporal Integrated Folding Tree for Efficient Reductions in Flexible DNN Accelerators

The increasing deployment of Deep Neural Networks (DNNs) has recently fueled interest in the development of specific accelerator architectures capable of meeting their stringent performance and energy-consumption requirements. DNN accelerators can be organized around three separate NoCs between the global buffer(s) and the compute units (multipliers/adders), namely the distribution, multiplier, and reduction networks (DN, MN, and RN, respectively). Among them, the RN, used to generate and reduce the partial sums produced during DNN processing, is a first-order driver of the area and energy efficiency of the accelerator. RNs can be orchestrated to exploit a Temporal, Spatial, or Spatio-Temporal reduction dataflow; among these, Spatio-Temporal reduction has shown superior performance. However, as we demonstrate in this work, a state-of-the-art implementation of the Spatio-Temporal reduction dataflow, based on the addition of Accumulators (Ac) to the RN (i.e., the RN+Ac strategy), can result in significant area and energy expenses. To cope with this important issue, we propose STIFT (Spatio-Temporal Integrated Folding Tree), which implements the Spatio-Temporal reduction dataflow entirely on the RN hardware substrate, i.e., without the need for extra accumulators. STIFT yields significant area and power savings relative to the more complex RN+Ac strategy while preserving its performance advantage.
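
To make the dataflow terminology in the abstract concrete, the short Python sketch below contrasts the three reduction styles it names. This is an illustration only, not code from the paper: the function names, the one-add-per-cycle model, and the batch schedule are assumptions for exposition.

```python
# Minimal sketch (illustrative, not from the paper) of the three reduction
# dataflows used to combine partial sums in a DNN accelerator's RN.

def temporal_reduction(psums):
    """Temporal: a single accumulator adds one partial sum per cycle."""
    acc = 0
    for p in psums:                  # each iteration models one cycle
        acc += p
    return acc

def spatial_reduction(psums):
    """Spatial: a binary adder tree reduces N partial sums in log2(N) levels."""
    level = list(psums)
    while len(level) > 1:            # each pass models one tree level
        if len(level) % 2:           # odd leftover is padded with identity
            level.append(0)
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return level[0]

def spatio_temporal_reduction(psum_batches):
    """Spatio-Temporal: reduce each batch spatially, accumulate across cycles."""
    acc = 0
    for batch in psum_batches:       # temporal dimension (successive cycles)
        acc += spatial_reduction(batch)  # spatial dimension (adder tree)
    return acc

# Example: 8 multipliers emit partial sums over 4 cycles.
batches = [[1, 2, 3, 4, 5, 6, 7, 8]] * 4
assert spatio_temporal_reduction(batches) == sum(sum(b) for b in batches)
```

In this toy model, the RN+Ac strategy roughly corresponds to placing the `acc` register outside the adder tree, whereas STIFT folds the temporal accumulation into the tree hardware itself; the sketch captures only the arithmetic, not the area and energy costs the paper analyzes.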

Bibliographic details

Published in: ACM Journal on Emerging Technologies in Computing Systems, 2023-09, Vol. 19 (4), p. 1-20, Article 32
Authors: Muñoz-Martínez, Francisco; Abellán, José L.; Acacio, Manuel E.; Krishna, Tushar
Format: Article
Language: English
Subjects: Emerging architectures; Hardware; Hardware accelerators
Online access: Full text
DOI: 10.1145/3531011
ISSN: 1550-4832
EISSN: 1550-4840
Publisher: ACM, New York, NY
Source: ACM Digital Library Complete