Improving Memory Utilization in Convolutional Neural Network Accelerators

While the accuracy of convolutional neural networks has improved substantially through larger and deeper network architectures, the memory footprint for storing their parameters and activations has grown accordingly. This trend particularly challenges power- and resource-limited accelerator designs, which are often restricted to storing all network data in on-chip memory to avoid interfacing with energy-hungry external memories. Maximizing the network size that fits on a given accelerator therefore requires maximizing its memory utilization. Whereas the traditional ping-pong buffering technique maps consecutive activation layers to disjoint memory regions, we propose a mapping method that allows these regions to overlap and thus uses the memory more efficiently.

Detailed Description

Saved in:
Bibliographic Details
Published in: arXiv.org 2021-04
Main authors: Jokic, Petar, Emery, Stephane, Benini, Luca
Format: Article
Language: English
Subjects:
Online access: Full text
description While the accuracy of convolutional neural networks has improved substantially through larger and deeper network architectures, the memory footprint for storing their parameters and activations has grown accordingly. This trend particularly challenges power- and resource-limited accelerator designs, which are often restricted to storing all network data in on-chip memory to avoid interfacing with energy-hungry external memories. Maximizing the network size that fits on a given accelerator therefore requires maximizing its memory utilization. Whereas the traditional ping-pong buffering technique maps consecutive activation layers to disjoint memory regions, we propose a mapping method that allows these regions to overlap and thus uses the memory more efficiently. This work presents a mathematical model for computing the maximum overlap of activation memory, and thus the lower bound of on-chip memory needed for layer-by-layer processing of convolutional neural networks on memory-limited accelerators. Our experiments with various real-world object-detection networks show that the proposed mapping technique can decrease the activation memory by up to 32.9%, reducing the overall memory for the entire network by up to 23.9% compared to traditional ping-pong buffering. For higher-resolution de-noising networks, we achieve activation memory savings of 48.8%. Additionally, we implement a face-detection network on an FPGA-based camera to validate these memory savings in a complete end-to-end system.
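The contrast the abstract draws between ping-pong buffering and overlapping activation buffers can be sketched with a toy memory model. This is an illustrative simplification, not the paper's exact model: the function names and the activation sizes are hypothetical, and the paper's overlap method can push the bound even lower by letting a layer's output overwrite already-consumed parts of its input.

```python
# Toy comparison of on-chip activation-memory bounds for layer-by-layer
# CNN inference. Simplified sketch only; the paper's model additionally
# exploits overlap between a layer's input and output regions.

def pingpong_memory(activation_sizes):
    """Traditional ping-pong buffering: two disjoint buffers, each sized
    for the largest activation tensor in the network."""
    return 2 * max(activation_sizes)

def pairwise_memory(activation_sizes):
    """Tighter bound without input/output overlap: only a layer's input
    and output must coexist, so memory is driven by the largest sum of
    two adjacent layers' activation sizes."""
    return max(a + b for a, b in zip(activation_sizes, activation_sizes[1:]))

# Hypothetical per-layer activation sizes (e.g. in KiB) for a small network.
sizes = [100, 80, 60, 40]
print(pingpong_memory(sizes))   # 200
print(pairwise_memory(sizes))   # 180
```

Even this crude pairwise bound already undercuts ping-pong buffering whenever activation sizes shrink through the network, which is the typical case for the detector networks the abstract evaluates.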
DOI: 10.48550/arxiv.2007.09963
EISSN: 2331-8422
Source: arXiv.org; Free E-Journals
Subjects: Accelerators
Activation
Artificial neural networks
Buffers
Computer architecture
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
Computer Science - Neural and Evolutionary Computing
Lower bounds
Mapping
Neural networks