Mapping of CNNs on multi-core RRAM-based CIM architectures

RRAM-based multi-core systems improve the energy efficiency and performance of CNNs. However, the distributed parallel execution of convolutional layers causes critical data dependencies that limit the potential speedup. This paper presents synchronization techniques for parallel inference of convolutional layers on RRAM-based CIM architectures. We propose an architecture optimization that enables efficient data exchange and discuss the impact of different architecture setups on performance. The corresponding compiler algorithms are optimized for high speedup and low memory consumption during CNN inference. We achieve more than 99% of the theoretical acceleration limit with a marginal data transmission overhead of less than 4% for state-of-the-art CNN benchmarks.
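The abstract attributes the limited speedup to inter-core data dependencies and reports a data-transmission overhead below 4%. As a rough intuition for where such an overhead comes from, the sketch below models a generic halo exchange when a convolution's output rows are partitioned across cores; it is not the compiler algorithm from the paper, and all layer dimensions are hypothetical.

```python
# Minimal illustrative sketch (not the paper's algorithm): it models the
# halo-exchange data dependency that arises when a convolutional layer's
# output rows are partitioned across several CIM cores, assuming the previous
# layer's feature map is stored row-wise on the same cores.

def halo_overhead(in_rows: int, in_cols: int, kernel: int, stride: int, cores: int) -> float:
    """Fraction of the input feature map that neighbouring cores must exchange
    when the output rows are split evenly across `cores` cores."""
    assert 1 <= stride <= kernel and cores >= 1
    # Each of the (cores - 1) internal partition boundaries requires
    # (kernel - stride) input rows from the neighbouring core.
    halo_rows = (cores - 1) * (kernel - stride)
    return (halo_rows * in_cols) / (in_rows * in_cols)

if __name__ == "__main__":
    # Hypothetical 56x56 feature map with a 3x3, stride-1 kernel.
    for cores in (2, 4, 8):
        overhead = halo_overhead(in_rows=56, in_cols=56, kernel=3, stride=1, cores=cores)
        print(f"{cores} cores: ~{overhead:.1%} of the input exchanged as halo data")
```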


Bibliographic Details
Main Authors: Pelke, Rebecca; Bosbach, Nils; Cubero, Jose; Staudigl, Felix; Leupers, Rainer; Joseph, Jan Moritz
Format: Article
Language: English
Subjects: Computer Science - Hardware Architecture
Online Access: https://arxiv.org/abs/2309.03805
DOI: 10.48550/arxiv.2309.03805
Published: 2023-09-07
Source: arXiv.org