Ternary Neural Networks Based on on/off Memristors: Set-Up and Training
Neuromorphic systems based on hardware neural networks (HNNs) are expected to be an energy and time-efficient computing architecture for solving complex tasks. In this paper, we consider the implementation of deep neural networks (DNNs) using crossbar arrays of memristors. More specifically, we consider the case where such devices can be configured in just two states: the low-resistance state (LRS) and the high-resistance state (HRS).
Saved in:
Published in: | Electronics (Basel) 2022-05, Vol.11 (10), p.1526 |
---|---|
Main Authors: | Morell, Antoni; Machado, Elvis Díaz; Miranda, Enrique; Boquet, Guillem; Vicario, Jose Lopez |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | |
---|---|
container_issue | 10 |
container_start_page | 1526 |
container_title | Electronics (Basel) |
container_volume | 11 |
creator | Morell, Antoni; Machado, Elvis Díaz; Miranda, Enrique; Boquet, Guillem; Vicario, Jose Lopez |
description | Neuromorphic systems based on hardware neural networks (HNNs) are expected to be an energy and time-efficient computing architecture for solving complex tasks. In this paper, we consider the implementation of deep neural networks (DNNs) using crossbar arrays of memristors. More specifically, we considered the case where such devices can be configured in just two states: the low-resistance state (LRS) and the high-resistance state (HRS). HNNs suffer from several non-idealities that need to be solved when mapping our software-based models. A clear example in memristor-based neural networks is conductance variability, which is inherent to resistive switching devices, so achieving good performance in an HNN largely depends on the development of reliable weight storage or, alternatively, mitigation techniques against weight uncertainty. In this manuscript, we provide guidelines for a system-level designer where we take into account several issues related to the set-up of the HNN, such as what the appropriate conductance value in the LRS is or the adaptive conversion of current outputs at one stage to input voltages for the next stage. A second contribution is the training of the system, which is performed via offline learning and considers the hardware imperfections, which in this case are conductance fluctuations. Finally, the resulting inference system is tested on two well-known databases from MNIST, showing that it is competitive in terms of classification performance against the software-based counterpart. Additional advice and insights on system tuning and expected performance are given throughout the paper. |
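The abstract describes two ideas that are easy to prototype in software before mapping to hardware: quantizing weights to three levels (so each weight can be realized with a pair of on/off memristors in a differential crossbar column) and injecting conductance-like noise into those quantized weights during offline training so the network learns to tolerate device variability. The sketch below illustrates that general approach; the layer sizes, quantization threshold, noise level, and straight-through estimator are illustrative assumptions and not the authors' implementation.

```python
# Hypothetical sketch of noise-aware training with ternary weights,
# loosely following the ideas in the abstract. All hyperparameters
# (threshold, noise_std, layer sizes) are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyTernaryLinear(nn.Module):
    def __init__(self, in_features, out_features, noise_std=0.05, threshold=0.05):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.noise_std = noise_std    # emulates conductance spread of LRS/HRS devices
        self.threshold = threshold    # |w| below this maps to the "off" state

    def forward(self, x):
        w = self.weight
        # Quantize to {-1, 0, +1}; each level could be realized by a pair
        # of on/off memristors wired differentially.
        w_q = torch.sign(w) * (w.abs() > self.threshold).float()
        # Inject multiplicative noise on the non-zero levels during training
        # to mimic conductance fluctuations of the resistive devices.
        if self.training:
            w_q = w_q * (1.0 + self.noise_std * torch.randn_like(w_q))
        # Straight-through estimator: gradients flow to the real-valued weights.
        w_eff = w + (w_q - w).detach()
        return F.linear(x, w_eff)


# Example: a small MNIST-sized classifier built from such layers.
model = nn.Sequential(
    nn.Flatten(),
    NoisyTernaryLinear(784, 128), nn.ReLU(),
    NoisyTernaryLinear(128, 10),
)
```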
doi_str_mv | 10.3390/electronics11101526 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2079-9292 |
ispartof | Electronics (Basel), 2022-05, Vol.11 (10), p.1526 |
issn | 2079-9292 2079-9292 |
language | eng |
recordid | cdi_proquest_journals_2670132408 |
source | MDPI - Multidisciplinary Digital Publishing Institute; Elektronische Zeitschriftenbibliothek - Freely Accessible E-Journals |
subjects | Accuracy Arrays Artificial neural networks Design Energy efficiency Field programmable gate arrays Hardware Machine learning Memristors Neural networks Software Task complexity Training Values |
title | Ternary Neural Networks Based on on/off Memristors: Set-Up and Training |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T20%3A11%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Ternary%20Neural%20Networks%20Based%20on%20on/off%20Memristors:%20Set-Up%20and%20Training&rft.jtitle=Electronics%20(Basel)&rft.au=Morell,%20Antoni&rft.date=2022-05-01&rft.volume=11&rft.issue=10&rft.spage=1526&rft.pages=1526-&rft.issn=2079-9292&rft.eissn=2079-9292&rft_id=info:doi/10.3390/electronics11101526&rft_dat=%3Cproquest_cross%3E2670132408%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2670132408&rft_id=info:pmid/&rfr_iscdi=true |