Weight Programming in DNN Analog Hardware Accelerators in the Presence of NVM Variability

Crossbar arrays of nonvolatile memory (NVM) can potentially accelerate development of deep neural networks (DNNs) by implementing crucial multiply–accumulate (MAC) operations at the location of data. Effective weight‐programming procedures can both minimize the performance impact during training and reduce the downtime for inference, where new parameter sets may need to be loaded. Simultaneous weight programming along an entire dimension (e.g., row or column) of a crossbar array in the context of forward inference and training is shown to be important. A framework for determining the optimal hardware conditions in which to program weights is provided, and its efficacy in the presence of considerable NVM variability is explored through simulations. This strategy is shown capable of programming 98–99% of weights effectively, in a manner that is both largely independent of the target weight distribution and highly tolerant to variability in NVM conductance‐versus‐pulse characteristics. The probability that a weight fails to reach its target value, termed Pfail, is quantified, and the fundamental trade‐off between Pfail and weight‐programming speed is explored. Lastly, the impact of imperfectly programmed weights on DNN test accuracies is examined for various networks, including multilayer perceptrons (MLPs) and long short‐term memory (LSTM).

A strategy for programming weights in parallel (e.g., row‐wise) in analog deep neural network (DNN) hardware accelerators is developed and, through simulations, programming efficacy in the presence of considerable nonvolatile memory variability is explored. The impact of imperfectly programmed weights on DNN accuracy is then examined for various networks, including multilayer perceptrons and long short‐term memory.

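The abstract's central ideas lend themselves to a small simulation: conductances along one crossbar row are nudged toward their targets in parallel by programming pulses whose response varies from device to device, the fraction of weights that never settle within tolerance gives Pfail, and spending more pulses (slower programming) lowers it. The sketch below is only an illustration of that trade-off, not the authors' framework; all device parameters (pulse step, variability sigma, tolerance, row size) are assumptions invented for this example.

```python
# Minimal illustrative sketch (not the paper's framework): row-wise closed-loop
# programming of NVM conductances under device-to-device variability, plus an
# estimate of Pfail. All parameters below are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

def program_row(targets, n_pulses=20, mean_step=0.05, step_sigma=0.02, tol=0.02):
    """Nudge every conductance in one row toward its target, in parallel.

    Each pulse moves a device by a random amount (mean_step +/- step_sigma),
    a crude stand-in for NVM conductance-versus-pulse variability; devices
    already within tolerance receive no further pulses.
    """
    g = np.zeros_like(targets)             # all devices start from a reset state
    for _ in range(n_pulses):
        error = targets - g
        needs_pulse = np.abs(error) > tol  # closed loop: pulse only off-target devices
        step = np.abs(rng.normal(mean_step, step_sigma, size=g.shape))
        g = g + np.where(needs_pulse, np.sign(error) * step, 0.0)
    return g

def p_fail(targets, programmed, tol=0.02):
    """Fraction of weights that did not reach their target within tolerance."""
    return float(np.mean(np.abs(targets - programmed) > tol))

# One 512-device row with target weights mapped into a normalized [0, 1] conductance range.
targets = np.clip(rng.normal(0.5, 0.15, size=512), 0.0, 1.0)

# Trade-off: more pulses means slower programming but a lower failure probability.
for n in (5, 10, 20, 40):
    print(f"{n:3d} pulses -> Pfail = {p_fail(targets, program_row(targets, n_pulses=n)):.3f}")

# Once programmed, the row performs the multiply-accumulate in place:
# the output is effectively a dot product of stored conductances and input voltages.
x = rng.uniform(0.0, 1.0, size=512)
print("row MAC output:", float(program_row(targets) @ x))
```

With these made-up parameters, the printed Pfail falls as the pulse budget grows, mirroring the speed-versus-accuracy trade-off the abstract describes.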

Bibliographic Details
Published in: Advanced Electronic Materials, 2019-09, Vol. 5 (9)
Authors: Mackin, Charles; Tsai, Hsinyu; Ambrogio, Stefano; Narayanan, Pritish; Chen, An; Burr, Geoffrey W.
Format: Article
Language: English
Subjects: analog hardware accelerator; crossbar array; deep learning; phase change memory
ISSN / EISSN: 2199-160X
DOI: 10.1002/aelm.201900026
Source: Wiley Online Library
Online Access: Full text