A 65-nm Neuromorphic Image Classification Processor With Energy-Efficient Training Through Direct Spike-Only Feedback


Detailed description

Bibliographic details
Published in: IEEE journal of solid-state circuits, 2020-01, Vol. 55 (1), p. 108-119
Main authors: Park, Jeongwoo; Lee, Juyun; Jeon, Dongsuk
Format: Article
Language: English
Description: Recent advances in neural networks (NNs) and machine learning algorithms have sparked a wide array of research in specialized hardware, ranging from high-performance NN accelerators for server systems to energy-efficient edge computing systems. While most of these studies have focused on designing inference engines, implementing the training process of an NN on energy-constrained mobile devices has remained a challenge due to the requirement of higher numerical precision. In this article, we aim to build an on-chip learning system that achieves highly energy-efficient training for NNs without degrading performance on machine learning tasks. To achieve this goal, we adapt and optimize a neuromorphic learning algorithm and propose hardware design techniques that fully exploit the properties of the modifications. We verify that our system achieves energy-efficient training with only 7.5% more energy consumption than its highly efficient inference of 236 nJ/image on handwritten digit [Modified National Institute of Standards and Technology database (MNIST)] images. Moreover, our system achieves 97.83% classification accuracy on the MNIST test data set, which outperforms prior neuromorphic on-chip learning systems and is close to the performance of backpropagation, the conventional method for training deep NNs.
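The abstract describes training through "direct spike-only feedback": output errors are delivered directly to each layer as spike events rather than as high-precision backpropagated gradients, which is what keeps training energy close to inference energy. The paper's exact algorithm and network are not reproduced in this record; the following is only a toy sketch of the general idea, in the spirit of direct feedback alignment with a ternarized (spike-like) error. All names, shapes, thresholds, and learning rates here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike(x, theta=0.1):
    """Ternarize a real-valued error into spike events in {-1, 0, +1}."""
    return np.where(x > theta, 1.0, np.where(x < -theta, -1.0, 0.0))

# Tiny illustrative 4 -> 8 -> 2 network (hypothetical sizes).
W1 = rng.normal(0.0, 0.5, (4, 8))
W2 = rng.normal(0.0, 0.5, (8, 2))
B1 = rng.normal(0.0, 0.5, (2, 8))   # fixed random direct-feedback path to layer 1

def forward(x):
    h = np.maximum(0.0, x @ W1)      # ReLU hidden layer
    y = h @ W2
    return h, y

def train_step(x, target, lr=0.05):
    """One update: errors reach every layer only as ternary spikes, so the
    weight updates reduce to additions/subtractions of scaled activations."""
    global W1, W2
    h, y = forward(x)
    e = spike(y - target)                # spike-only output error
    d1 = spike(e @ B1) * (h > 0)         # direct feedback, gated by unit activity
    W2 -= lr * np.outer(h, e)
    W1 -= lr * np.outer(x, d1)
    return float(np.sum((y - target) ** 2))

x = np.array([1.0, 0.0, -1.0, 0.5])
t = np.array([1.0, -1.0])
losses = [train_step(x, t) for _ in range(200)]
print(losses[0], losses[-1])             # loss shrinks until the error falls below theta
```

Note the hardware-friendly property the abstract alludes to: because `e` and `d1` are ternary, no error gradients need to be stored or multiplied at high precision during the feedback pass.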
DOI: 10.1109/JSSC.2019.2942367
Publisher: New York: IEEE
CODEN: IJSCBC
ORCID: 0000-0002-0395-8076
ISSN: 0018-9200
EISSN: 1558-173X
Language: English
Record ID: cdi_ieee_primary_8867974
Source: IEEE Electronic Library (IEL)
Subjects:
Accelerators
Algorithms
Artificial intelligence
Artificial neural networks
Back propagation
Cognitive tasks
Computational efficiency
Design modifications
digital integrated circuits
Edge computing
Electronic devices
Energy
Energy consumption
Handwriting
Hardware
Image classification
Inference
learning systems
Machine learning
Machine learning algorithms
Microprocessors
multi layer perceptrons
Neural networks
Neuromorphics
Neurons
Task analysis
Training
very large-scale integration