Three-Dimensional NAND Flash for Vector-Matrix Multiplication
Three-Dimensional NAND flash technology is one of the most competitive integrated solutions for high-volume massive data storage. So far, there have been few investigations into how to use 3-D NAND flash for in-memory computing in neural network accelerators. In this brief, we propose using the 3-D vertical channel NAND array architecture to implement vector-matrix multiplication (VMM) for the first time. Based on array-level SPICE simulation, the bias condition, including the selector layer and the unselected layers, is optimized to achieve high VMM computation accuracy. Since the VMM can be performed layer by layer in a 3-D NAND array, the read-out latency is largely improved compared with the conventional single-cell read-out operation. The impact of device-to-device variation on the computation accuracy is also analyzed.
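The abstract above treats each selected NAND layer as an analog weight array: stored cell conductances act as matrix entries, read voltages act as the input vector, and the summed bit-line currents give the VMM result. A minimal sketch of that weighted-sum idea follows, with a hypothetical device-to-device variation term; all array sizes, voltage and conductance values, and names here are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

# Hypothetical model: one selected 3-D NAND layer as a conductance matrix G
# (siemens). Output currents I[j] = sum_i V[i] * G[i][j] implement the VMM,
# with read voltages applied in parallel and currents summed per bit line.
ROWS, COLS = 8, 4  # strings x bit lines in one layer (assumed sizes)
G = [[random.uniform(1e-6, 10e-6) for _ in range(COLS)] for _ in range(ROWS)]
V = [0.2 if i % 2 == 0 else 0.0 for i in range(ROWS)]  # example input pattern

def vmm(voltages, conductances):
    """Ideal analog VMM: each output current is a weighted sum of inputs."""
    rows, cols = len(conductances), len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(rows))
            for j in range(cols)]

ideal = vmm(V, G)

# Device-to-device variation: perturb each conductance by a relative sigma
# (5% is an illustrative value, not a figure from the paper).
sigma = 0.05
G_var = [[g * (1.0 + random.gauss(0.0, sigma)) for g in row] for row in G]
noisy = vmm(V, G_var)

errors = [abs(n - i) / i for n, i in zip(noisy, ideal)]
print("max relative VMM error: %.3f" % max(errors))
```

Reading a whole layer at once in this fashion, rather than one cell at a time, is what gives the layer-by-layer scheme its latency advantage over conventional single-cell read-out.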
Saved in:
Published in: | IEEE transactions on very large scale integration (VLSI) systems 2019-04, Vol.27 (4), p.988-991 |
---|---|
Main authors: | Wang, Panni; Xu, Feng; Wang, Bo; Gao, Bin; Wu, Huaqiang; Qian, He; Yu, Shimeng |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
container_end_page | 991 |
---|---|
container_issue | 4 |
container_start_page | 988 |
container_title | IEEE transactions on very large scale integration (VLSI) systems |
container_volume | 27 |
creator | Wang, Panni; Xu, Feng; Wang, Bo; Gao, Bin; Wu, Huaqiang; Qian, He; Yu, Shimeng |
description | Three-Dimensional NAND flash technology is one of the most competitive integrated solutions for high-volume massive data storage. So far, there have been few investigations into how to use 3-D NAND flash for in-memory computing in neural network accelerators. In this brief, we propose using the 3-D vertical channel NAND array architecture to implement vector-matrix multiplication (VMM) for the first time. Based on array-level SPICE simulation, the bias condition, including the selector layer and the unselected layers, is optimized to achieve high VMM computation accuracy. Since the VMM can be performed layer by layer in a 3-D NAND array, the read-out latency is largely improved compared with the conventional single-cell read-out operation. The impact of device-to-device variation on the computation accuracy is also analyzed. |
doi_str_mv | 10.1109/TVLSI.2018.2882194 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1063-8210 |
ispartof | IEEE transactions on very large scale integration (VLSI) systems, 2019-04, Vol.27 (4), p.988-991 |
issn | 1063-8210 1557-9999 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TVLSI_2018_2882194 |
source | IEEE Electronic Library (IEL) |
subjects | 3-D NAND flash; Arrays; Computation; Computer architecture; Computer simulation; Data storage; Flash memory (computers); Logic gates; Matrix algebra; Matrix methods; Microprocessors; Multiplication; neural network; Neural networks; Resistance; Solid modeling; Transistors; vector-matrix multiplication (VMM); Virtual machine monitors; weighted sum |
title | Three-Dimensional NAND Flash for Vector-Matrix Multiplication |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-16T02%3A53%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Three-Dimensional%20nand%20Flash%20for%20Vector-Matrix%20Multiplication&rft.jtitle=IEEE%20transactions%20on%20very%20large%20scale%20integration%20(VLSI)%20systems&rft.au=Wang,%20Panni&rft.date=2019-04-01&rft.volume=27&rft.issue=4&rft.spage=988&rft.epage=991&rft.pages=988-991&rft.issn=1063-8210&rft.eissn=1557-9999&rft.coden=IEVSE9&rft_id=info:doi/10.1109/TVLSI.2018.2882194&rft_dat=%3Cproquest_RIE%3E2196881351%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2196881351&rft_id=info:pmid/&rft_ieee_id=8571188&rfr_iscdi=true |