Neural network weight distribution from a grid of memory elements

Neural inference chips for computing neural activations are provided. In various embodiments, a neural inference chip comprises at least one neural core, a memory array, an instruction buffer, and an instruction memory. The instruction buffer has a position corresponding to each of a plurality of elements of the memory array. The instruction memory provides at least one instruction to the instruction buffer. The instruction buffer advances the at least one instruction between positions in the instruction buffer. The instruction buffer provides the at least one instruction to at least one of the plurality of elements of the memory array from its associated position in the instruction buffer when the memory of the at least one of the plurality of elements contains data associated with the at least one instruction. Each element of the memory array provides a data block from its memory to its horizontal buffer in response to the arrival of an associated instruction from the instruction buffer. The horizontal buffer of each element of the memory array provides a data block to the horizontal buffer of another of the elements of the memory array or to the at least one neural core.
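
The abstract describes a pipelined dataflow: instructions march position by position through the instruction buffer, an element fires when its local memory holds the data block named by the instruction at its position, and fired blocks hop element to element through horizontal buffers until they reach a neural core. The sketch below is a minimal cycle-level simulation of that dataflow for a one-dimensional array; all names (MemoryElement, distribute, string tags for data blocks) are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryElement:
    # Hypothetical model of one memory-array element: a local memory
    # of tagged data blocks plus a one-entry horizontal buffer.
    memory: dict = field(default_factory=dict)   # tag -> data block
    horizontal_buffer: object = None             # block in flight, or None

def distribute(elements, instructions, core_sink):
    # Instruction buffer: one position per memory-array element.
    instr_buffer = [None] * len(elements)
    pending = list(instructions)

    # Run cycles until all instructions and in-flight blocks drain.
    while pending or any(t is not None for t in instr_buffer) or \
            any(e.horizontal_buffer is not None for e in elements):
        # 1. Cascade horizontal buffers one step toward the core:
        #    element 0 delivers to the core, element i hands off to i-1.
        if elements[0].horizontal_buffer is not None:
            core_sink.append(elements[0].horizontal_buffer)
            elements[0].horizontal_buffer = None
        for i in range(1, len(elements)):
            if (elements[i].horizontal_buffer is not None
                    and elements[i - 1].horizontal_buffer is None):
                elements[i - 1].horizontal_buffer = elements[i].horizontal_buffer
                elements[i].horizontal_buffer = None

        # 2. Fire: an element whose memory contains the data associated
        #    with the instruction at its position copies that block into
        #    its horizontal buffer (once the buffer is free).
        for i, tag in enumerate(instr_buffer):
            if (tag is not None and tag in elements[i].memory
                    and elements[i].horizontal_buffer is None):
                elements[i].horizontal_buffer = elements[i].memory[tag]
                instr_buffer[i] = None

        # 3. Advance: move each unfired instruction to the next position,
        #    unless it is waiting at its matching element for the buffer.
        for i in range(len(elements) - 1, 0, -1):
            if (instr_buffer[i] is None and instr_buffer[i - 1] is not None
                    and instr_buffer[i - 1] not in elements[i - 1].memory):
                instr_buffer[i] = instr_buffer[i - 1]
                instr_buffer[i - 1] = None
        if instr_buffer[0] is None and pending:
            instr_buffer[0] = pending.pop(0)

if __name__ == "__main__":
    # Three elements, each holding one tagged weight block.
    elems = [MemoryElement(memory={t: f"weights[{t}]"}) for t in ("w0", "w1", "w2")]
    delivered = []
    distribute(elems, ["w2", "w0", "w1"], delivered)
    print(delivered)  # -> ['weights[w0]', 'weights[w2]', 'weights[w1]']
```

One point from the abstract that the sketch makes concrete: an instruction is delivered to an element only when that element's memory contains the associated data, so a matching instruction waits in its buffer position until the element can accept it, rather than advancing past its target.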

Bibliographic details
Inventors: MODHA, Dharmendra; TABA, Brian Seisho; ARTHUR, John Vernon; ORTEGA OTERO, Carlos; NAYAK, Tapan; AKOPYAN, Filipp; SAWADA, Jun; DATTA, Pallab; CASSIDY, Andrew Stephen
Format: Patent
Language: English
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; PHYSICS
Patent number: AU 2021251304 B2
Publication date: 2023-04-06
Source: esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
PHYSICS
title Neural network weight distribution from a grid of memory elements
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-07T18%3A04%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=MODHA,%20Dharmendra&rft.date=2023-04-06&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EAU2021251304BB2%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true