Method and apparatus having a scalable architecture for neural networks


Bibliographic details
Main authors: Setty, Ravi Sreenivasa; Bandaaru, Venkateswarlu; Mital, Deepak; Ursachi, Vlad Ionut
Format: Patent
Language: English
Description: An artificial intelligence processor can optimize the usage of its neural network to process a data set more efficiently. The artificial intelligence processor can have a neural network of multiple arithmetic logic units, each having one or more computing engines and a local arithmetic memory, divided into a set of clusters arranged into a node ring. Each cluster can have a scheduler with a local scheduler memory. An advanced extensible interface can read a data set model from an external memory in a single data read. A memory manager can control the node ring. When the data size of the data set is larger than the processing model layer for processing the data set, the memory manager can slice the data set into data set chunks. The memory manager can assign a data set chunk to a data cluster. The memory manager can broadcast channel instructions from the processing model layer to every cluster. The memory manager can process the data set chunk in the data cluster according to the channel instructions of the processing model. Alternatively, when the data size of the data set is smaller than the processing model layer, the memory manager can slice the processing model layer into channel chunks. The memory manager can assign a channel chunk to a channel cluster. The memory manager can broadcast the data set to every cluster. The memory manager can process the data set in the channel cluster according to the channel instructions of the channel chunk.
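The abstract's two-way slicing decision (slice the data and broadcast the instructions, or slice the instructions and broadcast the data) can be illustrated with a minimal Python sketch. All names (`schedule`, `data_set`, `layer_channels`, `clusters`) and the size comparison by element count are assumptions for illustration; the patent does not publish an API.

```python
# Illustrative sketch of the memory manager's slicing decision.
# Names and the size heuristic are assumptions, not the patented design.

def schedule(data_set, layer_channels, clusters):
    """Distribute work across a number of clusters.

    data_set       -- list of input elements to process
    layer_channels -- per-channel instruction sets for one model layer
    clusters       -- number of available clusters in the node ring
    """
    if len(data_set) > len(layer_channels):
        # Data is larger than the layer: slice the data set into chunks
        # and broadcast the full channel instructions to every cluster
        # (data parallelism).
        chunk = -(-len(data_set) // clusters)  # ceiling division
        return [
            (data_set[i * chunk:(i + 1) * chunk], layer_channels)
            for i in range(clusters)
        ]
    else:
        # Layer is larger than the data: slice the layer into channel
        # chunks and broadcast the whole data set to every cluster
        # (model parallelism).
        chunk = -(-len(layer_channels) // clusters)
        return [
            (data_set, layer_channels[i * chunk:(i + 1) * chunk])
            for i in range(clusters)
        ]


# Each cluster receives a (data, channel instructions) pair.
work = schedule(list(range(8)), ["conv0", "conv1"], clusters=2)
```

In the first branch every cluster runs the same layer on a different slice of data; in the second, every cluster runs a different slice of the layer's channels on the same data, matching the two cases the abstract describes.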
Patent number: US2023120227A1
Publication date: 2023-04-20
Source: esp@cenet (European Patent Office)
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS