MULTI-LAYER NEURAL NETWORK PROCESSING BY A NEURAL NETWORK ACCELERATOR USING HOST COMMUNICATED MERGED WEIGHTS AND A PACKAGE OF PER-LAYER INSTRUCTIONS

In the disclosed methods and systems for processing in a neural network system, a host computer system writes a plurality of weight matrices associated with a plurality of layers of a neural network to a memory shared with a neural network accelerator. The host computer system further assembles a plurality of per-layer instructions into an instruction package. Each per-layer instruction specifies processing of a respective layer of the plurality of layers of the neural network, and respective offsets of weight matrices in a shared memory. The host computer system writes input data and the instruction package to the shared memory. The neural network accelerator reads the instruction package from the shared memory and processes the plurality of per-layer instructions of the instruction package.
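The abstract describes a coordination scheme: the host packs one instruction per layer, each carrying the offset of that layer's weight matrix in shared memory, into a single package that the accelerator later walks. The sketch below illustrates that packing and unpacking flow; the struct layout, field names, and offsets are invented for illustration only, since the abstract does not disclose a concrete binary format.

```python
import struct

# Hypothetical per-layer instruction: (layer_id, weight_offset, input_offset, output_offset),
# encoded as four little-endian uint32 fields. The real patent format is not specified here.
INSTR_FMT = "<4I"

def build_package(layer_instrs):
    """Host side: assemble the per-layer instructions into one instruction package."""
    header = struct.pack("<I", len(layer_instrs))  # leading instruction count
    body = b"".join(struct.pack(INSTR_FMT, *instr) for instr in layer_instrs)
    return header + body

def read_package(blob):
    """Accelerator side: read the package back and yield each per-layer instruction."""
    (count,) = struct.unpack_from("<I", blob, 0)
    size = struct.calcsize(INSTR_FMT)
    for i in range(count):
        yield struct.unpack_from(INSTR_FMT, blob, 4 + i * size)

# The host would write the weight matrices at the chosen offsets, then write
# input data and this package to the shared memory for the accelerator to read.
pkg = build_package([(0, 0x1000, 0x0000, 0x8000),
                     (1, 0x5000, 0x8000, 0x9000)])
layers = list(read_package(pkg))
```

In this sketch the package is self-describing (count header plus fixed-size entries), so the accelerator can process an arbitrary number of layers without host intervention after the single write.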

Detailed Description

Saved in:
Bibliographic Details
Main Authors: NG, Aaron; TENG, Xiao; SETTLE, Sean; GHASEMI, Ehsan; ZEJDA, Jindrich; SIRASAO, Ashish; WU, Yongjun; DELAYE, Elliott
Format: Patent
Language: eng ; fre ; ger
Subjects:
Online Access: Order full text
Record ID: cdi_epo_espacenet_EP3698296A1
Source: esp@cenet
Subjects: CALCULATING ; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS ; COMPUTING ; COUNTING ; PHYSICS