Accelerator for neural network computing and execution method thereof

An accelerator for neural network computing includes hardware engines and a buffer memory. The hardware engines include a convolution engine and at least a second engine. Each hardware engine includes circuitry to perform neural network operations. The buffer memory stores a first input tile and a second input tile of an input feature map. The second input tile overlaps with the first input tile in the buffer memory. The convolution engine is operative to retrieve the first input tile from the buffer memory, perform convolution operations on the first input tile to generate an intermediate tile of an intermediate feature map, and pass the intermediate tile to the second engine via the buffer memory.
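The abstract hinges on why adjacent input tiles must overlap in the buffer: a convolution of kernel height kh consumes kh - 1 extra boundary rows per tile, so neighboring tiles share a halo of that size and the per-tile outputs stitch together seamlessly. The NumPy sketch below illustrates this tiling scheme only; it is not the patent's implementation, and the function names and row-wise tiling are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(x, k):
    # Plain "valid" 2-D convolution (cross-correlation form), applied per tile.
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def tiled_conv2d(x, k, tile_rows):
    # Process the feature map in row tiles that overlap by (kh - 1) rows,
    # mimicking how a second input tile overlaps the first in buffer memory.
    kh = k.shape[0]
    halo = kh - 1
    outputs = []
    r = 0
    while r < x.shape[0] - halo:
        tile = x[r:r + tile_rows + halo, :]    # overlapping input tile
        outputs.append(conv2d_valid(tile, k))  # intermediate tile
        r += tile_rows
    return np.vstack(outputs)                  # tiles stitch with no seams
```

Because each tile carries its halo, `tiled_conv2d(x, k, tile_rows)` reproduces `conv2d_valid(x, k)` exactly; a downstream engine could consume each intermediate tile as it is produced, which is the fusion the abstract describes.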

Bibliographic details
Main authors: ANDRIAN, HENRRY; LIN, CHIEN-HUNG; HUNG, SHENG-JE; WANG, SHAO-YU; CHEN, TAI-LUNG; CHEN, YI-SIOU; WU, CHI-TA; KUO, YU-TING; CHENG, MENG-HSUAN
Format: Patent (published 2021-12-01)
Language: Chinese; English
Record ID: cdi_epo_espacenet_TWI748151BB
Source: esp@cenet
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; PHYSICS