ARTIFICIAL INTELLIGENCE ACCELERATOR DEVICE

An artificial intelligence (AI) accelerator device may include a plurality of on-chip mini buffers that are associated with a processing element (PE) array. Each mini buffer is associated with a subset of rows or a subset of columns of the PE array. Partitioning an on-chip buffer of the AI accelerator device into the mini buffers described herein may reduce the size and complexity of the on-chip buffer. The reduced size of the on-chip buffer may reduce the wire routing complexity of the on-chip buffer, which may reduce latency and may reduce access energy for the AI accelerator device. This may increase the operating efficiency and/or may increase the performance of the AI accelerator device. Moreover, the mini buffers may increase the overall bandwidth that is available for the mini buffers to transfer data to and from the PE array.

Detailed Description

Bibliographic Details
Main Authors: SUN, Xiaoyu, AKARVARDAR, Murat Kerem, PENG, Xiaochen
Format: Patent
Language: eng
Subjects: CALCULATING; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; PHYSICS
Online Access: Order full text
creator SUN, Xiaoyu
AKARVARDAR, Murat Kerem
PENG, Xiaochen
description An artificial intelligence (AI) accelerator device may include a plurality of on-chip mini buffers that are associated with a processing element (PE) array. Each mini buffer is associated with a subset of rows or a subset of columns of the PE array. Partitioning an on-chip buffer of the AI accelerator device into the mini buffers described herein may reduce the size and complexity of the on-chip buffer. The reduced size of the on-chip buffer may reduce the wire routing complexity of the on-chip buffer, which may reduce latency and may reduce access energy for the AI accelerator device. This may increase the operating efficiency and/or may increase the performance of the AI accelerator device. Moreover, the mini buffers may increase the overall bandwidth that is available for the mini buffers to transfer data to and from the PE array.
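As a rough illustration of the partitioning scheme the abstract describes, the Python sketch below models an on-chip buffer split into mini buffers, each feeding a subset of PE-array rows. It is a hypothetical sketch, not taken from the patent: the class and function names, the even row partitioning, and the example sizes are all assumptions.

    # Hypothetical sketch (assumptions, not from the patent): partitioning an
    # on-chip buffer into mini buffers, each serving a subset of PE-array rows.
    from dataclasses import dataclass, field


    @dataclass
    class MiniBuffer:
        """A small buffer that holds operands for a contiguous range of PE rows."""
        rows: range                                 # PE-array rows this buffer feeds
        data: dict = field(default_factory=dict)    # row index -> list of operand words

        def write(self, row: int, words: list) -> None:
            # Each mini buffer only accepts data for its own row partition.
            assert row in self.rows, "row is outside this mini buffer's partition"
            self.data[row] = words

        def read(self, row: int) -> list:
            return self.data.get(row, [])


    def partition_buffer(num_pe_rows: int, num_mini_buffers: int) -> list:
        """Split the PE-array rows evenly across the requested number of mini buffers."""
        rows_per_buffer = (num_pe_rows + num_mini_buffers - 1) // num_mini_buffers
        return [
            MiniBuffer(rows=range(start, min(start + rows_per_buffer, num_pe_rows)))
            for start in range(0, num_pe_rows, rows_per_buffer)
        ]


    if __name__ == "__main__":
        # Example: a 16-row PE array fed by 4 mini buffers of 4 rows each.
        mini_buffers = partition_buffer(num_pe_rows=16, num_mini_buffers=4)
        mini_buffers[1].write(row=5, words=[0.25, 0.5, 0.75])
        print([list(mb.rows) for mb in mini_buffers])   # row partitions
        print(mini_buffers[1].read(5))                  # data stored for PE row 5

Running the example prints the four row partitions and the words written to row 5 of the second mini buffer; a column-wise partitioning, which the abstract also mentions, would work the same way with columns in place of rows.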
format Patent
language eng
recordid cdi_epo_espacenet_US2024069971A1
source esp@cenet
subjects CALCULATING
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
PHYSICS
title ARTIFICIAL INTELLIGENCE ACCELERATOR DEVICE