Method and device for loading data in machine learning accelerator
Methods, apparatus, systems, and articles of manufacture for loading data into accelerators are disclosed. An example apparatus includes data provider circuitry to load a first segment and an additional amount of compressed machine learning parameter data into a processor engine. Processor engine circuitry performs machine learning operations using the compressed machine learning parameter data for the first segment. The compressed local data re-user circuit determines whether there is a second segment in the additional amount of compressed machine learning parameter data. When there is a second segment in the additional amount of compressed machine learning parameter data, the processor engine circuitry performs a machine learning operation using the second segment.
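The abstract describes a load-then-reuse flow: a single transfer brings in a first segment plus an additional amount of compressed parameter data, and a local re-use check lets the engine consume a second segment out of that already-loaded data without a second transfer. Below is a minimal Python sketch of that control flow only; the names (`Segment`, `ProcessorEngine`, `load_and_reuse`) are illustrative assumptions and do not come from the patent.

```python
# Hypothetical sketch of the load-then-reuse flow described in the abstract.
# All identifiers here are illustrative, not taken from the patent claims.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Segment:
    """A chunk of compressed machine learning parameter data."""
    compressed_weights: bytes


class ProcessorEngine:
    """Stand-in for the processor engine circuitry."""

    def run(self, segment: Segment) -> None:
        # Decompress and use the parameters for one machine learning operation.
        print(f"running ML op on {len(segment.compressed_weights)} compressed bytes")


def load_and_reuse(engine: ProcessorEngine,
                   first: Segment,
                   additional: List[Segment]) -> None:
    """Load the first segment plus extra compressed data in one transfer,
    then reuse any second segment already held locally instead of loading it again."""
    # Data provider: one load brings in the first segment and the additional data.
    local_buffer = [first] + additional

    # Processor engine: operate on the first segment.
    engine.run(local_buffer[0])

    # Compressed local data re-user: check whether a second segment is
    # already present in the locally held additional data.
    second: Optional[Segment] = local_buffer[1] if len(local_buffer) > 1 else None
    if second is not None:
        # Reuse the locally available segment without another external load.
        engine.run(second)


# Example usage with dummy compressed data.
engine = ProcessorEngine()
load_and_reuse(engine, Segment(b"\x01\x02"), [Segment(b"\x03\x04\x05")])
```

In the hardware described, the point is that the second segment is consumed from data already resident next to the engine, so no further load from the data provider is needed; the sketch only mirrors that decision logic in software.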
Main authors: | CHINYA, GAUTHAM; MATYAKUTI, DEEPAK; BRICK CORMAC; RAHA, ARNAB; MOHAPATRA, DEBABRATA; KIM SANG-KYUN |
---|---|
Format: | Patent |
Language: | chi ; eng |
Subjects: | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
Online access: | Order full text |
creator | CHINYA, GAUTHAM; MATYAKUTI, DEEPAK; BRICK CORMAC; RAHA, ARNAB; MOHAPATRA, DEBABRATA; KIM SANG-KYUN |
---|---|
description | Methods, apparatus, systems, and articles of manufacture for loading data into accelerators are disclosed. An example apparatus includes data provider circuitry to load a first segment and an additional amount of compressed machine learning parameter data into a processor engine. Processor engine circuitry performs machine learning operations using the compressed machine learning parameter data for the first segment. The compressed local data re-user circuit determines whether there is a second segment in the additional amount of compressed machine learning parameter data. When there is a second segment in the additional amount of compressed machine learning parameter data, the processor engine circuitry performs a machine learning operation using the second segment. |
format | Patent |
fulltext | fulltext_linktorsrc |
language | chi ; eng |
recordid | cdi_epo_espacenet_CN115526301A |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
title | Method and device for loading data in machine learning accelerator |