PERFORMING DYNAMIC SPARSE COMPUTATION ON DENSE COMPUTATION-EFFICIENT COMPUTING DEVICES

Embodiments of the present disclosure include techniques for processing dynamically sparse neural networks as dense computations. A permutation is performed to translate an input tensor from a sparse format into a dense format. Once in the dense format, dense computation can be performed to generate output data that is also in the dense format. A reverse permutation may then be performed to translate the output data back into the sparse format. An analysis of the operator is performed prior to runtime to determine the one or more dimensions of the tensor expression associated with the operator that are permutation invariant. The permutation may permute the input tensor across dimensions that are permutation invariant.
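The abstract can be sketched with a small numeric example. This is a hypothetical illustration, not the patented implementation: the function name `sparse_matmul_via_permutation`, its arguments, and the row-level sparsity pattern are assumptions. The only idea taken from the abstract is the sequence permute → dense compute → reverse permute, applied along a permutation-invariant dimension (here, the row dimension of a matrix multiply, whose per-row results do not depend on row order).

```python
import numpy as np

def sparse_matmul_via_permutation(x, w, active_mask):
    """x: (n, d) input where some rows carry no data; w: (d, k) dense weight;
    active_mask: (n,) boolean marking the rows that carry real data."""
    # Permutation: move active rows to the front so they form a dense block.
    perm = np.argsort(~active_mask, kind="stable")
    x_perm = x[perm]
    m = int(active_mask.sum())
    # Dense computation on the compact block only.
    y_dense = x_perm[:m] @ w
    y_perm = np.zeros((x.shape[0], w.shape[1]), dtype=y_dense.dtype)
    y_perm[:m] = y_dense
    # Reverse permutation: translate the output back to the sparse layout.
    inv = np.argsort(perm)
    return y_perm[inv]

x = np.random.randn(6, 4)
mask = np.array([True, False, True, True, False, True])
w = np.random.randn(4, 3)
ref = np.where(mask[:, None], x @ w, 0.0)   # reference: full sparse-aware matmul
out = sparse_matmul_via_permutation(x, w, mask)
assert np.allclose(out, ref)
```

The permutation is cheap relative to the matmul, and the dense block maps directly onto dense-computation-efficient hardware; the inactive rows never enter the multiply.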

Detailed Description

Bibliographic Details
Main authors: YANG, Mao, ZHENG, Ningxin, JIANG, Huiqiang, ZHOU, Lidong, QIU, Lili, MA, Lingxiao, HAN, Zhenhua, ZHANG, Quanlu, YANG, Fan, YANG, Yuqing
Format: Patent
Language: eng
Subjects:
Online access: Order full text
creator YANG, Mao
ZHENG, Ningxin
JIANG, Huiqiang
ZHOU, Lidong
QIU, Lili
MA, Lingxiao
HAN, Zhenhua
ZHANG, Quanlu
YANG, Fan
YANG, Yuqing
description Embodiments of the present disclosure include techniques for processing dynamically sparse neural networks as dense computations. A permutation is performed to translate an input tensor from a sparse format into a dense format. Once in the dense format, dense computation can be performed to generate output data that is also in the dense format. A reverse permutation may then be performed to translate the output data back into the sparse format. An analysis of the operator is performed prior to runtime to determine the one or more dimensions of the tensor expression associated with the operator that are permutation invariant. The permutation may permute the input tensor across dimensions that are permutation invariant.
format Patent
fullrecord [raw esp@cenet XML record omitted; it repeats the title, authors, abstract, and subjects listed above. Recoverable details: record id US2024403618A1, publication date 2024-12-05, open access (free_for_read), full text via worldwide.espacenet.com]
fulltext fulltext_linktorsrc
language eng
recordid cdi_epo_espacenet_US2024403618A1
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
PHYSICS
title PERFORMING DYNAMIC SPARSE COMPUTATION ON DENSE COMPUTATION-EFFICIENT COMPUTING DEVICES