Deep neural network model compiling optimization method based on SW processor

The invention relates to a deep neural network model compilation optimization method based on the SW processor. The method comprises the following steps: the automatically generated AOT (Ahead-of-Time) code is packaged according to the code specification of the SW processor and is correctly initialized and called in a main function; a local data storage management (LDM) method transfers the data involved in each partial computation to the local data cache in batches to improve the transfer rate; and a direct memory access (DMA) code insertion method realizes efficient data transfer between main memory and the LDM. The disclosed method suits the hardware architecture of the SW processor, quickly generates high-performance code, and improves programmer development efficiency.
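
To make the workflow described in the abstract concrete, the C sketch below illustrates the general pattern under stated assumptions: sw_dma_get, sw_dma_put, aot_kernel_wrapped, LDM_BYTES and TILE are hypothetical placeholder names rather than the actual SW runtime API or the patent's generated code; the DMA calls are emulated with memcpy; and the kernel body is a trivial stand-in for an automatically generated AOT operator. It covers the three ingredients the abstract names: wrapping the generated kernel, initializing and calling it from a main function, and staging data through LDM-sized batches with paired get/put transfers.

    #include <stddef.h>
    #include <string.h>

    #define LDM_BYTES (64 * 1024)                       /* assumed per-core LDM budget (illustrative) */
    #define TILE (LDM_BYTES / (3 * sizeof(float)))      /* room for input, weight, and output tiles */

    /* Hypothetical stand-ins for DMA primitives moving data between main
       memory and the LDM; a real implementation would use the SW runtime's
       asynchronous DMA interface instead of memcpy. */
    static void sw_dma_get(void *ldm_dst, const void *mem_src, size_t bytes) {
        memcpy(ldm_dst, mem_src, bytes);
    }
    static void sw_dma_put(void *mem_dst, const void *ldm_src, size_t bytes) {
        memcpy(mem_dst, ldm_src, bytes);
    }

    /* Wrapper around a stand-in for the generated AOT kernel: the data needed
       by each partial computation is staged into LDM-resident buffers in
       batches, computed on locally, and written back, so the kernel body
       never touches main memory directly. */
    void aot_kernel_wrapped(const float *x, const float *w, float *y, size_t n) {
        static float x_ldm[TILE], w_ldm[TILE], y_ldm[TILE];   /* LDM-resident tiles */
        for (size_t off = 0; off < n; off += TILE) {
            size_t len = (n - off < TILE) ? (n - off) : TILE;
            sw_dma_get(x_ldm, x + off, len * sizeof(float));  /* batch inputs into LDM */
            sw_dma_get(w_ldm, w + off, len * sizeof(float));
            for (size_t i = 0; i < len; i++)                  /* placeholder AOT kernel body */
                y_ldm[i] = x_ldm[i] * w_ldm[i];
            sw_dma_put(y + off, y_ldm, len * sizeof(float));  /* write results back */
        }
    }

    int main(void) {
        /* Initialization and invocation from the main function, with
           illustrative sizes and data only. */
        enum { N = 1 << 16 };
        static float x[N], w[N], y[N];
        for (size_t i = 0; i < N; i++) { x[i] = 1.0f; w[i] = 2.0f; }
        aot_kernel_wrapped(x, w, y, N);
        return (y[0] == 2.0f) ? 0 : 1;
    }

On real hardware, such get/put transfers would typically be asynchronous so the next batch can be fetched while the current one is computed, which is the rationale for batching data into the LDM instead of reading main memory element by element.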

Bibliographic details
Main authors: ZHENG XUEGUI, LIAO JIANJIN, YANG HAILONG, LUAN ZHONGZHI, QIAN DEPEI, LI MINGZHEN
Format: Patent
Language: Chinese; English
Patent number: CN115576561A
Publication date: 2023-01-06
Online access: full text via esp@cenet (https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20230106&DB=EPODOC&CC=CN&NR=115576561A)
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; PHYSICS