Neural network task processing method and device, electronic equipment and storage medium
Saved in:
Main authors: MA SONGCHEN; SHI LUPING; PEI JING; QU HUANYU; ZHAO RONG; ZHANG WEIHAO
Format: Patent
Language: chi ; eng
Online access: Order full text
Field | Value
---|---
container_end_page | |
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | MA SONGCHEN; SHI LUPING; PEI JING; QU HUANYU; ZHAO RONG; ZHANG WEIHAO |
description | The invention relates to a neural network task processing method and device, electronic equipment and a storage medium. When a plurality of neural network tasks are received, the calculation cores included in the electronic equipment are grouped to obtain a calculation core group for executing each neural network task, and an operation cycle corresponding to each neural network task is determined. Each calculation core group then executes its neural network task according to that operation cycle. A calculation core group carries out at least one data exchange within one operation cycle, and each calculation core group contains at least two calculation cores whose data-exchange moments are kept synchronized. Different tasks are processed asynchronously by different calculation core groups, an independent running environment is provided for each task, and the delay and unnecessary waiting caused by task switching are reduced. Meanwhile … (the grouping scheme is illustrated by the sketch after the record table) |
format | Patent |
fulltext | fulltext_linktorsrc |
identifier | |
ispartof | |
issn | |
language | chi ; eng |
recordid | cdi_epo_espacenet_CN115099391A |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; PHYSICS |
title | Neural network task processing method and device, electronic equipment and storage medium |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-21T09%3A01%3A16IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=MA%20SONGCHEN&rft.date=2022-09-23&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN115099391A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
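The abstract describes the scheme only in prose, so the following is a minimal illustrative sketch, not the patented implementation. It assumes a toy model in which calculation cores are Python threads, a group's synchronized data-exchange moment is a `threading.Barrier`, and each task's operation cycle is a sleep interval; the helper names `run_core` and `run_task_on_group` are hypothetical. The real invention targets many-core neural-network hardware, not threads.

```python
# Toy sketch (assumption, not the patented method): each neural-network task gets
# its own group of "calculation cores" (threads here), each group runs on its own
# operation cycle, cores within a group synchronize at one data-exchange point per
# cycle, and the groups run asynchronously with respect to one another.
import threading
import time

def run_core(core_id, barrier, cycle_s, n_cycles, task_name):
    """Simulate one calculation core: compute, then exchange data in lock-step
    with the other cores of its group once per operation cycle."""
    for cycle in range(n_cycles):
        time.sleep(cycle_s)   # stand-in for the per-cycle computation
        barrier.wait()        # synchronized data-exchange moment within the group
        print(f"{task_name}: core {core_id} finished cycle {cycle}")

def run_task_on_group(task_name, core_ids, cycle_s, n_cycles):
    """Run one neural-network task on its own calculation-core group."""
    barrier = threading.Barrier(len(core_ids))  # keeps the group's exchange moments in sync
    cores = [
        threading.Thread(target=run_core,
                         args=(cid, barrier, cycle_s, n_cycles, task_name))
        for cid in core_ids
    ]
    for t in cores:
        t.start()
    return cores

if __name__ == "__main__":
    all_cores = list(range(8))
    # Hypothetical grouping of 8 cores across 2 tasks, each with its own operation cycle.
    tasks = [
        ("task_A", all_cores[:4], 0.05, 3),  # 4 cores, 50 ms cycle, 3 cycles
        ("task_B", all_cores[4:], 0.02, 5),  # 4 cores, 20 ms cycle, 5 cycles
    ]
    threads = []
    for name, group, cycle_s, n_cycles in tasks:
        threads += run_task_on_group(name, group, cycle_s, n_cycles)  # groups run asynchronously
    for t in threads:
        t.join()
```

In this toy setup the two groups never wait on each other, which mirrors the abstract's claim that per-task core groups avoid the delay and unnecessary waiting caused by task switching; only cores within the same group block at the shared barrier.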