DATA PROCESSING METHOD AND APPARATUS FOR NEURAL NETWORK

Disclosed are a data processing method and apparatus for a neural network, which relate to the field of artificial intelligence. The method comprises: dynamically segmenting the input data and configuring different batch sizes for the layers of the neural network according to the amount of input data, a first feature of the internal memory of the chip that runs the neural network, and a second feature of the network's multiple layers. By configuring a rational batch size for each layer, the internal memory can be fully utilized during inference to store the network's inter-layer data, thereby improving memory utilization and preserving the computational efficiency of the hardware that runs the network.
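The abstract describes choosing a separate batch size per layer so that each layer's inter-layer data fits in on-chip memory. A minimal sketch of one such sizing rule is below; the function name, parameters, and the "largest batch that fits" heuristic are illustrative assumptions, not the patent's actual algorithm.

```python
# Hypothetical sketch of per-layer batch-size selection: split the input
# so that each layer's output activations fit in on-chip memory.
# All names and the sizing rule are assumptions, not the patented method.

def per_layer_batch_sizes(total_samples, memory_bytes, layer_bytes_per_sample):
    """For each layer, pick the largest batch whose output stays inside
    on-chip memory, capped at the total number of input samples."""
    batch_sizes = []
    for per_sample in layer_bytes_per_sample:
        fit = max(1, memory_bytes // per_sample)  # samples that fit in memory
        batch_sizes.append(min(total_samples, fit))
    return batch_sizes

# Example: 1 MiB of on-chip memory, three layers with different
# per-sample activation sizes -> each layer gets its own batch size.
sizes = per_layer_batch_sizes(
    total_samples=256,
    memory_bytes=1 << 20,
    layer_bytes_per_sample=[16_384, 65_536, 4_096],
)
# -> [64, 16, 256]
```

A layer with small activations can process many samples per pass, while a memory-hungry layer gets a smaller batch, which matches the abstract's point that a single global batch size under-utilizes memory at some layers.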

Detailed Description

Bibliographic Details
Main Authors: YUAN, Honghui, GAO, Shanqing, GAO, Liwen, XIONG, Lejin
Format: Patent
Language: chi ; eng ; fre
Subjects:
Online Access: Order full text
creator YUAN, Honghui
GAO, Shanqing
GAO, Liwen
XIONG, Lejin
description Disclosed are a data processing method and apparatus for a neural network, which method and apparatus relate to the field of artificial intelligence. The method comprises: according to the data amount of input data, a first feature of an internal memory in a chip that runs a neural network, and a second feature of multiple layers in the neural network, dynamically segmenting the input data, and configuring different batch sizes for the layers in the neural network. By means of configuring a rational batch size for each layer in a neural network, during a neural network inference procedure, an internal memory can be fully utilized to store inter-layer data of the neural network, thereby improving the utilization rate of the internal memory, and ensuring the computational efficiency of hardware that runs the neural network.
format Patent
fulltext fulltext_linktorsrc
language chi ; eng ; fre
recordid cdi_epo_espacenet_WO2021243489A1
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
PHYSICS
title DATA PROCESSING METHOD AND APPARATUS FOR NEURAL NETWORK