Data processing method and device for neural network

The invention discloses a data processing method and device for a neural network, relating to the field of artificial intelligence. The method includes dynamically splitting input data according to the data volume of the input data, a first feature of the internal memory of the chip running the neural network, and a second feature of a plurality of layers in the neural network, and setting different batch sizes for the layers of the neural network. Because a reasonable batch size is set for each layer, the internal memory can be fully utilized to store inter-layer data during inference, which improves the utilization of the internal memory and ensures the computational efficiency of the hardware running the neural network.
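
The abstract describes choosing a different batch size per layer so that each layer's inter-layer data fits in the chip's internal memory during inference. As a rough illustration only (the record above gives no concrete rule or code), the Python sketch below picks, for each layer, the largest batch whose input and output feature maps fit within a given internal-memory budget; the names Layer and pick_batch_sizes and the per-sample byte figures are hypothetical assumptions, not taken from the patent.

# Hypothetical sketch of per-layer batch-size selection in the spirit of the
# abstract: split the input so each layer's inter-layer data fits in the
# chip's internal memory. All names and the fitting rule are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    name: str
    in_bytes_per_sample: int    # input feature-map size per sample (bytes)
    out_bytes_per_sample: int   # output feature-map size per sample (bytes)

def pick_batch_sizes(layers: List[Layer], internal_mem_bytes: int,
                     total_samples: int) -> List[int]:
    """For each layer, choose the largest batch size whose input and output
    feature maps both fit in internal memory, capped by the input data volume."""
    batch_sizes = []
    for layer in layers:
        per_sample = layer.in_bytes_per_sample + layer.out_bytes_per_sample
        fit = max(1, internal_mem_bytes // per_sample)  # largest batch that fits
        batch_sizes.append(min(fit, total_samples))
    return batch_sizes

if __name__ == "__main__":
    # Toy example: 8 MiB of on-chip memory, 64 input samples.
    layers = [
        Layer("conv1", in_bytes_per_sample=600_000, out_bytes_per_sample=1_200_000),
        Layer("conv2", in_bytes_per_sample=1_200_000, out_bytes_per_sample=300_000),
        Layer("fc",    in_bytes_per_sample=300_000,   out_bytes_per_sample=4_000),
    ]
    print(pick_batch_sizes(layers, internal_mem_bytes=8 * 1024 * 1024, total_samples=64))

With these toy figures the sketch prints a different batch size for each layer ([4, 5, 27]), mirroring the idea of per-layer batch sizes bounded by on-chip memory.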

Detailed description

Saved in:
Bibliographic details
Main authors: XIONG LEJIN, YUAN HONGHUI, GAO SHANQING, GAO LIWEN
Format: Patent
Language: Chinese; English
Online access: Order full text
creator XIONG LEJIN
YUAN HONGHUI
GAO SHANQING
GAO LIWEN
description The invention discloses a data processing method and device for a neural network, relating to the field of artificial intelligence. The method includes dynamically splitting input data according to the data volume of the input data, a first feature of the internal memory of the chip running the neural network, and a second feature of a plurality of layers in the neural network, and setting different batch sizes for the layers of the neural network. Because a reasonable batch size is set for each layer, the internal memory can be fully utilized to store inter-layer data during inference, which improves the utilization of the internal memory and ensures the computational efficiency of the hardware running the neural network.
format Patent
fulltext fulltext_linktorsrc
language chi ; eng
recordid cdi_epo_espacenet_CN115668222A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
PHYSICS
title Data processing method and device for neural network
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T16%3A34%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=XIONG%20LEJIN&rft.date=2023-01-31&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN115668222A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true