Apparatus and method for increasing artificial intelligence neural network speed by implementing neural network architecture with partial convolution

Detailed description

Saved in:
Bibliographic details
Main authors: CHAN SHUENG HAN GARY, CHEN JIERUN
Format: Patent
Language: Chinese ; English
Subjects:
Online access: Order full text
creator CHAN SHUENG HAN GARY ; CHEN JIERUN
description An apparatus employing an efficient neural network architecture through partial convolution includes a fast network module, a data input module, and a result module. A fast neural network comprising a plurality of fast neural network blocks is integrated in the fast network module, each block having at least one PConv layer and at least two PWConv layers. The data input module loads input data and provides it to the fast network module. The PConv layer performs partial convolution on the input data: exploiting redundant information in the feature map, it applies a standard convolution to only a subset of the input channels while leaving the remaining channels unaffected, so that selective convolution processes only part of the input channels. The two PWConv layers follow the PConv layer and are configured to transform and integrate the features. The result module is configured to receive the result generated by the fast network module.
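The block structure described above (one PConv followed by two PWConv layers) can be sketched in plain NumPy. This is an illustrative reconstruction, not the patented implementation: the partial-channel fraction `n_div=4`, the ReLU between the two PWConv layers, and the residual connection are assumptions added for a self-contained example.

```python
import numpy as np

def conv2d_same(x, w):
    """Naive stride-1, zero-padded 2D convolution.
    x: (C_in, H, W); w: (C_out, C_in, k, k) with odd k."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    _, h, wd = x.shape
    out = np.zeros((c_out, h, wd))
    for i in range(h):
        for j in range(wd):
            patch = xp[:, i:i + k, j:j + k]          # (C_in, k, k) window
            out[:, i, j] = np.tensordot(w, patch, axes=3)
    return out

def pconv(x, w, n_div=4):
    """Partial convolution: convolve only the first C/n_div channels,
    pass the remaining channels through untouched."""
    cp = x.shape[0] // n_div
    y = x.copy()
    y[:cp] = conv2d_same(x[:cp], w)                  # w: (cp, cp, 3, 3)
    return y

def pwconv(x, w):
    """Pointwise (1x1) convolution mixing channels: w is (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def fast_block(x, w_p, w1, w2):
    """One block: PConv -> PWConv (expand) -> PWConv (project),
    with an assumed ReLU and residual connection."""
    y = pconv(x, w_p)
    y = np.maximum(pwconv(y, w1), 0.0)               # assumed activation
    y = pwconv(y, w2)
    return x + y                                     # assumed residual

# Usage sketch: 8 channels, only 2 of them convolved by PConv.
rng = np.random.default_rng(0)
c, h, w = 8, 5, 5
x = rng.standard_normal((c, h, w))
cp = c // 4
w_p = rng.standard_normal((cp, cp, 3, 3))
w1 = rng.standard_normal((2 * c, c))                 # expand channels
w2 = rng.standard_normal((c, 2 * c))                 # project back
y = fast_block(x, w_p, w1, w2)
```

The speed-up claimed in the abstract comes from `pconv` touching only `cp` of the `c` channels with the spatial 3x3 kernel; the cheap 1x1 PWConv layers then mix information across all channels.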
format Patent
creationdate 2024-07-23
fulltext fulltext_linktorsrc
identifier
ispartof
issn
language chi ; eng
recordid cdi_epo_espacenet_CN118378674A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
PHYSICS
title Apparatus and method for increasing artificial intelligence neural network speed by implementing neural network architecture with partial convolution