Method and apparatus for efficiently processing convolution neural network operations

Artificial intelligence is an increasingly important sector of the computer industry. One of the most important applications for artificial intelligence is object recognition and classification from digital images. Convolutional neural networks have proven to be a very effective tool for object recognition and classification from digital images. However, convolutional neural networks are extremely computationally intensive, requiring high-performance processors, significant computation time, and significant energy consumption. To reduce the computation time and energy consumption, "cone of dependency" and "cone of influence" processing techniques are disclosed. These two techniques arrange the required computations in a manner that minimizes memory accesses, so that computations may be performed in local cache memory. These techniques significantly reduce the time to perform the computations and the energy consumed by the hardware implementing a convolutional neural network.
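The abstract describes reordering convolution computations so that intermediate results stay in local cache memory instead of being written to and re-read from main memory. The sketch below is a minimal illustration of that general idea only, not the patented apparatus: it evaluates a small stack of single-channel "valid" convolution layers one output tile at a time, pulling in just the input window that each tile depends on (its cone of dependency), so every intermediate array stays tile-sized. The conv2d_valid helper, the layer count, kernel sizes, and tile size are illustrative assumptions and are not taken from the patent.

import numpy as np

def conv2d_valid(x, k):
    # Plain single-channel "valid" 2-D convolution (cross-correlation form).
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def cone_of_dependency_eval(image, kernels, tile=8):
    # Evaluate a stack of "valid" conv layers tile by tile. For each tile of the
    # final output, only the input window that the tile depends on (its cone of
    # dependency) is pulled in and pushed through all layers, so every
    # intermediate array is tile-sized rather than image-sized.
    # Each valid conv with a (kh x kw) kernel shrinks the map by (kh-1, kw-1),
    # so the cone of dependency of a tile grows by that amount per layer.
    grow_h = sum(k.shape[0] - 1 for k in kernels)
    grow_w = sum(k.shape[1] - 1 for k in kernels)
    out_h = image.shape[0] - grow_h
    out_w = image.shape[1] - grow_w
    out = np.zeros((out_h, out_w))

    for ti in range(0, out_h, tile):
        for tj in range(0, out_w, tile):
            th = min(tile, out_h - ti)
            tw = min(tile, out_w - tj)
            # Input window = output tile expanded by the total receptive-field growth.
            window = image[ti:ti + th + grow_h, tj:tj + tw + grow_w]
            # Push the small window through every layer; intermediates stay tile-sized.
            for k in kernels:
                window = conv2d_valid(window, k)
            out[ti:ti + th, tj:tj + tw] = window
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((64, 64))
    kernels = [rng.standard_normal((3, 3)) for _ in range(3)]
    # Naive layer-by-layer evaluation keeps full-size intermediates in memory.
    naive = img
    for k in kernels:
        naive = conv2d_valid(naive, k)
    tiled = cone_of_dependency_eval(img, kernels)
    print("tiled result matches naive result:", np.allclose(tiled, naive))

The trade-off of this depth-first tiling is some recomputation where neighbouring cones overlap at tile borders, in exchange for intermediate data small enough to remain cache-resident.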

Detailed description

Bibliographic details
Main authors: Ma, Siyad Chih-Hua; Chole, Sharad Vasantrao; Chuang, Shang-Tse
Format: Patent
Language: eng
creator Ma, Siyad Chih-Hua; Chole, Sharad Vasantrao; Chuang, Shang-Tse
description Artificial intelligence is an increasingly important sector of the computer industry. One of the most important applications for artificial intelligence is object recognition and classification from digital images. Convolutional neural networks have proven to be a very effective tool for object recognition and classification from digital images. However, convolutional neural networks are extremely computationally intensive, requiring high-performance processors, significant computation time, and significant energy consumption. To reduce the computation time and energy consumption, "cone of dependency" and "cone of influence" processing techniques are disclosed. These two techniques arrange the required computations in a manner that minimizes memory accesses, so that computations may be performed in local cache memory. These techniques significantly reduce the time to perform the computations and the energy consumed by the hardware implementing a convolutional neural network.
format Patent
language eng
recordid cdi_epo_espacenet_US11151416B2
source esp@cenet
subjects CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS
title Method and apparatus for efficiently processing convolution neural network operations