NnSP: embedded neural networks stream processor
Dedicated parallel hardware implementations that exploit the native parallelism and interaction locality of neural networks are essential for their effective use in time-critical applications. The architecture proposed in this paper is a parallel stream processor called the neural networks stream processor, or NnSP, which can be programmed to realize different neural-network topologies and architectures. NnSP is a collection of programmable processing engines organized in a custom FIFO-based cache architecture and busing system. Streams of synaptic data flow through the parallel processing elements, and computations are performed based on the instructions embedded in the preambles of the data streams. The command and configuration words embedded in the preamble of a stream program each processing element to perform the desired computation on the upcoming data. The packetized nature of the stream architecture provides a high degree of flexibility and scalability for NnSP. The stream processor is synthesized targeting an ASIC standard-cell library for SoC implementation and is also realized on Xilinx VirtexII-Pro SoPC beds. A neural network employed for mobile robot navigation control is implemented on the realized SoPC hardware, and the achieved realization speedups are presented.
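The abstract does not give the NnSP preamble format, but the idea of preamble-programmed processing elements consuming streams of synaptic data can be illustrated with a minimal software sketch. The packet layout, field names, and the multiply-accumulate behaviour below are illustrative assumptions, not the authors' specification.

```c
/* Minimal software model of a preamble-programmed stream packet.
 * Field names and layout are assumptions for illustration only;
 * the actual NnSP command/configuration word format differs. */
#include <stdio.h>

/* Hypothetical preamble: configures one processing element for the
 * synaptic data that follows in the same stream. */
typedef struct {
    unsigned neuron_id;    /* which neuron this stream updates        */
    unsigned num_synapses; /* how many (weight, input) pairs follow   */
    float    bias;         /* added after the weighted sum            */
} preamble_t;

/* Hypothetical stream: preamble followed by the streamed synaptic data. */
typedef struct {
    preamble_t   preamble;
    const float *weights;
    const float *inputs;
} stream_t;

/* One processing element: programmed by the preamble, then performing a
 * multiply-accumulate pass over the streamed synaptic data. */
static float process_stream(const stream_t *s)
{
    float acc = s->preamble.bias;
    for (unsigned i = 0; i < s->preamble.num_synapses; ++i)
        acc += s->weights[i] * s->inputs[i];
    return acc; /* activation function omitted for brevity */
}

int main(void)
{
    const float w[] = { 0.5f, -1.0f, 0.25f };
    const float x[] = { 1.0f,  2.0f, 4.0f  };
    stream_t s = { { .neuron_id = 0, .num_synapses = 3, .bias = 0.1f }, w, x };

    printf("neuron %u -> %f\n", s.preamble.neuron_id, process_stream(&s));
    return 0;
}
```

Running this prints the weighted sum for a single neuron; in NnSP, many processing elements consume such packetized streams in parallel.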
Main authors: | Esmaeilzadeh, H.; Farzan, F.; Shahidi, N.; Fakhraie, S.M.; Lucas, C.; Tehranipoor, M. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Computer aided instruction; Computer architecture; Concurrent computing; Embedded computing; Engines; Network topology; Neural network hardware; Neural networks; Parallel processing; Time factors |
container_end_page | 226 |
---|---|
container_issue | |
container_start_page | 223 |
container_title | 48th Midwest Symposium on Circuits and Systems, 2005 |
container_volume | 1 |
creator | Esmaeilzadeh, H.; Farzan, F.; Shahidi, N.; Fakhraie, S.M.; Lucas, C.; Tehranipoor, M. |
doi_str_mv | 10.1109/MWSCAS.2005.1594079 |
format | Conference Proceeding |
identifier | ISSN: 1548-3746 |
ispartof | 48th Midwest Symposium on Circuits and Systems, 2005, 2005, p.223-226 Vol. 1 |
issn | 1548-3746; 1558-3899 |
language | eng |
recordid | cdi_ieee_primary_1594079 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Computer aided instruction; Computer architecture; Concurrent computing; Embedded computing; Engines; Network topology; Neural network hardware; Neural networks; Parallel processing; Time factors |
title | NnSP: embedded neural networks stream processor |