Efficient parallel implementation of reservoir computing systems
Reservoir computing (RC) is a powerful machine learning methodology well suited for time-series processing. The hardware implementation of RC systems (HRC) may extend the utility of this neural approach to solve real-life problems for which software solutions are not satisfactory. Nevertheless, the...
Saved in:
Published in: | Neural computing & applications 2020-04, Vol.32 (7), p.2299-2313 |
---|---|
Main authors: | Alomar, M. L., Skibinsky-Gitlin, Erik S., Frasser, Christiam F., Canals, Vincent, Isern, Eugeni, Roca, Miquel, Rosselló, Josep L. |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
container_end_page | 2313 |
---|---|
container_issue | 7 |
container_start_page | 2299 |
container_title | Neural computing & applications |
container_volume | 32 |
creator | Alomar, M. L.; Skibinsky-Gitlin, Erik S.; Frasser, Christiam F.; Canals, Vincent; Isern, Eugeni; Roca, Miquel; Rosselló, Josep L. |
description | Reservoir computing (RC) is a powerful machine learning methodology well suited for time-series processing. The hardware implementation of RC systems (HRC) may extend the utility of this neural approach to solve real-life problems for which software solutions are not satisfactory. Nevertheless, the implementation of massively parallel-connected reservoir networks is costly in terms of circuit area and power, mainly due to the requirement of implementing synapse multipliers that increase the gate count to prohibitive values. Most HRC systems in the literature solve this area problem by serializing the processes, thus losing the expected fault tolerance and low latency of fully parallel-connected HRCs. Therefore, the development of new methodologies to implement fully parallel HRC systems is of high interest to many computational intelligence applications requiring quick responses. In this article, we propose a compact hardware implementation for Echo-State Networks (a specific type of reservoir) that reduces the area cost by simplifying the synapses and using piecewise-linear activation functions for the neurons. The proposed design is synthesized on a Field-Programmable Gate Array and evaluated on different time-series prediction tasks. Without compromising the overall accuracy, the proposed approach achieves a significant saving in terms of power and hardware when compared with recently published implementations. This technique paves the way for the low-power implementation of fully parallel reservoir networks containing thousands of neurons in a single integrated circuit. |
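The abstract's core idea can be sketched in software terms: an Echo-State Network whose reservoir uses a piecewise-linear (hard-tanh) activation, which in hardware avoids a transcendental function unit, instead of the usual tanh. The snippet below is a minimal illustrative sketch, not the paper's design: the reservoir size, weight ranges, spectral-radius scaling, toy sine task, and ridge-regression readout are all assumptions chosen for a self-contained demo.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 1, 100                             # illustrative sizes, not taken from the paper
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))     # input weights
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))       # recurrent reservoir weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for the echo-state property

def pwl(x):
    """Piecewise-linear (hard-tanh) activation: saturates like tanh but is cheap in hardware."""
    return np.clip(x, -1.0, 1.0)

def step(x, u):
    """One reservoir update: x' = pwl(W @ x + W_in @ u)."""
    return pwl(W @ x + W_in @ u)

# Drive the reservoir with a toy sine wave and collect its states.
T = 300
u_seq = np.sin(0.2 * np.arange(T)).reshape(-1, 1)
states = np.zeros((T, N_RES))
x = np.zeros(N_RES)
for t in range(T):
    x = step(x, u_seq[t])
    states[t] = x

# Train a linear readout by ridge regression to predict the next input sample;
# in RC only this readout is trained, the reservoir weights stay fixed.
X, y = states[:-1], u_seq[1:, 0]
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N_RES), X.T @ y)
mse = float(np.mean((states[:-1] @ W_out - y) ** 2))
print(mse)
```

The fixed random reservoir plus trained linear readout is what keeps both software training and, in the paper's setting, hardware cost low: only `W_out` is learned, while the bounded piecewise-linear states keep the datapath in a fixed numeric range.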
doi_str_mv | 10.1007/s00521-018-3912-4 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0941-0643 |
ispartof | Neural computing & applications, 2020-04, Vol.32 (7), p.2299-2313 |
issn | 0941-0643 1433-3058 |
language | eng |
recordid | cdi_proquest_journals_2384393766 |
source | SpringerLink Journals - AutoHoldings |
subjects | Artificial Intelligence; Computational Biology/Bioinformatics; Computational Science and Engineering; Computer Science; Data Mining and Knowledge Discovery; Energy conservation; Fault tolerance; Field programmable gate arrays; Gate counting; Hardware; Image Processing and Computer Vision; Integrated circuits; Machine learning; Networks; Neurons; Original Article; Parallel connected; Power; Power management; Probability and Statistics in Computer Science; Synapses |
title | Efficient parallel implementation of reservoir computing systems |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T11%3A05%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Efficient%20parallel%20implementation%20of%20reservoir%20computing%20systems&rft.jtitle=Neural%20computing%20&%20applications&rft.au=Alomar,%20M.%20L.&rft.date=2020-04-01&rft.volume=32&rft.issue=7&rft.spage=2299&rft.epage=2313&rft.pages=2299-2313&rft.issn=0941-0643&rft.eissn=1433-3058&rft_id=info:doi/10.1007/s00521-018-3912-4&rft_dat=%3Cproquest_cross%3E2384393766%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2384393766&rft_id=info:pmid/&rfr_iscdi=true |