Reservoir-size dependent learning in analogue neural networks
The implementation of artificial neural networks in hardware substrates is a major interdisciplinary enterprise. Well-suited candidates for physical implementations must combine nonlinear neurons with dedicated and efficient hardware solutions for both connectivity and training. Reservoir computing...
Saved in:
Published in: | arXiv.org 2019-07 |
---|---|
Main authors: | Porte, Xavier; Andreoli, Louis; Jacquot, Maxime; Larger, Laurent; Brunner, Daniel |
Format: | Article |
Language: | eng |
Subjects: | Artificial neural networks; Convergence; Dependence; Error analysis; Greedy algorithms; Hardware; Hyperspaces; Learning theory; Neural networks; Optimization; Photonics; Scaling; Substrates; Training |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Porte, Xavier; Andreoli, Louis; Jacquot, Maxime; Larger, Laurent; Brunner, Daniel |
description | The implementation of artificial neural networks in hardware substrates is a major interdisciplinary enterprise. Well-suited candidates for physical implementations must combine nonlinear neurons with dedicated and efficient hardware solutions for both connectivity and training. Reservoir computing addresses the problems related to network connectivity and training in an elegant and efficient way. However, important questions regarding the impact of reservoir size and learning routines on the convergence speed during learning remain unaddressed. Here, we study in detail the learning process of a recently demonstrated photonic neural network based on a reservoir. We use a greedy algorithm to train our neural network for the task of chaotic signal prediction and analyze the learning-error landscape. Our results unveil fundamental properties of the system's optimization hyperspace. In particular, we determine the convergence speed of learning as a function of reservoir size and find exceptional, close-to-linear scaling. This linear dependence, together with our parallel diffractive coupling, represents optimal scaling conditions for our photonic neural network scheme. (A minimal illustrative sketch of such greedy readout training follows this record.) |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2019-07 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2277747039 |
source | Free E-Journals |
subjects | Artificial neural networks; Convergence; Dependence; Error analysis; Greedy algorithms; Hardware; Hyperspaces; Learning theory; Neural networks; Optimization; Photonics; Scaling; Substrates; Training |
title | Reservoir-size dependent learning in analogue neural networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T20%3A56%3A44IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Reservoir-size%20dependent%20learning%20in%20analogue%20neural%20networks&rft.jtitle=arXiv.org&rft.au=Porte,%20Xavier&rft.date=2019-07-23&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2277747039%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2277747039&rft_id=info:pmid/&rfr_iscdi=true |
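The abstract above describes training a reservoir's readout with a greedy algorithm for chaotic signal prediction and asks how learning convergence scales with reservoir size. The sketch below is a minimal, hypothetical illustration of that idea only, not the authors' photonic implementation: the echo-state-style tanh reservoir, the logistic-map prediction task, the sign-constrained (+/-1) readout weights, and the fitted scalar output gain are all assumptions made for this example.

```python
# Hypothetical sketch only: a software echo-state-style reservoir with sign-constrained
# (+/-1) readout weights trained by greedy single-weight flips. The reservoir model,
# the logistic-map prediction task and the fitted scalar output gain are assumptions
# made for illustration; this is not the photonic hardware described in the record.
import numpy as np

rng = np.random.default_rng(0)

# Chaotic target series: logistic map, used here as a stand-in prediction task.
T = 1000
u = np.empty(T)
u[0] = 0.4
for k in range(T - 1):
    u[k + 1] = 3.9 * u[k] * (1.0 - u[k])
inputs, target = u[:-1], u[1:]          # one-step-ahead prediction

# Random reservoir of N nodes, recurrent weights rescaled to spectral radius ~0.9.
N = 100
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-1.0, 1.0, size=N)

# Drive the reservoir and record its states (rows: time steps, columns: nodes).
states = np.zeros((T - 1, N))
s = np.zeros(N)
for k, x in enumerate(inputs):
    s = np.tanh(W @ s + w_in * x)
    states[k] = s

def nmse(w):
    """Normalized mean-square error, using the best scalar gain for readout weights w."""
    y = states @ w
    gain = (y @ target) / (y @ y + 1e-12)
    return np.mean((gain * y - target) ** 2) / np.var(target)

# Greedy learning: flip one +/-1 readout weight at a time and keep the flip only
# if it lowers the prediction error; stop when a full sweep brings no improvement.
w = rng.choice([-1.0, 1.0], size=N)
err = nmse(w)
for epoch in range(10):
    improved = False
    for i in rng.permutation(N):
        w[i] = -w[i]
        trial = nmse(w)
        if trial < err:
            err, improved = trial, True   # keep the flip
        else:
            w[i] = -w[i]                  # revert the flip
    print(f"epoch {epoch}: NMSE = {err:.4f}")
    if not improved:
        break
```

Rerunning this sketch for several values of N and counting the accepted flips or sweeps needed before the error stops improving gives a rough, software-only analogue of the reservoir-size scaling question raised in the abstract.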