Statistical physics of learning in high-dimensional chaotic systems

Bibliographic details
Published in: JSTAT, 2023-11, Vol. 2023 (11), p. 113301
Authors: Fournier, Samantha J; Urbani, Pierfrancesco
Format: Article
Language: English
Subjects: learning theory; neuronal networks; Physics; spin glasses
Online access: Full text
Abstract: In many complex systems, elementary units live in a chaotic environment and need to adapt their strategies to perform a task by extracting information from the environment and controlling the feedback loop on it. One of the main examples of systems of this kind is provided by recurrent neural networks. In this case, recurrent connections between neurons drive chaotic behavior, and when learning takes place, the response of the system to a perturbation should also take into account its feedback on the dynamics of the network itself. In this work, we consider an abstract model of a high-dimensional chaotic system as a paradigmatic model and study its dynamics. We study the model under two particular settings: Hebbian driving and FORCE training. In the first case, we show that Hebbian driving can be used to tune the level of chaos in the dynamics, and this reproduces some results recently obtained in the study of more biologically realistic models of recurrent neural networks. In the latter case, we show that the dynamical system can be trained to reproduce simple periodic functions. To do this, we consider the FORCE algorithm—originally developed to train recurrent neural networks—and adapt it to our high-dimensional chaotic system. We show that this algorithm drives the dynamics close to an asymptotic attractor the larger the training time. All our results are valid in the thermodynamic limit due to an exact analysis of the dynamics through dynamical mean field theory.
DOI: 10.1088/1742-5468/ad082d
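The abstract's FORCE training can be illustrated with a minimal sketch in the style of the original recurrent-network formulation (Sussillo and Abbott): a random chaotic reservoir whose readout weights are updated online by recursive least squares so that the fed-back output tracks a periodic target. This is an assumed toy setup, not the authors' high-dimensional model or its dynamical mean field analysis; all parameter values (N, g, dt, the target period) are illustrative choices.

```python
import numpy as np

# Hedged sketch of FORCE training on a random chaotic network.
# Not the paper's model: just the standard recursive-least-squares
# (RLS) readout update that the FORCE algorithm is built on.
rng = np.random.default_rng(0)
N, g, alpha, dt = 200, 1.5, 1.0, 0.1

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # chaotic recurrent couplings
w_fb = rng.uniform(-1.0, 1.0, N)                  # output-to-network feedback
w = np.zeros(N)                                   # trained readout weights
P = np.eye(N) / alpha                             # running inverse correlation matrix
x = 0.5 * rng.standard_normal(N)                  # network state

errs = []
for t in range(3000):
    r = np.tanh(x)
    z = w @ r                                # network output
    f = np.sin(2 * np.pi * t * dt / 6.0)     # simple periodic target
    # RLS update of the readout (the core of FORCE)
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    e = z - f
    w -= e * k
    # leaky dynamics with the output fed back into the network
    x += dt * (-x + J @ r + w_fb * z)
    errs.append(e * e)

early = float(np.mean(errs[:300]))
late = float(np.mean(errs[-300:]))
print(early, late)
```

Because the output is fed back into the dynamics, the RLS update must stay fast enough to suppress the chaotic fluctuations while the weights converge; the late-time squared error staying small is the sketch's analogue of the dynamics being driven onto an asymptotic attractor as training time grows.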
ISSN: 1742-5468
eISSN: 1742-5468
Publisher: IOP Publishing
Source: IOP Publishing Journals; Institute of Physics (IOP) Journals - HEAL-Link