“FORCE” learning in recurrent neural networks as data assimilation
It is shown that the “FORCE” algorithm for learning in arbitrarily connected networks of simple neuronal units can be cast as a Kalman Filter, with a particular state-dependent form for the background error covariances. The resulting interpretation has implications for initialization of the learning algorithm, leads to an extension to include interactions between the weight updates for different neurons, and can represent relationships within groups of multiple target output signals.
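To make the correspondence concrete: FORCE (Sussillo & Abbott, Neuron 2009) trains a linear readout by recursive least squares. Read as a Kalman filter, the weight vector w is the hidden state, z = wᵀr is the observation model, the inverse-correlation matrix P plays the role of the background error covariance, and P·r/(1 + rᵀP·r) is the Kalman gain; the usual initialization P = I/α then amounts to a prior variance of 1/α on each readout weight, which is the kind of initialization question the abstract's interpretation bears on. The sketch below is a minimal illustration of that standard update under these assumptions, not code from the paper; the function and variable names are mine.

```python
import numpy as np

def force_readout_update(w, P, r, f_target):
    """One FORCE/RLS step for a linear readout z = w @ r.

    Kalman reading: w is the state (readout weights), z = r @ w is the
    observation, P is the weight-error covariance, and P @ r after the
    covariance update equals the Kalman gain P_old @ r / (1 + r @ P_old @ r).
    """
    e = w @ r - f_target                          # innovation: readout error before the update
    Pr = P @ r
    P = P - np.outer(Pr, Pr) / (1.0 + r @ Pr)     # covariance (inverse-correlation) update
    w = w - e * (P @ r)                           # gain step: P @ r is now the Kalman gain
    return w, P, e

# Usage sketch: N units, P initialized as I/alpha (prior covariance of 1/alpha per weight).
N, alpha = 500, 1.0
rng = np.random.default_rng(0)
w = np.zeros(N)
P = np.eye(N) / alpha
r = np.tanh(rng.standard_normal(N))               # stand-in for the recurrent network's firing rates
w, P, e = force_readout_update(w, P, r, f_target=0.5)
```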
Saved in:
Published in: | Chaos (Woodbury, N.Y.), 2017-12, Vol.27 (12), p.126804-126804 |
---|---|
First author: | Duane, Gregory S. |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Kalman filters; Machine learning; Recurrent neural networks; Weight |
Online access: | Full text |
container_end_page | 126804 |
---|---|
container_issue | 12 |
container_start_page | 126804 |
container_title | Chaos (Woodbury, N.Y.) |
container_volume | 27 |
creator | Duane, Gregory S. |
description | It is shown that the “FORCE” algorithm for learning in arbitrarily connected networks of simple neuronal units can be cast as a Kalman Filter, with a particular state-dependent form for the background error covariances. The resulting interpretation has implications for initialization of the learning algorithm, leads to an extension to include interactions between the weight updates for different neurons, and can represent relationships within groups of multiple target output signals. |
doi_str_mv | 10.1063/1.4990730 |
format | Article |
publisher | American Institute of Physics |
pmid | 29289035 |
eissn | 1089-7682 |
coden | CHAOEH |
tpages | 8 |
fulltext | fulltext |
identifier | ISSN: 1054-1500 |
ispartof | Chaos (Woodbury, N.Y.), 2017-12, Vol.27 (12), p.126804-126804 |
issn | 1054-1500 (print); 1089-7682 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2116018256 |
source | American Institute of Physics (AIP) Journals; Alma/SFX Local Collection |
subjects | Algorithms; Kalman filters; Machine learning; Recurrent neural networks; Weight |
title | “FORCE” learning in recurrent neural networks as data assimilation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T14%3A29%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=%E2%80%9CFORCE%E2%80%9D%20learning%20in%20recurrent%20neural%20networks%20as%20data%20assimilation&rft.jtitle=Chaos%20(Woodbury,%20N.Y.)&rft.au=Duane,%20Gregory%20S.&rft.date=2017-12&rft.volume=27&rft.issue=12&rft.spage=126804&rft.epage=126804&rft.pages=126804-126804&rft.issn=1054-1500&rft.eissn=1089-7682&rft.coden=CHAOEH&rft_id=info:doi/10.1063/1.4990730&rft_dat=%3Cproquest_pubme%3E2116018256%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2116018256&rft_id=info:pmid/29289035&rfr_iscdi=true |