Episodic Memory Theory for the Mechanistic Interpretation of Recurrent Neural Networks
Saved in:
Published in: | arXiv.org 2023-10 |
---|---|
Main authors: | Karuvally, Arjun; Delmastro, Peter; Siegelmann, Hava T |
Format: | Article |
Language: | eng |
Keywords: | Algorithms; Artificial neural networks; Binding; Circuits; Memory; Neural networks; Recurrent neural networks |
Online access: | Full text |
container_title | arXiv.org |
creator | Karuvally, Arjun; Delmastro, Peter; Siegelmann, Hava T |
description | Understanding the intricate operations of Recurrent Neural Networks (RNNs) mechanistically is pivotal for advancing their capabilities and applications. In this pursuit, we propose the Episodic Memory Theory (EMT), illustrating that RNNs can be conceptualized as discrete-time analogs of the recently proposed General Sequential Episodic Memory Model. To substantiate EMT, we introduce a novel set of algorithmic tasks tailored to probe the variable binding behavior in RNNs. Utilizing the EMT, we formulate a mathematically rigorous circuit that facilitates variable binding in these tasks. Our empirical investigations reveal that trained RNNs consistently converge to the variable binding circuit, thus indicating universality in the dynamics of RNNs. Building on these findings, we devise an algorithm to define a privileged basis, which reveals hidden neurons instrumental in the temporal storage and composition of variables, a mechanism vital for the successful generalization in these tasks. We show that the privileged basis enhances the interpretability of the learned parameters and hidden states of RNNs. Our work represents a step toward demystifying the internal mechanisms of RNNs and, for computational neuroscience, serves to bridge the gap between artificial neural networks and neural memory models. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2873068708 |
source | Free E-Journals |
subjects | Algorithms; Artificial neural networks; Binding; Circuits; Memory; Neural networks; Recurrent neural networks |
title | Episodic Memory Theory for the Mechanistic Interpretation of Recurrent Neural Networks |
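The description in the record above mentions three ingredients: algorithmic tasks that probe variable binding, a variable-binding circuit, and a privileged basis for reading hidden states. The Python sketch below is a rough, hypothetical illustration only, not the authors' code or exact algorithm: it builds a toy repeat-copy task, runs a randomly initialized vanilla RNN over it, and re-expresses the hidden-state trajectory in the eigenbasis of the recurrent weight matrix. The task design, all names (`make_repeat_copy_batch`, `privileged_basis_view`, `W_hh`, `W_xh`), and the choice of eigenbasis as a candidate privileged basis are assumptions introduced here for illustration.

```python
# Hypothetical sketch (not the paper's released code): a toy variable-binding
# task plus a change-of-basis view of RNN hidden states.
import numpy as np

rng = np.random.default_rng(0)

def make_repeat_copy_batch(batch, seq_len, dim):
    """Toy task assumed here to probe variable binding: present `seq_len`
    random +/-1 vectors ("variables"), then require the network to echo them
    back during a recall phase, so each variable must be stored and recomposed."""
    x = rng.choice([-1.0, 1.0], size=(batch, seq_len, dim))
    inputs = np.concatenate([x, np.zeros_like(x)], axis=1)   # presentation + recall phase
    targets = np.concatenate([np.zeros_like(x), x], axis=1)  # echo during recall phase
    return inputs, targets

def privileged_basis_view(W_hh, hidden_states):
    """Express hidden-state trajectories in the eigenbasis of the recurrent
    weight matrix W_hh -- one plausible (assumed) choice of basis in which
    stored variables may become easier to read off. Returns complex mode
    coordinates, ordered from slowest- to fastest-decaying mode."""
    eigvals, eigvecs = np.linalg.eig(W_hh)        # W_hh = V diag(eigvals) V^{-1}
    inv_basis = np.linalg.inv(eigvecs)            # maps hidden states to mode coordinates
    coords = hidden_states @ inv_basis.T          # h = V c  =>  c = V^{-1} h (row-vector form)
    order = np.argsort(-np.abs(eigvals))          # slowest-decaying modes first
    return coords[..., order]

# Tiny demo with random (untrained) weights, just to illustrate the shapes involved.
inputs, targets = make_repeat_copy_batch(batch=4, seq_len=3, dim=8)
hidden_dim = 16
W_xh = rng.normal(scale=0.3, size=(8, hidden_dim))           # input-to-hidden weights (hypothetical)
W_hh = rng.normal(scale=0.3, size=(hidden_dim, hidden_dim))  # recurrent weights (hypothetical)

h = np.zeros((4, hidden_dim))
trajectory = []
for t in range(inputs.shape[1]):
    h = np.tanh(h @ W_hh + inputs[:, t, :] @ W_xh)           # vanilla RNN update
    trajectory.append(h)
trajectory = np.stack(trajectory, axis=1)                    # (batch, time, hidden_dim)

coords = privileged_basis_view(W_hh, trajectory)             # same shape, mode coordinates
print(coords.shape, np.abs(coords).max())
```

In a trained network, the expectation under this kind of analysis would be that a small set of slowly decaying modes carries coordinates tracking the stored inputs across the delay, which is the sort of axis-aligned structure a privileged-basis interpretation looks for; whether that matches the paper's specific construction is not verifiable from this record alone.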