Exploring strategy differences between humans and monkeys with recurrent neural networks


Bibliographic Details
Published in: PLoS computational biology, 2023-11, Vol. 19 (11), p. e1011618
Authors: Tsuda, Ben; Richmond, Barry J; Sejnowski, Terrence J
Format: Article
Language: English
Subjects: see list below
Online access: Full text
Journal: PLoS computational biology
Volume: 19
Issue: 11
Pages: e1011618
Creators: Tsuda, Ben; Richmond, Barry J; Sejnowski, Terrence J
Description: Animal models are used to understand principles of human biology. Within cognitive neuroscience, non-human primates are considered the premier model for studying decision-making behaviors in which direct manipulation experiments are still possible. Some prominent studies have brought to light major discrepancies between monkey and human cognition, highlighting problems with unverified extrapolation from monkey to human. Here, we use a parallel model system—artificial neural networks (ANNs)—to investigate a well-established discrepancy identified between monkeys and humans with a working memory task, in which monkeys appear to use a recency-based strategy while humans use a target-selective strategy. We find that ANNs trained on the same task exhibit a progression of behavior from random behavior (untrained) to recency-like behavior (partially trained) and finally to selective behavior (further trained), suggesting monkeys and humans may occupy different points in the same overall learning progression. Surprisingly, what appears to be recency-like behavior in the ANN is in fact an emergent non-recency-based property of the organization of the neural network's state space during its development through training. We find that explicit encouragement of recency behavior during training has a dual effect, not only causing an accentuated recency-like behavior, but also speeding up the learning process altogether, resulting in an efficient shaping mechanism to achieve the optimal strategy. Our results suggest a new explanation for the discrepancy observed between monkeys and humans and reveal that what can appear to be a recency-based strategy in some cases may not be recency at all.
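
The following is a minimal, hypothetical sketch in Python (PyTorch) of the kind of setup the abstract describes: a small recurrent network trained on a toy serial match-to-sample task, a crude measure of how "recency-like" its false alarms are, and an optional auxiliary loss that encourages recency responses early in training as a shaping signal. The task design, network size, shaping weight, and the recency_fraction metric are all illustrative assumptions, not the authors' implementation.

# Minimal, hypothetical sketch (not the authors' code): a recurrent network on a toy
# serial match-to-sample task, with an optional "recency" shaping loss early in training.
import torch
import torch.nn as nn

N_STIM, SEQ_LEN, BATCH, HIDDEN = 8, 6, 64, 64   # illustrative task/network sizes

def make_batch(batch=BATCH):
    # First item of each sequence is the sample; later items are test stimuli.
    seq = torch.randint(0, N_STIM, (batch, SEQ_LEN))
    target_match = (seq == seq[:, :1]).long()                  # selective label: item matches the sample
    target_match[:, 0] = 0                                     # the sample itself is not a response target
    recency_match = torch.zeros_like(target_match)
    recency_match[:, 1:] = (seq[:, 1:] == seq[:, :-1]).long()  # item matches the immediately preceding item
    return seq, target_match, recency_match

class MatchRNN(nn.Module):
    def __init__(self, n_stim=N_STIM, hidden=HIDDEN):
        super().__init__()
        self.embed = nn.Embedding(n_stim, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)                    # match / no-match decision at every step

    def forward(self, seq):
        h, _ = self.rnn(self.embed(seq))
        return self.readout(h)                                 # (batch, seq_len, 2) logits

def recency_fraction(pred, target_match, recency_match):
    # Of the "match" responses made on non-target items, what fraction follow an immediate repeat?
    # A rough, purely illustrative proxy for recency-like behavior.
    false_alarms = (pred == 1) & (target_match == 0)
    if false_alarms.sum() == 0:
        return float("nan")
    return recency_match[false_alarms].float().mean().item()

model = MatchRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
SHAPING_WEIGHT, SHAPING_STEPS = 0.3, 500                       # hypothetical shaping schedule

for step in range(2001):
    seq, target_match, recency_match = make_batch()
    logits = model(seq)
    loss = ce(logits.reshape(-1, 2), target_match.reshape(-1))
    if step < SHAPING_STEPS:                                   # explicit encouragement of recency responses
        loss = loss + SHAPING_WEIGHT * ce(logits.reshape(-1, 2), recency_match.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        with torch.no_grad():
            pred = logits.argmax(-1)
            acc = (pred == target_match).float().mean().item()
            rf = recency_fraction(pred, target_match, recency_match)
            print(f"step {step}: accuracy {acc:.3f}, recency-like fraction of false alarms {rf:.3f}")

Tracking the recency-like fraction of false alarms over training is one way to see the progression the abstract reports (random, then recency-like, then selective); the shaping term illustrates what "explicit encouragement of recency behavior" might look like as an auxiliary loss, under the stated assumptions.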
DOI: 10.1371/journal.pcbi.1011618
Format: Article
Contributor: Buschman, Tim
Publisher: Public Library of Science, San Francisco
Publication date: 2023-11-20
PMID: 37983250
EISSN: 1553-7358
ORCID: 0000-0003-2484-9044; 0000-0002-8234-1540
Rights: Open access; distributed under the Creative Commons CC0 public domain dedication
Is part of: PLoS computational biology, 2023-11, Vol. 19 (11), p. e1011618
ISSN: 1553-7358; 1553-734X
Language: English
Record ID: cdi_plos_journals_3069179694
Source: DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; Public Library of Science (PLoS); PubMed Central
Subjects:
Analysis
Animal experimentation
Animal models
Artificial neural networks
Behavior
Biology and Life Sciences
Cognition
Computer and Information Sciences
Decision making
Geometry
Human-animal relationships
Learning
Memory tasks
Mental task performance
Methods
Monkeys
Monkeys & apes
Neural networks
Neurosciences
Recurrent neural networks
Research and Analysis Methods
Social Sciences
Training
Title: Exploring strategy differences between humans and monkeys with recurrent neural networks