Language models show human-like content effects on reasoning tasks

Reasoning is a key ability for an intelligent system. Large language models (LMs) achieve above-chance performance on abstract reasoning tasks, but exhibit many imperfections. However, human abstract reasoning is also imperfect. For example, human reasoning is affected by our real-world knowledge and beliefs, and shows notable "content effects": humans reason more reliably when the semantic content of a problem supports the correct logical inferences. These content-entangled reasoning patterns play a central role in debates about the fundamental nature of human intelligence. Here, we investigate whether language models, whose prior expectations capture some aspects of human knowledge, similarly mix content into their answers to logical problems. We explore this question across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task. We evaluate state-of-the-art large language models, as well as humans, and find that the language models reflect many of the same patterns observed in humans across these tasks: like humans, models answer more accurately when the semantic content of a task supports the logical inferences. These parallels are reflected both in answer patterns and in lower-level features like the relationship between model answer distributions and human response times. Our findings have implications for understanding both these cognitive effects in humans, and the factors that contribute to language model performance.
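To make the "content effect" described above concrete, here is a minimal, hypothetical sketch (not code from the paper): it scores a validity judge on syllogisms whose believable conclusions either agree or conflict with the logically correct answer. The example items and the belief_driven_judge stub are assumptions for illustration only; a real evaluation would swap in a language model call.

```python
# Hypothetical illustration: quantify a content effect on syllogistic reasoning
# by splitting accuracy into belief-consistent vs. belief-inconsistent items.
from collections import defaultdict

# Each item: (argument text, logically valid?, conclusion believable?)
# These four toy syllogisms are illustrative, not taken from the paper.
ITEMS = [
    ("All guns are weapons. All weapons are dangerous. "
     "Therefore all guns are dangerous.", True, True),
    ("All flowers are animals. All animals can move. "
     "Therefore all flowers can move.", True, False),
    ("All weapons are dangerous. All guns are dangerous. "
     "Therefore all guns are weapons.", False, True),
    ("All birds are animals. Some animals are fish. "
     "Therefore some birds are fish.", False, False),
]


def belief_driven_judge(argument: str) -> bool:
    """Toy baseline that ignores the logic and answers purely from the
    believability of the conclusion; swap in a language model call here."""
    believability = {text: believable for text, _, believable in ITEMS}
    return believability[argument]


def content_effect(judge) -> dict:
    """Accuracy split by whether the believable answer matches the valid one."""
    buckets = defaultdict(list)
    for text, valid, believable in ITEMS:
        condition = "consistent" if valid == believable else "inconsistent"
        buckets[condition].append(judge(text) == valid)
    return {cond: sum(hits) / len(hits) for cond, hits in buckets.items()}


if __name__ == "__main__":
    # The purely belief-driven judge shows the maximal content effect:
    # perfect on consistent items, at floor on inconsistent ones.
    print(content_effect(belief_driven_judge))
    # e.g. {'consistent': 1.0, 'inconsistent': 0.0}
```

Under this framing, a model with a human-like content effect scores higher in the consistent condition than in the inconsistent one, whereas a purely logical reasoner scores equally in both.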

Bibliographic details
Published in: arXiv.org, 2024-07
Authors: Dasgupta, Ishita; Lampinen, Andrew K; Chan, Stephanie C Y; Sheahan, Hannah R; Creswell, Antonia; Kumaran, Dharshan; McClelland, James L; Hill, Felix
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Cornell University Library, Ithaca (arXiv.org)
Subjects: Cognition & reasoning; Language; Reasoning
Online access: Full text