Cognitive Effects in Large Language Models
Large Language Models (LLMs) such as ChatGPT have received enormous attention over the past year and are now used by hundreds of millions of people every day. The rapid adoption of this technology naturally raises questions about the possible biases such models might exhibit. In this work, we tested one of these models (GPT-3) on a range of cognitive effects, which are systematic patterns that are usually found in human cognitive tasks. We found that LLMs are indeed prone to several human cognitive effects. Specifically, we show that GPT-3 exhibits the priming, distance, SNARC, and size congruity effects, while the anchoring effect is absent. We describe our methodology, and specifically the way we converted real-world experiments to text-based experiments. Finally, we speculate on the possible reasons why GPT-3 exhibits these effects and discuss whether they are imitated or reinvented.
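The abstract describes converting real-world cognitive experiments into text-based ones. As an illustration of what such a conversion might look like, here is a minimal Python sketch probing the numerical distance effect through the legacy OpenAI completions API (openai-python < 1.0). This is an assumption-laden sketch, not the authors' actual materials: the prompt wording, the scoring loop, and the use of `text-davinci-003` as a stand-in for "GPT-3" are all illustrative choices.

```python
# Hypothetical sketch of a text-based cognitive-effect probe.
# Assumptions: legacy openai-python (< 1.0) completions API, and
# "text-davinci-003" as a stand-in for the GPT-3 model tested in the paper.
import openai

openai.api_key = "sk-..."  # placeholder; load from an env var in practice

def compare(a: int, b: int) -> str:
    """Ask the model which of two numbers is larger, expecting a one-token answer."""
    prompt = (
        f"Which number is larger, {a} or {b}? "
        f"Answer with the number only.\nAnswer:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=2,
        temperature=0.0,  # deterministic output so accuracy can be scored
    )
    return resp.choices[0].text.strip()

# Distance effect: humans compare numbers faster and more accurately when
# the numeric distance is large (1 vs 9) than when it is small (4 vs 5).
# A text-based analogue compares accuracy across distances.
for a, b in [(1, 9), (4, 5)]:
    print(f"{a} vs {b} ->", compare(a, b))
```

A fuller replication would aggregate accuracy (or token log-probabilities) over many number pairs per distance, since a single trial cannot reveal a systematic effect.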
Published in: | arXiv.org, 2023-08-28 |
---|---|
Main authors: | Shaki, Jonathan; Kraus, Sarit; Wooldridge, Michael |
Format: | Article |
Language: | English |
Subjects: | Cognitive tasks; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Large language models |
DOI: | 10.48550/arxiv.2308.14337 |
EISSN: | 2331-8422 |
Online access: | Full text |