Emerging Reliance Behaviors in Human-AI Text Generation: Hallucinations, Data Quality Assessment, and Cognitive Forcing Functions

In this paper, we investigate the impact of hallucinations and cognitive forcing functions in human-AI collaborative text generation tasks, focusing on the use of Large Language Models (LLMs) to assist in generating high-quality conversational data. LLMs require data for fine-tuning, a crucial step in enhancing their performance. In the context of conversational customer support, the data takes the form of a conversation between a human customer and an agent and can be generated with an AI assistant. In our inquiry, involving 11 users who each completed 8 tasks, resulting in a total of 88 tasks, we found that the presence of hallucinations negatively impacts the quality of data. We also found that, although the cognitive forcing function does not always mitigate the detrimental effects of hallucinations on data quality, the presence of cognitive forcing functions and hallucinations together impacts data quality and influences how users leverage the AI responses presented to them. Our analysis of user behavior reveals distinct patterns of reliance on AI-generated responses, highlighting the importance of managing hallucinations in AI-generated content within conversational AI contexts.
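
The paper does not include code, but the setup it describes (an AI assistant drafting agent turns in customer-support conversations that later serve as fine-tuning data) can be illustrated with a minimal sketch. The JSONL layout and all field names below are assumptions for illustration, not the authors' schema; flags such as "ai_generated" and "reviewed" are hypothetical markers for tracking reliance on AI-drafted turns.

```python
# A minimal, hypothetical sketch of one conversational fine-tuning record.
# The schema (task_id, turns, role names, flags) is assumed for illustration;
# the paper does not publish its data format.
import json

conversation = {
    "task_id": "task-001",  # hypothetical identifier for one study task
    "turns": [
        {"role": "customer", "text": "My router keeps dropping the connection."},
        {
            "role": "agent",
            "text": "Sorry to hear that. Have you tried restarting the router?",
            "ai_generated": True,  # turn drafted by the AI assistant
            "reviewed": True,      # human accepted or edited the draft
        },
    ],
}

# Append one record per line (JSONL), a common layout for fine-tuning corpora.
with open("conversations.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(conversation) + "\n")
```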

Bibliographic Details

Published in: arXiv.org, 2024-09
Main authors: Ashktorab, Zahra; Pan, Qian; Geyer, Werner; Desmond, Michael; Danilevsky, Marina; Johnson, James M; Dugan, Casey; Bachman, Michelle
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Rights: Published under a Creative Commons Attribution 4.0 license (CC BY 4.0)
Subjects: Cognitive tasks; Conversational artificial intelligence; Customers; Hallucinations; Large language models; Quality assessment; User behavior
Online access: Full text