Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance

We investigate the impact of politeness levels in prompts on the performance of large language models (LLMs). Polite language in human communications often garners more compliance and effectiveness, while rudeness can cause aversion, impacting response quality. We consider that LLMs mirror human communication traits, suggesting they align with human cultural norms. We assess the impact of politeness in prompts on LLMs across English, Chinese, and Japanese tasks. We observed that impolite prompts often result in poor performance, but overly polite language does not guarantee better outcomes. The best politeness level differs according to the language. This phenomenon suggests that LLMs not only reflect human behavior but are also influenced by language, particularly in different cultural contexts. Our findings highlight the need to factor in politeness for cross-cultural natural language processing and LLM usage.
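The study's core setup, as the abstract describes it, is to hold a task fixed while varying only the politeness level of the surrounding prompt wording. A minimal sketch of that idea is below; the template wording, the four-level scale, and the function name are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch (not the authors' code): wrap one fixed task in prompt
# templates at several politeness levels, so that any difference in LLM
# output quality can be attributed to politeness alone. The wording and
# level numbering here are invented for illustration.

POLITENESS_TEMPLATES = {
    1: "Answer this. {task}",                            # blunt / impolite
    2: "Please answer the following. {task}",            # neutral
    3: "Could you kindly answer the following? {task}",  # polite
    4: "I would be most grateful if you could possibly "
       "consider answering the following. {task}",       # overly polite
}


def build_prompts(task: str) -> dict:
    """Return the same task wrapped at each politeness level."""
    return {level: tmpl.format(task=task)
            for level, tmpl in POLITENESS_TEMPLATES.items()}


if __name__ == "__main__":
    for level, prompt in sorted(build_prompts(
            "Summarize the article in one sentence.").items()):
        print(level, prompt)
```

Each prompt would then be sent to the model under test and scored on the downstream task; repeating this per language (English, Chinese, Japanese) is what lets the paper compare the best politeness level across cultures.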

Detailed description

Saved in:
Bibliographic Details
Main Authors: Yin, Ziqi; Wang, Hao; Horio, Kaito; Kawahara, Daisuke; Sekine, Satoshi
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Yin, Ziqi ; Wang, Hao ; Horio, Kaito ; Kawahara, Daisuke ; Sekine, Satoshi
description We investigate the impact of politeness levels in prompts on the performance of large language models (LLMs). Polite language in human communications often garners more compliance and effectiveness, while rudeness can cause aversion, impacting response quality. We consider that LLMs mirror human communication traits, suggesting they align with human cultural norms. We assess the impact of politeness in prompts on LLMs across English, Chinese, and Japanese tasks. We observed that impolite prompts often result in poor performance, but overly polite language does not guarantee better outcomes. The best politeness level is different according to the language. This phenomenon suggests that LLMs not only reflect human behavior but are also influenced by language, particularly in different cultural contexts. Our findings highlight the need to factor in politeness for cross-cultural natural language processing and LLM usage.
doi_str_mv 10.48550/arxiv.2402.14531
format Article
fullrecord (condensed) record type: article ; creation date: 2024-02-22 ; rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read) ; source type: Open Access Repository ; source: arXiv.org
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2402.14531
ispartof
issn
language eng
recordid cdi_arxiv_primary_2402_14531
source arXiv.org
subjects Computer Science - Computation and Language
title Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T10%3A06%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Should%20We%20Respect%20LLMs?%20A%20Cross-Lingual%20Study%20on%20the%20Influence%20of%20Prompt%20Politeness%20on%20LLM%20Performance&rft.au=Yin,%20Ziqi&rft.date=2024-02-22&rft_id=info:doi/10.48550/arxiv.2402.14531&rft_dat=%3Carxiv_GOX%3E2402_14531%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true