Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis

Recent advances in prompt engineering enable large language models (LLMs) to solve multi-hop logical reasoning problems with impressive accuracy. However, there is little existing work investigating the robustness of LLMs with few-shot prompting techniques. Therefore, we introduce a systematic approach to test the robustness of LLMs in multi-hop reasoning tasks via domain-agnostic perturbations. We include perturbations at multiple levels of abstraction (e.g. lexical perturbations such as typos, and semantic perturbations such as the inclusion of intermediate reasoning steps in the questions) to conduct behavioral analysis on the LLMs. Throughout our experiments, we find that models are more sensitive to certain perturbations, such as replacing words with their synonyms. We also demonstrate that increasing the proportion of perturbed exemplars in the prompts improves the robustness of few-shot prompting methods.
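
The abstract describes perturbing few-shot exemplars (for example, introducing typos) and varying the fraction of perturbed exemplars in the prompt. The sketch below is a minimal, hypothetical illustration of that kind of setup, not the authors' released code: the add_typos and build_prompt helpers, the Q/A prompt format, and the toy exemplars are all assumptions made for demonstration.

import random


def add_typos(text, rate=0.05, rng=None):
    """Lexical perturbation: randomly swap adjacent letters to simulate typos.

    Illustrative only; the paper's actual perturbation procedures may differ.
    """
    rng = rng or random.Random(0)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def build_prompt(exemplars, question, perturb_fraction=0.5, rng=None):
    """Build a few-shot prompt in which a chosen fraction of exemplars is perturbed."""
    rng = rng or random.Random(0)
    n_perturb = round(perturb_fraction * len(exemplars))
    perturbed = set(rng.sample(range(len(exemplars)), n_perturb))
    blocks = []
    for i, (q, a) in enumerate(exemplars):
        q_text = add_typos(q, rng=rng) if i in perturbed else q
        blocks.append(f"Q: {q_text}\nA: {a}")
    blocks.append(f"Q: {question}\nA:")  # the unperturbed test question goes last
    return "\n\n".join(blocks)


# Toy multi-hop exemplars (hypothetical; not taken from the paper's evaluation data).
exemplars = [
    ("Every cat is a mammal. Fae is a cat. Is Fae a mammal?",
     "Fae is a cat. Every cat is a mammal. So Fae is a mammal. Yes."),
    ("Every rose is a flower. Sam is a rose. Is Sam a flower?",
     "Sam is a rose. Every rose is a flower. So Sam is a flower. Yes."),
]
print(build_prompt(exemplars, "Every whale is an animal. Polly is a whale. Is Polly an animal?"))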

Bibliographic Details
Main Authors: Zheng, Hongyi; Saparov, Abulhair
Format: Article
Language: English
DOI: 10.48550/arXiv.2311.00258
Published: 2023-10-31
Source: arXiv.org
Subjects: Computer Science - Computation and Language; Computer Science - Learning