CoPrompter: User-Centric Evaluation of LLM Instruction Alignment for Improved Prompt Engineering


Bibliographic Details
Main authors: Joshi, Ishika; Shahid, Simra; Venneti, Shreeya; Vasu, Manushree; Zheng, Yantao; Li, Yunyao; Krishnamurthy, Balaji; Chan, Gromit Yeuk-Yin
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Joshi, Ishika; Shahid, Simra; Venneti, Shreeya; Vasu, Manushree; Zheng, Yantao; Li, Yunyao; Krishnamurthy, Balaji; Chan, Gromit Yeuk-Yin
description Ensuring that large language models' (LLMs) responses align with prompt instructions is crucial for application development. Our formative study with industry professionals showed that achieving this alignment requires heavy human involvement and tedious trial and error, especially when a prompt contains many instructions. To address these challenges, we introduce CoPrompter, a framework that identifies misalignment by assessing multiple LLM responses against criteria. It proposes a method to generate evaluation criteria questions derived directly from prompt requirements and an interface that turns these questions into a user-editable checklist. Our user study with industry prompt engineers shows that CoPrompter improves their ability to identify and refine instruction alignment with prompt requirements over traditional methods, helps them understand where and how frequently models fail to follow the user's prompt requirements, and helps them clarify their own requirements, giving them greater control over the response evaluation process. We also present design lessons that underscore our system's potential to streamline the prompt engineering process.
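The evaluation loop the abstract describes, turning each prompt requirement into a checklist question and scoring multiple LLM responses against it to surface where and how often the model misaligns, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: all function names are hypothetical, and the simple keyword predicates below stand in for the LLM-based criterion judgments used in the actual system.

```python
# Hypothetical sketch of a CoPrompter-style criteria evaluation.
# Each prompt requirement becomes a yes/no checklist question; every
# question is checked against every sampled response, and the
# per-criterion pass rate shows where the model fails to align.

def make_criteria(requirements):
    """Turn each prompt requirement into a checklist question
    (stub for the LLM-generated criteria described in the paper)."""
    return [f"Does the response satisfy: {r}?" for r in requirements]

def alignment_report(criteria_checks, responses):
    """criteria_checks: list of (question, predicate) pairs, where each
    predicate stands in for an LLM judge. Returns per-criterion pass rate."""
    report = {}
    for question, check in criteria_checks:
        passes = sum(1 for resp in responses if check(resp))
        report[question] = passes / len(responses)
    return report

# Example: two hypothetical requirements checked over three responses.
questions = make_criteria(["be under 20 words", "mention a price"])
checks = list(zip(questions, [
    lambda r: len(r.split()) < 20,   # stand-in for an LLM length judgment
    lambda r: "$" in r,              # stand-in for an LLM content judgment
]))
responses = [
    "Great laptop for $999.",
    "This laptop has a fast CPU and long battery life.",
    "Only $999 this week.",
]
report = alignment_report(checks, responses)
```

Reporting a pass rate per criterion, rather than a single aggregate score, mirrors the abstract's claim that users want to see *where* and *how frequently* responses violate individual requirements, so they can edit the checklist or the prompt itself.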
doi_str_mv 10.48550/arxiv.2411.06099
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2411.06099
language eng
recordid cdi_arxiv_primary_2411_06099
source arXiv.org
subjects Computer Science - Human-Computer Interaction
title CoPrompter: User-Centric Evaluation of LLM Instruction Alignment for Improved Prompt Engineering
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-20T04%3A39%3A21IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=CoPrompter:%20User-Centric%20Evaluation%20of%20LLM%20Instruction%20Alignment%20for%20Improved%20Prompt%20Engineering&rft.au=Joshi,%20Ishika&rft.date=2024-11-09&rft_id=info:doi/10.48550/arxiv.2411.06099&rft_dat=%3Carxiv_GOX%3E2411_06099%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true