LLMs Can Patch Up Missing Relevance Judgments in Evaluation
Unjudged documents, or holes, in information retrieval benchmarks are considered non-relevant in evaluation, yielding no gains in measuring effectiveness. However, these missing judgments may inadvertently introduce biases into the evaluation, as their prevalence for a retrieval model is heavily contingent on the pooling process.
Saved in:
| Main authors: | Upadhyay, Shivani; Kamalloo, Ehsan; Lin, Jimmy |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | Computer Science - Information Retrieval |
| Online access: | Order full text |
creator | Upadhyay, Shivani; Kamalloo, Ehsan; Lin, Jimmy |
description | Unjudged documents, or holes, in information retrieval benchmarks are considered non-relevant in evaluation, yielding no gains in measuring effectiveness. However, these missing judgments may inadvertently introduce biases into the evaluation, as their prevalence for a retrieval model is heavily contingent on the pooling process. Thus, filling holes becomes crucial in ensuring reliable and accurate evaluation. Collecting human judgments for all documents is cumbersome and impractical. In this paper, we aim to leverage large language models (LLMs) to automatically label unjudged documents. Our goal is to instruct an LLM, using detailed instructions, to assign fine-grained relevance judgments to holes. To this end, we systematically simulate scenarios with varying degrees of holes by randomly dropping relevant documents from the relevance judgments in TREC DL tracks. Our experiments reveal a strong correlation between our LLM-based method and ground-truth relevance judgments. Based on our simulation experiments conducted on three TREC DL datasets, in the extreme scenario of retaining only 10% of judgments, our method achieves average Kendall tau correlations of 0.87 and 0.92 for Vicuña-7B and GPT-3.5 Turbo, respectively. (See the sketch after the record fields below for an illustrative simulation of this setup.) |
doi_str_mv | 10.48550/arxiv.2405.04727 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2405.04727 |
language | eng |
recordid | cdi_arxiv_primary_2405_04727 |
source | arXiv.org |
subjects | Computer Science - Information Retrieval |
title | LLMs Can Patch Up Missing Relevance Judgments in Evaluation |
url | https://arxiv.org/abs/2405.04727 |
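
The description field above outlines the evaluation setup only in prose. Below is a minimal Python sketch of how such a pipeline could look: thin the relevance judgments (qrels) to simulate holes, patch the holes with graded labels from some judge, and compare system rankings under the original and patched qrels with Kendall's tau. This is not the authors' code; the function names, the qrels-as-nested-dict layout, and the `judge` callable (standing in for an instructed LLM such as Vicuña-7B or GPT-3.5 Turbo) are assumptions made for illustration.

```python
# Illustrative sketch only: helper names, data layouts, and the `judge`
# callable are hypothetical stand-ins, not the paper's released code.
import random
from scipy.stats import kendalltau


def drop_judgments(qrels, keep_fraction, seed=0):
    """Simulate holes: per query, randomly retain only `keep_fraction`
    of the positively judged documents; non-relevant labels are kept."""
    rng = random.Random(seed)
    thinned = {}
    for qid, grades in qrels.items():
        relevant = [d for d, g in grades.items() if g > 0]
        kept = set(rng.sample(relevant, int(len(relevant) * keep_fraction)))
        thinned[qid] = {d: g for d, g in grades.items() if g == 0 or d in kept}
    return thinned


def patch_holes(qrels, retrieved, judge):
    """Fill holes: every retrieved-but-unjudged document gets a graded label
    (e.g., 0-3) from `judge`, which in the paper is an instructed LLM."""
    patched = {qid: dict(grades) for qid, grades in qrels.items()}
    for qid, docids in retrieved.items():
        for docid in docids:
            if docid not in patched.setdefault(qid, {}):
                patched[qid][docid] = judge(qid, docid)
    return patched


def kendall_agreement(scores_a, scores_b):
    """Kendall tau between two rankings of the same retrieval systems,
    e.g., nDCG@10 under the original qrels vs. the patched qrels."""
    systems = sorted(scores_a)
    tau, _ = kendalltau([scores_a[s] for s in systems],
                        [scores_b[s] for s in systems])
    return tau
```

A judge that always returns 0 reproduces the standard assumption that holes are non-relevant; swapping in an LLM-backed judge is what the abstract evaluates, reporting average Kendall tau of 0.87 (Vicuña-7B) and 0.92 (GPT-3.5 Turbo) when only 10% of judgments are retained.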