Generative and Pseudo-Relevant Feedback for Sparse, Dense and Learned Sparse Retrieval

Pseudo-relevance feedback (PRF) is a classical approach to address lexical mismatch by enriching the query using first-pass retrieval. Moreover, recent work on generative-relevance feedback (GRF) shows that query expansion models using text generated from large language models can improve sparse retrieval without depending on first-pass retrieval effectiveness. This work extends GRF to dense and learned sparse retrieval paradigms with experiments over six standard document ranking benchmarks. We find that GRF improves over comparable PRF techniques by around 10% on both precision and recall-oriented measures. Nonetheless, query analysis shows that GRF and PRF have contrasting benefits, with GRF providing external context not present in first-pass retrieval, whereas PRF grounds the query to the information contained within the target corpus. Thus, we propose combining generative and pseudo-relevance feedback ranking signals to achieve the benefits of both feedback classes, which significantly increases recall over PRF methods on 95% of experiments.
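The combination described in the abstract, fusing pseudo-relevance and generative-relevance feedback ranking signals, can be illustrated with a minimal sketch. The record does not specify the authors' fusion method, so the reciprocal rank fusion used below, along with the document IDs and function names, is an illustrative assumption rather than their published procedure.

```python
# Illustrative sketch (not the authors' exact method): fusing a PRF-based
# ranking and a GRF-based ranking for one query via reciprocal rank fusion.
# Document IDs below are made-up placeholders.

from collections import defaultdict


def reciprocal_rank_fusion(rankings, k=60):
    """Combine several ranked lists of document IDs into one fused ranking.

    rankings: list of ranked lists, each an ordered list of document IDs
    k: smoothing constant from the standard RRF formula
    """
    fused = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            fused[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)


# Hypothetical results for a query expanded with PRF terms (corpus-grounded)
prf_ranking = ["d3", "d1", "d7", "d2"]
# Hypothetical results for a query expanded with LLM-generated text (GRF)
grf_ranking = ["d5", "d3", "d2", "d9"]

for doc_id, score in reciprocal_rank_fusion([prf_ranking, grf_ranking]):
    print(doc_id, round(score, 4))
```

Reciprocal rank fusion is used here only because it requires no score normalisation; any interpolation of the two ranked lists would demonstrate the same idea of combining corpus-grounded and externally generated feedback signals.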

Bibliographic Details
Main Authors: Mackie, Iain; Chatterjee, Shubham; Dalton, Jeffrey
Format: Article
Language: English
Published: 2023-05-12
Subjects: Computer Science - Information Retrieval
DOI: 10.48550/arxiv.2305.07477
Source: arXiv.org
Online Access: https://arxiv.org/abs/2305.07477