Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking
Pairing a lexical retriever with a neural re-ranking model has set state-of-the-art performance on large-scale information retrieval datasets. This pipeline covers scenarios like question answering or navigational queries; for information-seeking scenarios, however, users often provide information on whether a document is relevant to their query in the form of clicks or explicit feedback. Therefore, in this work, we explore how relevance feedback can be directly integrated into neural re-ranking models by adopting few-shot and parameter-efficient learning techniques. Specifically, we introduce a kNN approach that re-ranks documents based on their similarity with the query and with the documents the user considers relevant. Further, we explore Cross-Encoder models that we pre-train using meta-learning and subsequently fine-tune for each query, training only on the feedback documents. To evaluate our different integration strategies, we transform four existing information retrieval datasets into the relevance feedback scenario. Extensive experiments demonstrate that integrating relevance feedback directly into neural re-ranking models improves their performance, and fusing lexical ranking with our best-performing neural re-ranker outperforms all other methods by 5.2 nDCG@20.
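The kNN re-ranking idea from the abstract can be illustrated with a short sketch: candidate documents are re-scored by their embedding similarity both to the query and to the documents the user has marked as relevant. This is a minimal illustration under stated assumptions, not the authors' released implementation; the encoder (`all-MiniLM-L6-v2`), the interpolation weight `alpha`, and the helper name `rerank_with_feedback` are all illustrative choices.

```python
# Sketch of kNN-style re-ranking with relevance feedback (illustrative, not the
# paper's code): each candidate is scored by a weighted mix of its cosine
# similarity to the query and its mean similarity to the user's feedback documents.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder, not specified by the paper

def rerank_with_feedback(query, candidates, feedback_docs, alpha=0.5):
    """Return candidates sorted by alpha * sim(query) + (1 - alpha) * mean sim(feedback)."""
    q = model.encode([query], normalize_embeddings=True)        # shape (1, d)
    c = model.encode(candidates, normalize_embeddings=True)     # shape (n, d)
    f = model.encode(feedback_docs, normalize_embeddings=True)  # shape (k, d)

    query_sim = (c @ q.T).squeeze(-1)      # cosine similarity of each candidate to the query
    feedback_sim = (c @ f.T).mean(axis=1)  # mean similarity to the relevant (feedback) documents

    scores = alpha * query_sim + (1 - alpha) * feedback_sim
    order = np.argsort(-scores)
    return [(candidates[i], float(scores[i])) for i in order]
```

The paper additionally fuses lexical rankings with its best neural re-ranker; in the same spirit, the score above could be linearly interpolated with a normalized lexical (e.g., BM25) score before sorting.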
Saved in:
Main authors: | Baumgärtner, Tim; Ribeiro, Leonardo F. R.; Reimers, Nils; Gurevych, Iryna |
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language; Computer Science - Information Retrieval |
Online access: | Order full text |
creator | Baumgärtner, Tim; Ribeiro, Leonardo F. R.; Reimers, Nils; Gurevych, Iryna |
doi_str_mv | 10.48550/arxiv.2210.10695 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2210.10695 |
language | eng |
recordid | cdi_arxiv_primary_2210_10695 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Information Retrieval |
title | Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T11%3A36%3A17IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Incorporating%20Relevance%20Feedback%20for%20Information-Seeking%20Retrieval%20using%20Few-Shot%20Document%20Re-Ranking&rft.au=Baumg%C3%A4rtner,%20Tim&rft.date=2022-10-19&rft_id=info:doi/10.48550/arxiv.2210.10695&rft_dat=%3Carxiv_GOX%3E2210_10695%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |