Retrieval for Extremely Long Queries and Documents with RPRS: A Highly Efficient and Effective Transformer-based Re-Ranker

Retrieval with extremely long queries and documents is a well-known and challenging task in information retrieval, commonly known as Query-by-Document (QBD) retrieval. Transformer models specifically designed to handle long input sequences have not shown high effectiveness on QBD tasks in previous work. We propose a re-ranker based on the novel Proportional Relevance Score (RPRS) to compute the relevance score between a query and the top-k candidate documents. Our extensive evaluation shows that RPRS obtains significantly better results than state-of-the-art models on five different datasets. Furthermore, RPRS is highly efficient, since all documents can be pre-processed, embedded, and indexed before query time, which gives our re-ranker a complexity of O(N), where N is the total number of sentences in the query and candidate documents. Our method also addresses the problem of low-resource training in QBD retrieval tasks: it does not need large amounts of training data, and its only three parameters, each with a limited range, can be optimized with a grid search even when only a small amount of labeled data is available. Our detailed analysis shows that RPRS benefits from covering the full length of candidate documents and queries.
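The abstract describes sentence-level scoring over pre-embedded documents, but does not give the exact RPRS formula. The following is a minimal sketch of the general idea only — a proportional, sentence-level relevance score between a query's sentences and a candidate's sentences — under assumed details: cosine similarity, a similarity threshold `tau`, and a mixing weight `alpha` are illustrative parameters, not the paper's.

```python
# Sketch of a sentence-level proportional relevance re-ranker over
# pre-computed sentence embeddings. Illustrative only, not the exact RPRS.
import numpy as np

def cosine_sim_matrix(q_emb: np.ndarray, d_emb: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between query and document sentence embeddings."""
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    d = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    return q @ d.T

def proportional_relevance(q_emb, d_emb, tau=0.5, alpha=0.5):
    """Weighted proportion of query sentences whose best-matching document
    sentence exceeds the threshold tau, and vice versa. tau and alpha are
    hypothetical tunable parameters (grid-searchable on small labeled data)."""
    sims = cosine_sim_matrix(q_emb, d_emb)
    q_covered = (sims.max(axis=1) >= tau).mean()  # query-side coverage
    d_covered = (sims.max(axis=0) >= tau).mean()  # document-side coverage
    return alpha * q_covered + (1 - alpha) * d_covered

def rerank(q_emb, candidates):
    """candidates: list of (doc_id, doc_sentence_embeddings); highest score first."""
    scored = [(doc_id, proportional_relevance(q_emb, d_emb))
              for doc_id, d_emb in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

Because the document embeddings can be computed and indexed offline, query-time work reduces to one similarity pass over sentences, consistent with the O(N) claim in the abstract.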

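The abstract notes that the method's three parameters have limited ranges and can be tuned by grid search on a small labeled set. A generic sketch of that procedure is below; the parameter names (`tau`, `alpha`, `k`) and the evaluation callback are placeholders, since the paper's actual parameters are not given in this record.

```python
# Hedged sketch: exhaustive grid search over a small three-parameter space,
# maximizing a ranking metric (e.g. MAP/nDCG) computed by a caller-supplied
# eval_fn on a small labeled query set. Parameter names are illustrative.
from itertools import product

def grid_search(score_fn, param_grid, eval_fn):
    """score_fn(q, d, **params) -> relevance score.
    param_grid: dict mapping parameter name -> list of candidate values.
    eval_fn(scorer) -> metric value for a fixed-parameter scoring function.
    Returns (best_params, best_metric)."""
    best_params, best_metric = None, float("-inf")
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        metric = eval_fn(lambda q, d: score_fn(q, d, **params))
        if metric > best_metric:
            best_metric, best_params = metric, params
    return best_params, best_metric
```

With three parameters over limited ranges, the grid has only a few dozen combinations, so this tuning is cheap even when labeled data is scarce.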
Bibliographic Details
Published in: ACM Transactions on Information Systems, 2024-09, Vol. 42 (5), p. 1-32, Article 115
Authors: Askari, Arian; Verberne, Suzan; Abolghasemi, Amin; Kraaij, Wessel; Pasi, Gabriella
Format: Article
Language: English
Subjects: Information systems; Novelty in information retrieval
Online access: Full text
DOI: 10.1145/3631938
ISSN: 1046-8188
EISSN: 1558-2868
Source: ACM Digital Library
Subjects: Information systems; Novelty in information retrieval