MAIR: A Massive Benchmark for Evaluating Instructed Retrieval

Recent information retrieval (IR) models are pre-trained and instruction-tuned on massive datasets and tasks, enabling them to perform well on a wide range of tasks and potentially generalize to unseen tasks with instructions. However, existing IR benchmarks focus on a limited scope of tasks, making them insufficient for evaluating the latest IR models. In this paper, we propose MAIR (Massive Instructed Retrieval Benchmark), a heterogeneous IR benchmark that includes 126 distinct IR tasks across 6 domains, collected from existing datasets. We benchmark state-of-the-art instruction-tuned text embedding models and re-ranking models. Our experiments reveal that instruction-tuned models generally achieve superior performance compared to non-instruction-tuned models on MAIR. Additionally, our results suggest that current instruction-tuned text embedding models and re-ranking models still lack effectiveness in specific long-tail tasks. MAIR is publicly available at https://github.com/sunnweiwei/Mair.
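
The benchmark targets "instructed retrieval": the retriever receives a natural-language task instruction alongside the query and is expected to adapt its ranking to that instruction. As a rough illustration of the pattern, the Python sketch below prepends an instruction to the query before embedding and ranks documents by cosine similarity. The model name, instruction template, and example data are assumptions chosen for illustration; this is not MAIR's evaluation pipeline (see the repository linked in the abstract for that).

    # Minimal sketch of instruction-conditioned retrieval -- NOT MAIR's own code.
    # Assumptions: a generic sentence-transformers embedding model, and the common
    # convention of prepending the task instruction to the query before encoding.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

    instruction = "Given a legal question, retrieve the statute that answers it."
    query = "What is the time limit for suing over a broken written contract?"
    documents = [
        "An action for breach of a written contract must be brought within four years.",
        "A will must be signed by the testator in the presence of two witnesses.",
        "Negligence claims require duty, breach, causation, and damages.",
    ]

    # Instruction-tuned retrievers see the instruction and query together.
    query_emb = model.encode(f"{instruction} {query}", convert_to_tensor=True)
    doc_embs = model.encode(documents, convert_to_tensor=True)

    scores = util.cos_sim(query_emb, doc_embs)[0]  # one score per document
    for rank, idx in enumerate(scores.argsort(descending=True).tolist(), start=1):
        print(f"{rank}. score={scores[idx].item():.3f}  {documents[idx]}")

Changing the instruction should change the ranking for the same query; how reliably models honor that conditioning across 126 tasks is what the benchmark measures.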

Bibliographic Details
Authors: Sun, Weiwei; Shi, Zhengliang; Wu, Jiulong; Yan, Lingyong; Ma, Xinyu; Liu, Yiding; Cao, Min; Yin, Dawei; Ren, Zhaochun
Format: Article
Language: English
Subjects: Computer Science - Information Retrieval
Date: 2024-10-13
DOI: 10.48550/arxiv.2410.10127
Source: arXiv.org
Rights: http://creativecommons.org/licenses/by-nc-nd/4.0
Online Access: Full text at https://arxiv.org/abs/2410.10127