ResearchArena: Benchmarking LLMs' Ability to Collect and Organize Information as Research Agents

Bibliographic Details
Main Authors: Kang, Hao; Xiong, Chenyan
Format: Article
Language: eng
creator Kang, Hao; Xiong, Chenyan
description Large language models (LLMs) have exhibited remarkable performance across various tasks in natural language processing. Nevertheless, challenges still arise when these tasks demand domain-specific expertise and advanced analytical skills, such as conducting research surveys on a designated topic. In this research, we develop ResearchArena, a benchmark that measures LLM agents' ability to conduct academic surveys, an initial step of the academic research process. Specifically, we deconstruct the surveying process into three stages: 1) information discovery: locating relevant papers; 2) information selection: assessing papers' importance to the topic; and 3) information organization: organizing papers into meaningful structures. In particular, we establish an offline environment comprising 12.0M full-text academic papers and 7.9K survey papers, which evaluates agents' ability to locate supporting materials for composing a survey on a topic, rank the located papers by their impact, and organize them into a hierarchical knowledge mind-map. With this benchmark, we conduct preliminary evaluations of existing techniques and find that all LLM-based methods under-perform when compared to basic keyword-based retrieval techniques, highlighting substantial opportunities for future research.
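The description above deconstructs the surveying task into three stages. Below is a minimal, purely illustrative Python sketch of such a pipeline; the Paper and MindMapNode types, the keyword-overlap relevance test, and the citation-count impact proxy are all assumptions made for illustration, not details taken from the paper or its benchmark.

```python
# Illustrative sketch of a three-stage surveying pipeline
# (discovery -> selection -> organization); all names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Paper:
    title: str
    citations: int = 0  # stand-in for whatever impact signal the benchmark uses


@dataclass
class MindMapNode:
    label: str
    children: list["MindMapNode"] = field(default_factory=list)


def discover(topic: str, corpus: list[Paper]) -> list[Paper]:
    """Stage 1, information discovery: locate papers relevant to the topic.
    Keyword overlap with the topic string is an assumed relevance test."""
    terms = set(topic.lower().split())
    return [p for p in corpus if terms & set(p.title.lower().split())]


def select(papers: list[Paper]) -> list[Paper]:
    """Stage 2, information selection: rank located papers by impact,
    here approximated by citation count."""
    return sorted(papers, key=lambda p: p.citations, reverse=True)


def organize(topic: str, papers: list[Paper]) -> MindMapNode:
    """Stage 3, information organization: arrange papers into a hierarchical
    knowledge mind-map; here a trivial root-plus-leaves tree."""
    root = MindMapNode(topic)
    root.children = [MindMapNode(p.title) for p in papers]
    return root


if __name__ == "__main__":
    corpus = [
        Paper("A survey of retrieval methods", citations=120),
        Paper("Neural ranking models", citations=80),
        Paper("Unrelated botany study", citations=5),
    ]
    topic = "retrieval survey"
    mind_map = organize(topic, select(discover(topic, corpus)))
    print(mind_map.label, "->", [c.label for c in mind_map.children])
```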
doi_str_mv 10.48550/arxiv.2406.10291
format Article
identifier DOI: 10.48550/arxiv.2406.10291
language eng
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Information Retrieval
title ResearchArena: Benchmarking LLMs' Ability to Collect and Organize Information as Research Agents
url https://arxiv.org/abs/2406.10291