ResearchArena: Benchmarking LLMs' Ability to Collect and Organize Information as Research Agents
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Large language models (LLMs) have exhibited remarkable performance across various tasks in natural language processing. Nevertheless, challenges still arise when these tasks demand domain-specific expertise and advanced analytical skills, such as conducting research surveys on a designated topic. In this research, we develop ResearchArena, a benchmark that measures LLM agents' ability to conduct academic surveys, an initial step of the academic research process. Specifically, we deconstruct the surveying process into three stages: 1) information discovery: locating relevant papers; 2) information selection: assessing papers' importance to the topic; and 3) information organization: organizing papers into meaningful structures. In particular, we establish an offline environment comprising 12.0M full-text academic papers and 7.9K survey papers, which evaluates agents' ability to locate supporting materials for composing a survey on a topic, rank the located papers by their impact, and organize them into a hierarchical knowledge mind-map. With this benchmark, we conduct preliminary evaluations of existing techniques and find that all LLM-based methods under-perform basic keyword-based retrieval techniques, highlighting substantial opportunities for future research.
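
The abstract's headline finding is that simple keyword-based retrieval beats LLM-based agents on the information-discovery stage. As illustration only, the sketch below shows what such a keyword baseline could look like; BM25 is an assumed choice (the abstract does not name the specific technique), and the corpus and query are placeholder data, not the ResearchArena environment.

```python
# Minimal sketch of a keyword-based retrieval baseline of the kind the
# abstract reports outperforming LLM-based methods on information discovery.
# BM25 is an assumed choice; the paper's actual baseline may differ.
# Requires: pip install rank-bm25
from rank_bm25 import BM25Okapi

# Placeholder corpus standing in for the 12.0M full-text papers.
papers = [
    "graph neural networks for molecular property prediction",
    "large language models as zero-shot rankers for retrieval",
    "a survey of retrieval augmented generation techniques",
]

# Whitespace tokenization keeps the sketch self-contained; a real baseline
# would likely add stemming and stopword removal.
tokenized_corpus = [p.lower().split() for p in papers]
bm25 = BM25Okapi(tokenized_corpus)

query = "retrieval augmented generation survey"
top = bm25.get_top_n(query.lower().split(), papers, n=2)
print(top)  # papers ranked by BM25 keyword overlap with the topic
```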
DOI: 10.48550/arxiv.2406.10291