HammerBench: Fine-Grained Function-Calling Evaluation in Real Mobile Device Scenarios
Format: Article
Language: English
Abstract: Evaluating the capabilities of large language models (LLMs) in human-LLM
interactions remains challenging due to the inherent complexity and openness of
dialogue processes. This paper introduces HammerBench, a novel benchmarking
framework designed to assess the function-calling ability of LLMs more
effectively in such interactions. We model a wide range of real-world user
scenarios on mobile devices, encompassing imperfect instructions, diverse
question-answer trajectories, intent/argument shifts, and the use of external
individual information through pronouns. To construct the corresponding
datasets, we propose a comprehensive pipeline that involves LLM-generated data
and multiple rounds of human validation, ensuring high data quality.
Additionally, we decompose the conversations into function-calling snapshots,
enabling a fine-grained evaluation of each turn. We evaluate several popular
LLMs using HammerBench and highlight different performance aspects. Our
empirical findings reveal that errors in parameter naming constitute the
primary factor behind conversation failures across different data types.
DOI: 10.48550/arxiv.2412.16516
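
The per-turn, snapshot-based evaluation described in the abstract can be pictured with a small sketch. The Python below is a minimal illustration under assumed data structures; FunctionCall and score_snapshot are hypothetical names, not the benchmark's actual schema or API. Each turn's predicted call is compared against a gold snapshot, with parameter-name mismatches tracked separately, since the abstract reports parameter-naming errors as the primary cause of conversation failures.

```python
# Minimal, hypothetical sketch of per-turn "snapshot" scoring, assuming each
# snapshot pairs one dialogue turn with a gold function call (name + arguments).
# Names and fields below are illustrative, not HammerBench's actual schema.
from dataclasses import dataclass, field


@dataclass
class FunctionCall:
    name: str                                       # e.g. "set_alarm"
    arguments: dict = field(default_factory=dict)   # e.g. {"time": "07:30"}


def score_snapshot(predicted: FunctionCall, gold: FunctionCall) -> dict:
    """Compare one predicted call against the gold call for a single turn."""
    name_ok = predicted.name == gold.name
    # Argument-name errors (e.g. "alarm_time" vs. "time") show up here as
    # missing gold keys; they are reported separately from wrong values.
    missing_keys = set(gold.arguments) - set(predicted.arguments)
    wrong_values = {
        k for k in gold.arguments
        if k in predicted.arguments and predicted.arguments[k] != gold.arguments[k]
    }
    return {
        "function_name_correct": name_ok,
        "missing_or_misnamed_args": sorted(missing_keys),
        "wrong_arg_values": sorted(wrong_values),
        "turn_correct": name_ok and not missing_keys and not wrong_values,
    }
```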