HumanRankEval: Automatic Evaluation of LMs as Conversational Assistants
Saved in:

Main authors: , ,
Format: Article
Language: eng
Keywords:
Online access: Order full text

Summary: Language models (LMs) as conversational assistants recently became popular
tools that help people accomplish a variety of tasks. These typically result
from adapting LMs pretrained on general domain text sequences through further
instruction-tuning and possibly preference optimisation methods. The evaluation
of such LMs would ideally be performed using human judgement; however, this is
not scalable. On the other hand, automatic evaluation featuring auxiliary LMs
as judges and/or knowledge-based tasks is scalable but struggles with assessing
conversational ability and adherence to instructions. To help accelerate the
development of LMs as conversational assistants, we propose a novel automatic
evaluation task: HumanRankEval (HRE). It consists of a large-scale, diverse and
high-quality set of questions, each with several answers authored and scored by
humans. To perform evaluation, HRE ranks these answers based on their
log-likelihood under the LM's distribution, and subsequently calculates their
correlation with the corresponding human rankings. We support HRE's efficacy by
investigating how efficiently it separates pretrained and instruction-tuned LMs
of various sizes. We show that HRE correlates well with human judgements and is
particularly responsive to model changes following instruction-tuning.

DOI: 10.48550/arxiv.2405.09186
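
The summary above outlines HRE's scoring procedure at a high level: score each human-written answer by its log-likelihood under the LM, then correlate the resulting ranking with the human ranking of the same answers. The sketch below illustrates that idea for a single question. It is not the authors' released implementation; the model name, helper functions, answer-boundary handling, and the choice of Spearman's rank correlation are illustrative assumptions.

```python
# Minimal sketch of the evaluation idea described in the summary above:
# rank candidate answers by their log-likelihood under a causal LM and
# correlate those scores with human votes. Not the authors' released code;
# the model, helpers and Spearman correlation are assumptions.
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer


def answer_log_likelihood(model, tokenizer, question: str, answer: str) -> float:
    """Sum of token log-probabilities of `answer` conditioned on `question`."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of every token given its preceding context.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_ll = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only positions belonging to the answer continuation
    # (approximate boundary; re-tokenisation may shift it slightly).
    answer_start = prompt_ids.shape[1] - 1
    return token_ll[:, answer_start:].sum().item()


def question_score(model, tokenizer, question: str, answers: list[str],
                   human_scores: list[float]) -> float:
    """Correlation between the LM's answer log-likelihoods and human scores."""
    lm_scores = [answer_log_likelihood(model, tokenizer, question, a)
                 for a in answers]
    corr, _ = spearmanr(lm_scores, human_scores)
    return corr


if __name__ == "__main__":
    name = "gpt2"  # placeholder model, purely for illustration
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name).eval()

    question = "How do I reverse a list in Python?"
    answers = [
        "Use my_list[::-1] or the built-in reversed() function.",
        "Call my_list.reverse() to reverse it in place.",
        "You cannot reverse a list in Python.",
    ]
    human_scores = [5, 3, 0]  # illustrative human votes for each answer
    print(question_score(model, tokenizer, question, answers, human_scores))
```

A dataset-level HRE score would presumably average such per-question correlations over all questions in the benchmark; that aggregation, like the other details here, is an assumption rather than the paper's exact recipe.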