Evaluation of Instruction-Following Ability for Large Language Models on Story-Ending Generation
Main authors: , ,
Format: Article
Language: eng
Abstract: Instruction-tuned Large Language Models (LLMs) have achieved remarkable performance across various benchmark tasks. While providing instructions to LLMs to guide their generations is user-friendly, their instruction-following capabilities remain difficult to assess due to a lack of evaluation metrics. In this paper, we focus on evaluating the instruction-following ability of LLMs in the context of story-ending generation, which requires diverse and context-specific instructions. We propose an automatic evaluation pipeline that utilizes a machine reading comprehension (MRC) model to determine whether the generated story-ending reflects the instruction. Our findings demonstrate that our proposed metric aligns with human evaluation. Furthermore, our experiments confirm that recent open-source LLMs can achieve instruction-following performance close to GPT-3.5, as assessed through automatic evaluation.
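As a rough illustration of the pipeline the abstract describes, the sketch below uses an off-the-shelf extractive QA (MRC) model to check whether a generated story ending reflects an instruction. Everything specific here is an assumption, not the paper's implementation: the model (`deepset/roberta-base-squad2`), the idea of rephrasing the instruction as a question, and the confidence threshold are all illustrative choices.

```python
# Minimal sketch of an MRC-based instruction-following check.
# Assumptions (not from the paper): the QA model, the question
# template, and the score threshold are illustrative only.
from transformers import pipeline

# SQuAD2-style extractive QA models can abstain when the context
# contains no answer, which suits a "does the ending reflect the
# instruction?" check.
mrc = pipeline("question-answering", model="deepset/roberta-base-squad2")

def reflects_instruction(question: str, story_ending: str,
                         threshold: float = 0.5) -> bool:
    """Pose a question derived from the instruction; if the MRC model
    extracts a confident, non-empty answer from the generated ending,
    count the instruction as reflected."""
    result = mrc(question=question, context=story_ending,
                 handle_impossible_answer=True)
    return bool(result["answer"].strip()) and result["score"] >= threshold

# Hypothetical instruction rephrased as a question, plus a model output.
question = "Who does the protagonist reunite with at the end?"
ending = "At the harbor, she finally embraced her long-lost brother."
print(reflects_instruction(question, ending))
```

In practice, the instruction-to-question step matters: extractive QA models answer wh-questions over a passage, so an instruction like "the hero reunites with her brother" would need to be recast as a question whose answer is recoverable from a compliant ending.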
DOI: 10.48550/arxiv.2406.16356