Catwalk: A Unified Language Model Evaluation Framework for Many Datasets
Format: Article
Language: English
Online access: Order full text
Abstract: The success of large language models has shifted the evaluation paradigms in natural language processing (NLP). The community's interest has drifted towards comparing NLP models across many tasks, domains, and datasets, often at an extreme scale. This imposes new engineering challenges: efforts in constructing datasets and models have been fragmented, and their formats and interfaces are incompatible. As a result, it often takes extensive (re)implementation effort to make fair and controlled comparisons at scale.

Catwalk aims to address these issues. Catwalk provides a unified interface to a broad range of existing NLP datasets and models, spanning canonical supervised training and fine-tuning as well as more modern paradigms like in-context learning. Its carefully designed abstractions allow for easy extension to many others. Catwalk substantially lowers the barriers to conducting controlled experiments at scale. For example, we fine-tuned and evaluated over 64 models on over 86 datasets with a single command, without writing any code. Maintained by the AllenNLP team at the Allen Institute for Artificial Intelligence (AI2), Catwalk is an ongoing open-source effort: https://github.com/allenai/catwalk
DOI: 10.48550/arxiv.2312.10253
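The abstract's central claim is an engineering one: once every model and every dataset sits behind one shared interface, evaluating the full cross-product becomes a single sweep rather than a per-pair reimplementation. As a rough illustration of that idea, here is a minimal, self-contained Python sketch of a shared model/task registry. All names in it are hypothetical and are not Catwalk's actual API; see the repository linked above for the real interface.

```python
# Hypothetical sketch of a unified evaluation interface, in the spirit of
# what the abstract describes. None of these names come from Catwalk itself.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Instance:
    """One labeled example; every task is reduced to this common shape."""
    text: str
    label: str


# A "task" is just a name mapped to a list of instances.
TASKS: Dict[str, List[Instance]] = {
    "toy-sentiment": [
        Instance("a delightful film", "pos"),
        Instance("a tedious mess", "neg"),
    ],
}

# A "model" is anything that maps text -> label behind one call signature.
MODELS: Dict[str, Callable[[str], str]] = {
    "always-pos": lambda text: "pos",
    "length-heuristic": lambda text: "pos" if len(text) < 16 else "neg",
}


def evaluate(model_name: str, task_name: str) -> float:
    """Run one (model, task) pair and return accuracy."""
    model, instances = MODELS[model_name], TASKS[task_name]
    correct = sum(model(inst.text) == inst.label for inst in instances)
    return correct / len(instances)


if __name__ == "__main__":
    # Because every model and task shares one interface, sweeping the
    # full cross-product is a two-line loop, not a reimplementation.
    for m in MODELS:
        for t in TASKS:
            print(f"{m} on {t}: accuracy={evaluate(m, t):.2f}")
```

The design point is that adding a new model or task is one registry entry, after which it automatically participates in every existing sweep; Catwalk's abstractions play the same role for real language models and datasets, driven from a command-line entry point as the abstract describes.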