Towards More Robust NLP System Evaluation: Handling Missing Scores in Benchmarks
Format: Article
Language: English
Abstract: The evaluation of natural language processing (NLP) systems is crucial for
advancing the field, but current benchmarking approaches often assume that all
systems have scores available for all tasks, which is not always practical. In
reality, several factors such as the cost of running baselines, private systems,
computational limitations, or incomplete data may prevent some systems from
being evaluated on entire tasks. This paper formalizes an existing problem in
NLP research: benchmarking when some system scores are missing for certain
tasks, and proposes a novel approach to address it. Our method uses a compatible
partial ranking approach to impute missing data, which is then aggregated using
the Borda count method. It includes two refinements designed specifically for
scenarios where either task-level or instance-level scores are available. We
also introduce an extended benchmark containing over 131 million scores,
an order of magnitude larger than existing benchmarks. We validate our methods
and demonstrate their effectiveness in addressing the challenge of missing
system evaluation on an entire task. This work highlights the need for more
comprehensive benchmarking approaches that can handle real-world scenarios
where not all systems are evaluated on every task.
DOI: 10.48550/arxiv.2305.10284
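
To illustrate the aggregation step named in the abstract, the sketch below shows a plain Borda-count aggregation of per-task scores in Python. It is not the paper's implementation: the function name `borda_aggregate`, the input layout, and the naive handling of missing scores (systems absent from a task simply receive no points, rather than being imputed via compatible partial rankings as the paper proposes) are assumptions made for illustration only.

```python
# Minimal sketch (not the paper's method): Borda-count aggregation of
# per-task system scores into a single consensus ranking.
from collections import defaultdict

def borda_aggregate(scores):
    """Aggregate per-task scores into a consensus ranking via Borda count.

    `scores` maps task -> {system: score}. Systems missing from a task
    receive no points for it (a naive stand-in for the imputation step
    the paper proposes).
    """
    points = defaultdict(float)
    for task_scores in scores.values():
        # Rank systems on this task from worst to best score.
        ranked = sorted(task_scores, key=task_scores.get)
        for rank, system in enumerate(ranked):
            points[system] += rank  # worst gets 0, best gets len - 1
    # Higher total Borda score means a better consensus rank.
    return sorted(points, key=points.get, reverse=True)

if __name__ == "__main__":
    toy_scores = {
        "task_A": {"sys1": 0.81, "sys2": 0.78, "sys3": 0.90},
        "task_B": {"sys1": 0.65, "sys3": 0.70},  # sys2 missing here
    }
    print(borda_aggregate(toy_scores))  # -> ['sys3', 'sys1', 'sys2']
```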