A Unified View of Evaluation Metrics for Structured Prediction
Saved in:
Main authors: , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: We present a conceptual framework that unifies a variety of evaluation
metrics for different structured prediction tasks (e.g. event and relation
extraction, syntactic and semantic parsing). Our framework requires
representing the outputs of these tasks as objects of certain data types, and
derives metrics through matching of common substructures, possibly followed by
normalization. We demonstrate how commonly used metrics for a number of tasks
can be succinctly expressed by this framework, and show that new metrics can be
naturally derived in a bottom-up way based on an output structure. We release a
library that enables this derivation to create new metrics. Finally, we
consider how specific characteristics of tasks motivate metric design
decisions, and suggest possible modifications to existing metrics in line with
those motivations.
DOI: 10.48550/arxiv.2310.13793
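The abstract's recipe (represent outputs as typed objects, match common substructures, then normalize) can be illustrated with a minimal sketch. This is not the paper's released library; the function name and triple representation below are assumptions chosen for the example, and the matching here is exact set intersection, the simplest instance of the framework.

```python
# Illustrative sketch (not the paper's actual library): derive a metric
# from an output structure by matching common substructures and
# normalizing the match count into precision, recall, and F1.

def prf1(predicted, gold):
    """Precision/recall/F1 over two sets of substructures."""
    predicted, gold = set(predicted), set(gold)
    matched = len(predicted & gold)  # count of common substructures
    p = matched / len(predicted) if predicted else 0.0
    r = matched / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Example: relation-extraction outputs as (head, relation, tail) triples.
pred = {("Alice", "works_for", "Acme"), ("Bob", "works_for", "Acme")}
gold = {("Alice", "works_for", "Acme"), ("Bob", "lives_in", "Paris")}
p, r, f1 = prf1(pred, gold)  # one triple matches: p = r = f1 = 0.5
```

Swapping the substructure type (labeled spans for NER, labeled edges for dependency parsing) changes the metric without changing the normalization step, which is the bottom-up derivation the abstract refers to.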