Scientific Statement Classification over arXiv.org
Saved in:
Main authors: | , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | We introduce a new classification task for scientific statements and release
a large-scale dataset for supervised learning. Our resource is derived from a
machine-readable representation of the arXiv.org collection of preprint
articles. We explore fifty author-annotated categories and empirically motivate
a task design of grouping 10.5 million annotated paragraphs into thirteen
classes. We demonstrate that the task setup aligns with known success rates
from the state of the art, peaking at a 0.91 F1-score via a BiLSTM
encoder-decoder model. Additionally, we introduce a lexeme serialization for
mathematical formulas, and observe that context-aware models could improve when
also trained on the symbolic modality. Finally, we discuss the limitations of
both data and task design, and outline potential directions towards
increasingly complex models of scientific discourse, beyond isolated
statements. |
DOI: | 10.48550/arxiv.1908.10993 |
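The abstract mentions a "lexeme serialization for mathematical formulas" as an additional symbolic input modality. As a rough illustration only, not the paper's actual method, the following sketch splits a plain-text formula into lexeme-like tokens (macros, identifiers, numbers, single symbols); the function name and tokenization rules are assumptions for this example.

```python
import re

def lexeme_serialize(formula: str) -> list[str]:
    """Split a plain-text/LaTeX formula into lexeme tokens.

    Tokens, in priority order: TeX macros (\\frac), alphabetic
    identifiers, numbers, then any single non-space symbol.
    Illustrative sketch only; the dataset's serialization may differ.
    """
    token_pattern = re.compile(r"\\[A-Za-z]+|[A-Za-z]+|\d+(?:\.\d+)?|\S")
    return token_pattern.findall(formula)

print(lexeme_serialize(r"E = m c^2"))
# ['E', '=', 'm', 'c', '^', '2']
```

A flat token stream like this can then be fed to a sequence encoder alongside the paragraph text, which is the kind of multi-modal setup the abstract alludes to when it says context-aware models could improve when also trained on the symbolic modality.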