WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia
Format: Article
Language: English
Abstract: Wikipedia can be edited by anyone and thus contains sentences of varying quality. As a result, Wikipedia includes some poor-quality edits, which are often marked up by other editors. While editors' reviews enhance the credibility of Wikipedia, it is hard to check all edited text. Assisting in this process is very important, but no large and comprehensive dataset for studying it currently exists. Here, we propose WikiSQE, the first large-scale dataset for sentence quality estimation in Wikipedia. Each sentence is extracted from the entire revision history of English Wikipedia, and the target quality labels were carefully investigated and selected. WikiSQE contains about 3.4 million sentences with 153 quality labels. In experiments on automatic classification using competitive machine learning models, sentences with citation, syntax/semantics, or proposition problems were found to be more difficult to detect. In addition, human annotation showed that the model we developed performed better than crowdsourced workers. WikiSQE is expected to be a valuable resource for other tasks in NLP.
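As a rough illustration of the classification experiments mentioned in the abstract, the sketch below fine-tunes a generic transformer encoder as a 153-way sentence quality classifier. The file name `wikisqe_train.tsv`, the column layout (a text column `sentence` and an integer `label` column), and the choice of `bert-base-cased` are illustrative assumptions, not the paper's actual data format, model, or training recipe.

```python
# Minimal sketch of multi-class sentence quality classification,
# in the spirit of the experiments described in the abstract.
# Assumptions (not from the paper): training data in a TSV file with
# a "sentence" text column and an integer "label" column, and a
# BERT-base encoder as the classifier backbone.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

NUM_LABELS = 153  # number of quality labels reported in the abstract

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=NUM_LABELS)

# Hypothetical file name; substitute the actual WikiSQE release.
data = load_dataset("csv", data_files="wikisqe_train.tsv", delimiter="\t")

def tokenize(batch):
    # Sentences are short, so a modest max length keeps training cheap.
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="wikisqe-clf",
                           per_device_train_batch_size=32,
                           num_train_epochs=3),
    train_dataset=data["train"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```

With a setup like this, per-label evaluation would surface the pattern the abstract reports: labels tied to citation, syntax/semantics, or proposition problems should show lower detection scores than more surface-level quality labels.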
DOI: 10.48550/arxiv.2305.05928