Development and Evaluation of Quality Control Methods in a Microtask Crowdsourcing Platform

Bibliographic Details
Published in: Transactions of the Japanese Society for Artificial Intelligence, 2014/11/01, Vol. 29(6), pp. 503-515
Main Authors: Ashikawa, Masayuki; Kawamura, Takahiro; Ohsuga, Akihiko
Format: Article
Language: English
Online Access: Full text
Description
Summary: Open crowdsourcing platforms such as Amazon Mechanical Turk provide an attractive solution for processing high-volume tasks at low cost. However, quality control remains a problem of major interest. In this paper, we design a private crowdsourcing system in which we can devise methods for quality control. For quality control, we introduce four worker selection methods, which we call preprocessing filtering, real-time filtering, postprocessing filtering, and guess processing filtering. These methods include a novel approach that uses a collaborative filtering technique in addition to a basic approach based on initial training or gold-standard data. As a use case, we have built a very large dictionary, which is necessary for Large Vocabulary Continuous Speech Recognition and Text-to-Speech. We show how the system yields high-quality results for several difficult tasks, namely word extraction, part-of-speech tagging, and pronunciation prediction, in building a large dictionary.
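The abstract does not spell out how the collaborative filtering step estimates worker quality, so the following Python sketch only illustrates the general idea under assumptions: a sparse matrix of observed worker accuracies on gold-standard task types is factorized so that unobserved worker/task-type accuracies can be predicted and low-quality workers filtered out. All matrix values, variable names, and hyperparameters here are hypothetical, not taken from the paper.

    # Illustrative sketch only; not the authors' published algorithm.
    import numpy as np

    rng = np.random.default_rng(0)

    # Observed worker accuracies on gold-standard tasks:
    # rows = workers, columns = task types, np.nan = unobserved.
    R = np.array([
        [0.9, np.nan, 0.7],
        [np.nan, 0.6, 0.5],
        [0.8, 0.7, np.nan],
    ])

    k = 2                                       # latent factor dimension
    W = rng.normal(0, 0.1, (R.shape[0], k))     # worker factors
    T = rng.normal(0, 0.1, (R.shape[1], k))     # task-type factors
    lr, reg = 0.05, 0.01                        # learning rate, L2 penalty

    # Stochastic gradient descent over the observed entries only.
    for _ in range(2000):
        for i, j in zip(*np.where(~np.isnan(R))):
            err = R[i, j] - W[i] @ T[j]
            W[i] += lr * (err * T[j] - reg * W[i])
            T[j] += lr * (err * W[i] - reg * T[j])

    # Predicted accuracy of every worker on every task type; workers
    # predicted below a threshold could be filtered before assignment.
    pred = W @ T.T
    print(np.round(pred, 2))

In such a scheme, the gold-standard data plays the same role as the rating matrix in recommender systems: a worker never tested on a task type still receives a predicted accuracy from workers with similar observed performance.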
ISSN: 1346-0714 (print); 1346-8030 (online)
DOI: 10.1527/tjsai.29.503