Adaptive Result Inference for Collecting Quantitative Data With Crowdsourcing

Bibliographic Details
Published in: IEEE Internet of Things Journal, 2017-10, Vol. 4 (5), p. 1389-1398
Main Authors: Sun, Hailong; Hu, Kefan; Fang, Yili; Song, Yangqiu
Format: Article
Language: English
Online Access: Order full text
Description
Abstract: In quantitative crowdsourcing, workers are asked to provide numerical answers. Unlike categorical crowdsourcing, result aggregation in quantitative crowdsourcing is computed by combining all workers' answers rather than by merely choosing one from a set of candidate answers. Therefore, existing result aggregation models for categorical crowdsourcing tasks cannot be used in quantitative crowdsourcing. Moreover, worker ability often varies during the crowdsourcing process as workers' skill, willingness, effort, etc. change. In this paper, we propose a probabilistic model that characterizes the quantitative crowdsourcing problem while accounting for changes in worker ability, so as to achieve better quality control. The dynamic worker ability is estimated with Kalman filtering and smoothing. We design an expectation-maximization-based inference algorithm and a dynamic worker filtering algorithm to compute the aggregated crowdsourcing result. Finally, we conducted experiments with real data on CrowdFlower, and the results showed that our approach can effectively rule out low-quality workers dynamically and obtain more accurate results at lower cost.
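
As a rough illustration of the two ingredients named in the abstract, the Python sketch below assumes each worker's numerical answer is the true value plus zero-mean Gaussian noise: answers are aggregated with a precision-weighted, EM-style loop, and a worker's time-varying noise level is tracked with a simple one-dimensional Kalman filter. The function names and parameters (em_aggregate, track_worker_noise, q, r) are illustrative placeholders, not the authors' published model or code.

    import numpy as np

    def em_aggregate(answers, n_iters=50, eps=1e-6):
        """EM-style aggregation of quantitative crowd answers.

        answers: (n_workers, n_tasks) array of numerical responses.
        Returns (estimates, worker_vars): per-task aggregated values and
        per-worker noise variances (lower variance ~ higher ability).
        """
        n_workers, n_tasks = answers.shape
        worker_vars = np.ones(n_workers)      # start by treating all workers equally
        estimates = answers.mean(axis=0)      # plain mean as the initial estimate

        for _ in range(n_iters):
            # E-step analogue: precision-weighted average of answers per task
            w = 1.0 / (worker_vars[:, None] + eps)
            new_estimates = (w * answers).sum(axis=0) / w.sum(axis=0)

            # M-step analogue: re-estimate each worker's noise variance
            resid = answers - new_estimates[None, :]
            worker_vars = (resid ** 2).mean(axis=1) + eps

            if np.max(np.abs(new_estimates - estimates)) < eps:
                estimates = new_estimates
                break
            estimates = new_estimates

        return estimates, worker_vars

    def track_worker_noise(squared_errors, q=0.01, r=0.5):
        """1-D Kalman filter over a worker's squared errors (task order = time).

        Models the worker's noise level as a random walk (process variance q)
        observed through noisy squared residuals (observation variance r).
        Returns the filtered noise-level trajectory.
        """
        x, p = squared_errors[0], 1.0         # state estimate and its uncertainty
        track = [x]
        for z in squared_errors[1:]:
            p = p + q                         # predict: ability may drift
            k = p / (p + r)                   # Kalman gain
            x = x + k * (z - x)               # correct with the new observation
            p = (1.0 - k) * p
            track.append(x)
        return np.array(track)

    # Toy usage: 5 workers answer 20 tasks with true value 10.0; worker 4 is noisy.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        truth = np.full(20, 10.0)
        noise_levels = np.array([0.2, 0.3, 0.25, 0.2, 3.0])
        answers = truth + rng.normal(0.0, noise_levels[:, None], size=(5, 20))
        estimates, worker_vars = em_aggregate(answers)
        noisy_worker_track = track_worker_noise((answers[4] - estimates) ** 2)
        print(np.round(estimates[:5], 2), np.round(worker_vars, 2))

In this sketch, a worker whose filtered noise level stays high could be excluded from subsequent rounds, which is the spirit of the dynamic worker filtering described in the abstract; the paper itself specifies the actual model and algorithms.
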
ISSN: 2327-4662
DOI: 10.1109/JIOT.2017.2673958