Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem

Detailed Description

Very recently, crowdsourcing has become the de facto platform for distributing and collecting human computation for a wide range of tasks and applications such as information retrieval, natural language processing, and machine learning. Current crowdsourcing platforms have some limitations in the area of quality control: most of the effort to ensure good quality falls on the experimenter, who has to manage the number of workers needed to reach good results. We propose a simple model for adaptive quality control in crowdsourced multiple-choice tasks, which we call the bandit survey problem. This model is related to, but technically different from, the well-known multi-armed bandit problem. We present several algorithms for this problem and support them with analysis and simulations. Our approach is based on our experience conducting relevance evaluation for a large commercial search engine.
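The abstract only gestures at what "adaptive quality control" means in practice, so here is a minimal, hypothetical sketch (not one of the paper's algorithms): request votes from workers one at a time and stop once the leading answer is sufficiently ahead of the runner-up, rather than paying a fixed number of workers per task. The function name, the `margin` stopping rule, and the simulated worker accuracy of 0.7 are all assumptions invented for illustration.

```python
import random
from collections import Counter

def collect_votes_adaptively(get_vote, max_workers=15, margin=3):
    """Keep asking workers for votes on a multiple-choice task until one
    option leads the runner-up by `margin` votes, or the `max_workers`
    budget is exhausted. Returns (aggregated answer, workers used)."""
    votes = Counter()
    used = 0
    for used in range(1, max_workers + 1):
        votes[get_vote()] += 1
        top_two = votes.most_common(2)
        lead = top_two[0][1] - (top_two[1][1] if len(top_two) > 1 else 0)
        if lead >= margin:
            break
    return votes.most_common(1)[0][0], used

if __name__ == "__main__":
    # Simulated worker pool: answers a 3-option task correctly with
    # probability 0.7, otherwise picks one of the wrong options at random.
    TRUE_ANSWER = 0
    def noisy_worker():
        return TRUE_ANSWER if random.random() < 0.7 else random.choice([1, 2])

    answer, workers_used = collect_votes_adaptively(noisy_worker)
    print(f"aggregated answer: {answer}, workers used: {workers_used}")
```

A real system would presumably also have to estimate and act on per-worker reliability, which is closer to the adaptive, bandit-style behavior the abstract describes; the fixed-margin rule above is only a baseline for comparison.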

Bibliographic Details
Published in: arXiv.org, 2013-05
Main Authors: Abraham, Ittai; Alonso, Omar; Kandylas, Vasilis; Slivkins, Aleksandrs
Format: Article
Language: English (eng)
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Rights: 2013. Published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Subjects: Adaptive algorithms; Adaptive control; Algorithms; Computer simulation; Crowdsourcing; Information retrieval; Machine learning; Natural language processing; Quality control; Search engines
Online Access: Full text