Optimal Clustering from Noisy Binary Feedback

We study the problem of clustering a set of items from binary user feedback. Such a problem arises in crowdsourcing platforms solving large-scale labeling tasks with minimal effort put on the users. For example, in some of the recent reCAPTCHA systems, users' clicks (binary answers) can be used to efficiently label images. In our inference problem, items are grouped into initially unknown non-overlapping clusters. To recover these clusters, the learner sequentially presents to users a finite list of items together with a question with a binary answer selected from a fixed finite set. For each of these items, the user provides a noisy answer whose expectation is determined by the item cluster and the question, and by an item-specific parameter characterizing the hardness of classifying the item. The objective is to devise an algorithm with a minimal cluster recovery error rate. We derive problem-specific information-theoretical lower bounds on the error rate satisfied by any algorithm, for both uniform and adaptive (list, question) selection strategies. For uniform selection, we present a simple algorithm built upon the K-means algorithm whose performance almost matches the fundamental limits. For adaptive selection, we develop an adaptive algorithm that is inspired by the derivation of the information-theoretical error lower bounds, and in turn allocates the budget in an efficient way. The algorithm learns to select items that are hard to cluster and relevant questions more often. We compare the performance of our algorithms with and without the adaptive selection strategy numerically and illustrate the gain achieved by being adaptive.
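The model in the abstract (noisy binary answers whose expectation depends on the item's cluster, the question asked, and an item-specific hardness) and cluster recovery via K-means under uniform selection can be sketched on synthetic data. This is a minimal illustration, not the paper's algorithm; all parameter values and the farthest-point initialization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

K, Q, n, T = 2, 5, 40, 200   # clusters, questions, items, answers per (item, question)

# Assumed cluster-question answer probabilities, well separated for the sketch.
p = np.array([[0.9, 0.1, 0.9, 0.1, 0.9],
              [0.1, 0.9, 0.1, 0.9, 0.1]])

true_cluster = np.arange(n) % K               # balanced ground-truth clusters
hardness = rng.uniform(0.7, 1.0, size=n)      # 1 = easy; smaller values shrink answers to 1/2

# Expected answer of item i to question q: 1/2 + hardness_i * (p[cluster_i, q] - 1/2)
mu = 0.5 + hardness[:, None] * (p[true_cluster] - 0.5)

# Uniform selection: every (item, question) pair receives T noisy binary answers,
# summarized by the empirical answer frequency.
freq = rng.binomial(T, mu) / T

def kmeans(X, K, iters=50):
    """Plain Lloyd's K-means with deterministic farthest-point initialization."""
    idx = [0]
    for _ in range(K - 1):
        d = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1).min(axis=1)
        idx.append(int(np.argmax(d)))
    centers = X[idx].astype(float).copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for k in range(K):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels

labels = kmeans(freq, K)
```

An adaptive strategy, by contrast, would re-allocate the answer budget as it learns: items whose empirical frequencies sit near the boundary between cluster centroids (hard items) and questions whose answer probabilities differ most across clusters would receive more queries.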

Detailed Description

Bibliographic Details
Published in: arXiv.org 2024-02
Main authors: Ariu, Kaito; Ok, Jungseul; Proutiere, Alexandre; Yun, Se-Young
Format: Article
Language: eng
Subjects:
Online access: Full text
container_title arXiv.org
creator Ariu, Kaito
Jungseul Ok
Proutiere, Alexandre
Yun, Se-Young
description We study the problem of clustering a set of items from binary user feedback. Such a problem arises in crowdsourcing platforms solving large-scale labeling tasks with minimal effort put on the users. For example, in some of the recent reCAPTCHA systems, users' clicks (binary answers) can be used to efficiently label images. In our inference problem, items are grouped into initially unknown non-overlapping clusters. To recover these clusters, the learner sequentially presents to users a finite list of items together with a question with a binary answer selected from a fixed finite set. For each of these items, the user provides a noisy answer whose expectation is determined by the item cluster and the question, and by an item-specific parameter characterizing the hardness of classifying the item. The objective is to devise an algorithm with a minimal cluster recovery error rate. We derive problem-specific information-theoretical lower bounds on the error rate satisfied by any algorithm, for both uniform and adaptive (list, question) selection strategies. For uniform selection, we present a simple algorithm built upon the K-means algorithm whose performance almost matches the fundamental limits. For adaptive selection, we develop an adaptive algorithm that is inspired by the derivation of the information-theoretical error lower bounds, and in turn allocates the budget in an efficient way. The algorithm learns to select items that are hard to cluster and relevant questions more often. We compare the performance of our algorithms with and without the adaptive selection strategy numerically and illustrate the gain achieved by being adaptive.
doi_str_mv 10.48550/arxiv.1910.06002
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-02
issn 2331-8422
language eng
recordid cdi_arxiv_primary_1910_06002
source arXiv.org; Free E-Journals
subjects Adaptive algorithms
Algorithms
Clustering
Computer Science - Learning
Errors
Feedback
Lower bounds
Optimization
Questions
Statistics - Machine Learning
title Optimal Clustering from Noisy Binary Feedback
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T00%3A08%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Optimal%20Clustering%20from%20Noisy%20Binary%20Feedback&rft.jtitle=arXiv.org&rft.au=Ariu,%20Kaito&rft.date=2024-02-05&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.1910.06002&rft_dat=%3Cproquest_arxiv%3E2305673912%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2305673912&rft_id=info:pmid/&rfr_iscdi=true