Explainable $k$-Means and $k$-Medians Clustering

Clustering is a popular form of unsupervised learning for geometric data. Unfortunately, many clustering algorithms lead to cluster assignments that are hard to explain, partially because they depend on all the features of the data in a complicated way. To improve interpretability, we consider using a small decision tree to partition a data set into clusters, so that clusters can be characterized in a straightforward manner. We study this problem from a theoretical viewpoint, measuring cluster quality by the $k$-means and $k$-medians objectives: Must there exist a tree-induced clustering whose cost is comparable to that of the best unconstrained clustering, and if so, how can it be found? In terms of negative results, we show, first, that popular top-down decision tree algorithms may lead to clusterings with arbitrarily large cost, and second, that any tree-induced clustering must in general incur an $\Omega(\log k)$ approximation factor compared to the optimal clustering. On the positive side, we design an efficient algorithm that produces explainable clusters using a tree with $k$ leaves. For two means/medians, we show that a single threshold cut suffices to achieve a constant factor approximation, and we give nearly-matching lower bounds. For general $k \geq 2$, our algorithm is an $O(k)$ approximation to the optimal $k$-medians and an $O(k^2)$ approximation to the optimal $k$-means. Prior to our work, no algorithms were known with provable guarantees independent of dimension and input size.
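
As a concrete illustration of the $k = 2$ case in the abstract, the sketch below (not the authors' implementation; the function name and exhaustive search are ours) scans every axis-aligned threshold cut $x_i \leq t$ and keeps the one minimizing the 2-means cost $\sum_{x \in C_1} \|x - \mu_1\|^2 + \sum_{x \in C_2} \|x - \mu_2\|^2$. The paper's positive result guarantees that the best such single cut is within a constant factor of the optimal unconstrained 2-means cost.

```python
import numpy as np

def best_threshold_cut(X):
    """Return (feature, threshold, cost) for the single axis-aligned cut
    x[feature] <= threshold with the lowest 2-means cost.

    Exhaustive-search sketch for illustration only; the paper proves that
    some cut of this form is a constant-factor approximation for k = 2.
    """
    d = X.shape[1]
    best = (None, None, np.inf)
    for i in range(d):
        vals = np.unique(X[:, i])             # sorted unique values of feature i
        for t in (vals[:-1] + vals[1:]) / 2:  # midpoints: both sides nonempty
            left, right = X[X[:, i] <= t], X[X[:, i] > t]
            # 2-means cost of this cut: squared distances to each side's mean.
            cost = ((left - left.mean(axis=0)) ** 2).sum() \
                 + ((right - right.mean(axis=0)) ** 2).sum()
            if cost < best[2]:
                best = (i, t, cost)
    return best

# Toy usage: two well-separated 2-D Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
i, t, cost = best_threshold_cut(X)
print(f"rule: x[{i}] <= {t:.2f}, 2-means cost {cost:.1f}")
```

For general $k$, the algorithm in the paper builds a tree with $k$ leaves by repeated cuts; the sketch above covers only the root split.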

Bibliographic Details
Main authors: Dasgupta, Sanjoy; Frost, Nave; Moshkovitz, Michal; Rashtchian, Cyrus
Format: Article
Language: English
Subjects: Computer Science - Computational Geometry; Computer Science - Data Structures and Algorithms; Computer Science - Learning; Statistics - Machine Learning
DOI: 10.48550/arxiv.2002.12538
Date: 2020-02-27
Source: arXiv.org
Online access: https://arxiv.org/abs/2002.12538