Clustering and Model Selection via Penalized Likelihood for Different-sized Categorical Data Vectors
Format: Article
Language: English
Abstract: In this study, we consider unsupervised clustering of categorical vectors that can be of different sizes using mixture models. We use likelihood maximization to estimate the parameters of the underlying mixture model and a penalization technique to select the number of mixture components. Regardless of the true distribution that generated the data, we show that an explicit penalty, known up to a multiplicative constant, leads to a non-asymptotic oracle inequality with the Kullback-Leibler divergence on both sides of the inequality. This theoretical result is illustrated by a document clustering application. To this end, a novel robust expectation-maximization algorithm is proposed to estimate the mixture parameters that best represent the different topics. Slope heuristics are used to calibrate the penalty and to select the number of clusters.
DOI: 10.48550/arxiv.1709.02294
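
The record carries no code, so the following is a minimal, self-contained sketch of the kind of pipeline the abstract describes, not the authors' implementation: a plain EM algorithm for a mixture of multinomials over bag-of-words documents of varying length, and model selection by a penalized log-likelihood whose dimension-based penalty is known only up to the multiplicative constant `kappa` (fixed by hand here, where the paper's slope heuristics would calibrate it from the data). The names `em_multinomial_mixture` and `select_n_components` and the toy corpus are illustrative assumptions, and the smoothing term is a crude stand-in for the paper's robust EM variant.

```python
import numpy as np
from scipy.special import logsumexp


def em_multinomial_mixture(X, K, n_iter=200, smooth=1e-3, seed=0):
    """Fit a K-component mixture of multinomials to count vectors X (n, V)."""
    rng = np.random.default_rng(seed)
    n, V = X.shape
    log_pi = np.full(K, -np.log(K))                    # uniform mixing weights
    theta = rng.dirichlet(np.ones(V), size=K)          # (K, V) topic-word probs
    for _ in range(n_iter):
        # E-step: per-document, per-component log joint; the multinomial
        # coefficient is constant across components, so it cancels here.
        log_r = log_pi[None, :] + X @ np.log(theta).T  # (n, K)
        r = np.exp(log_r - logsumexp(log_r, axis=1, keepdims=True))
        # M-step: smoothed counts keep theta off the simplex boundary
        # (a simple proxy for the paper's robustness device).
        log_pi = np.log(r.mean(axis=0))
        counts = r.T @ X + smooth                      # (K, V)
        theta = counts / counts.sum(axis=1, keepdims=True)
    log_r = log_pi[None, :] + X @ np.log(theta).T
    return np.exp(log_pi), theta, logsumexp(log_r, axis=1).sum()


def select_n_components(X, K_max, kappa):
    """Minimize -loglik + kappa * dim(K); the penalty is known only up to
    the constant kappa, which slope heuristics would calibrate in practice."""
    V = X.shape[1]
    crits = {}
    for K in range(1, K_max + 1):
        _, _, ll = em_multinomial_mixture(X, K)
        dim = (K - 1) + K * (V - 1)                    # free parameters
        crits[K] = -ll + kappa * dim
    return min(crits, key=crits.get)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # toy corpus: two topics, document lengths vary between 20 and 200 words
    topics = np.array([[.50, .30, .10, .05, .05],
                       [.05, .05, .10, .30, .50]])
    X = np.array([rng.multinomial(rng.integers(20, 200), topics[i % 2])
                  for i in range(100)])
    print("selected number of clusters:", select_n_components(X, 5, kappa=1.0))
```

Because the multinomial coefficient is identical across components and across candidate values of K, dropping it changes neither the responsibilities nor the model comparison; only the choice of `kappa` governs the trade-off between fit and complexity, which is exactly the constant the slope heuristics are meant to recover.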