Optimal Kullback–Leibler aggregation in mixture density estimation by maximum likelihood
Published in: Mathematical Statistics and Learning (Online), 2018-04, Vol. 1 (1), pp. 1–35
Main authors: ,
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: We study the maximum likelihood estimator of the density of n independent observations, under the assumption that it is well approximated by a mixture with a large number of components. The main focus is on statistical properties with respect to the Kullback–Leibler loss. We establish risk bounds taking the form of sharp oracle inequalities, both in deviation and in expectation. A simple consequence of these bounds is that the maximum likelihood estimator attains the optimal rate ((\log K)/n)^{1/2}, up to a possible logarithmic correction, in the problem of convex aggregation when the number K of components is larger than n^{1/2}. More importantly, under the additional assumption that the Gram matrix of the components satisfies the compatibility condition, the obtained oracle inequalities yield the optimal rate in the sparsity scenario. That is, if the weight vector is (nearly) D-sparse, we get the rate (D \log K)/n. As a natural complement to our oracle inequalities, we introduce the notion of nearly-D-sparse aggregation and establish matching lower bounds for this type of aggregation.
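Two illustrations may help readers of the record; both are sketches added here, not material from the paper itself. First, a sharp oracle inequality in this setting has the generic shape below, with leading constant 1 in front of the oracle term (the exact remainder Δ_{n,K} in the paper may differ):

```latex
% Schematic form of a sharp oracle inequality for the KL risk of the
% maximum likelihood aggregate \hat p_n (illustrative, not the paper's
% exact statement). \Lambda^K denotes the probability simplex in R^K.
\[
  \mathrm{KL}\bigl(p^*, \hat p_n\bigr)
    \;\le\; \min_{\theta \in \Lambda^K}
      \mathrm{KL}\Bigl(p^*,\ \textstyle\sum_{k=1}^{K} \theta_k f_k\Bigr)
    \;+\; \Delta_{n,K},
\]
% where \Delta_{n,K} is of order ((\log K)/n)^{1/2} in the convex
% aggregation regime and (D \log K)/n when the oracle weight vector is
% (nearly) D-sparse.
```

Second, the estimator analyzed here is the maximizer of the log-likelihood over mixture weights on the simplex, with the K component densities held fixed. A minimal sketch of how one could compute it, using the standard EM fixed-point update for mixture weights (the matrix F and the function name are illustrative assumptions, not the authors' code):

```python
# Illustrative sketch (not the paper's code): the maximum likelihood
# aggregate over the simplex for fixed mixture components.
# F[i, k] = f_k(X_i): the K candidate densities evaluated at the n data
# points. The EM fixed point below is the standard weight update when
# the components themselves are held fixed.
import numpy as np

def mle_mixture_weights(F, n_iter=500, tol=1e-10):
    """Maximize (1/n) * sum_i log(sum_k theta[k] * F[i, k]) over the simplex."""
    n, K = F.shape
    theta = np.full(K, 1.0 / K)           # uniform starting point
    for _ in range(n_iter):
        mix = F @ theta                    # mixture density at each X_i
        # Averaged E-step responsibilities give the M-step weight update.
        new_theta = theta * (F / mix[:, None]).mean(axis=0)
        if np.abs(new_theta - theta).sum() < tol:
            return new_theta
        theta = new_theta
    return theta

# Toy usage: aggregate K = 9 Gaussian densities with different means.
rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, size=200)
means = np.linspace(-2.0, 2.0, 9)
F = np.exp(-0.5 * (X[:, None] - means[None, :]) ** 2) / np.sqrt(2 * np.pi)
print(mle_mixture_weights(F).round(3))
```

The update keeps the weights on the simplex at every step (they stay nonnegative and sum to one) and monotonically increases the likelihood, which is concave in the weights when the components are fixed.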
ISSN: 2520-2316; 2520-2324
DOI: 10.4171/msl/1-1-1