Learning Poisson Binomial Distributions
Published in: Algorithmica 2015-05, Vol. 72 (1), pp. 316-357
Main authors: Daskalakis, Constantinos; Diakonikolas, Ilias; Servedio, Rocco A.
Format: Article
Language: English
Online access: Full text
Summary: We consider a basic problem in unsupervised learning: learning an unknown Poisson binomial distribution. A Poisson binomial distribution (PBD) over $\{0, 1, \ldots, n\}$ is the distribution of a sum of $n$ independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by Poisson (Recherches sur la Probabilité des jugements en matière criminelle et en matière civile. Bachelier, Paris, 1837) and are a natural $n$-parameter generalization of the familiar Binomial Distribution. Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal. We essentially settle the complexity of the learning problem for this basic class of distributions. As our first main result we give a highly efficient algorithm which learns to $\epsilon$-accuracy (with respect to the total variation distance) using $\tilde{O}(1/\epsilon^3)$ samples, *independent of* $n$. The running time of the algorithm is *quasilinear* in the size of its input data, i.e., $\tilde{O}(\log(n)/\epsilon^3)$ bit-operations (we write $\tilde{O}(\cdot)$ to hide factors which are polylogarithmic in the argument to $\tilde{O}(\cdot)$; thus, for example, $\tilde{O}(a \log b)$ denotes a quantity which is $O(a \log b \cdot \log^c(a \log b))$ for some absolute constant $c$. Observe that each draw from the distribution is a $\log(n)$-bit string). Our second main result is a *proper* learning algorithm that learns to $\epsilon$-accuracy using $\tilde{O}(1/\epsilon^2)$ samples, and runs in time $(1/\epsilon)^{\mathrm{poly}(\log(1/\epsilon))} \cdot \log n$. This sample complexity is nearly optimal, since any algorithm for this problem must use $\Omega(1/\epsilon^2)$ samples. We also give positive and negative results for some extensions of this learning problem to weighted sums of independent Bernoulli random variables.
ISSN: 0178-4617, 1432-0541
DOI: 10.1007/s00453-015-9971-3
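As a concrete illustration of the objects named in the abstract, here is a minimal Python sketch (not the paper's algorithm; all function names are illustrative) that builds a PBD from arbitrary Bernoulli parameters $p_1, \ldots, p_n$, computes its exact pmf by sequential convolution, draws samples, and measures the total variation distance $d_{\mathrm{TV}}(P, Q) = \frac{1}{2} \sum_{k=0}^{n} |P(k) - Q(k)|$ of the naive empirical histogram. Note that this naive estimator needs a number of samples that grows with $n$, in contrast to the paper's $\tilde{O}(1/\epsilon^3)$ and $\tilde{O}(1/\epsilon^2)$ bounds, which are independent of $n$.

```python
import numpy as np

def pbd_pmf(ps):
    """Exact pmf of the PBD over {0, ..., n} with Bernoulli parameters ps,
    computed by convolving in one Bernoulli at a time (O(n^2) time)."""
    pmf = np.array([1.0])
    for p in ps:
        nxt = np.zeros(len(pmf) + 1)
        nxt[:-1] += pmf * (1.0 - p)  # the new Bernoulli comes up 0
        nxt[1:] += pmf * p           # the new Bernoulli comes up 1
        pmf = nxt
    return pmf

def sample_pbd(ps, m, rng):
    """Draw m samples; each is the sum of n independent Bernoullis."""
    return (rng.random((m, len(ps))) < ps).sum(axis=1)

def tv_distance(p, q):
    """Total variation distance between two pmfs on a common support."""
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(0)
ps = rng.random(50)  # arbitrary, potentially non-equal expectations
true_pmf = pbd_pmf(ps)

samples = sample_pbd(ps, m=100_000, rng=rng)
emp_pmf = np.bincount(samples, minlength=len(ps) + 1) / len(samples)
print(f"d_TV(empirical, true) = {tv_distance(emp_pmf, true_pmf):.4f}")
```

A learner is $\epsilon$-accurate in the abstract's sense exactly when the printed distance between its hypothesis and the true pmf is at most $\epsilon$; the paper's contribution is achieving this from a number of samples that does not depend on $n$.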