On learning k-parities with and without noise
Saved in:
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We first consider the problem of learning $k$-parities in the on-line mistake-bound model: given a hidden vector $x \in \{0,1\}^n$ with $|x| = k$ and a sequence of "questions" $a_1, a_2, \ldots \in \{0,1\}^n$, to each of which the algorithm must reply with $\langle a_i, x \rangle \pmod 2$, what is the best tradeoff between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al. by an $\exp(k)$ factor in the time complexity.
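This tradeoff has two classical endpoints that the result above interpolates between. As orientation, a minimal sketch follows (illustrative only; the class and names below are mine, not the paper's algorithm): Gaussian elimination over GF(2) runs in poly(n) time per question and makes at most $n$ mistakes, since the answer is forced whenever a question lies in the span of earlier ones and every mistake adds an independent equation. The other endpoint, the halving algorithm over all ${n \choose k}$ candidate parities, makes only $O(k \log n)$ mistakes but spends ${n \choose k}$-type time.

```python
import random

class OnlineParityLearner:
    """Noiseless poly-time baseline for the mistake-bound model: keep a
    row-reduced basis over GF(2) of the constraints <a, x> = b seen so far.
    Vectors are packed into Python ints, one bit per coordinate."""

    def __init__(self, n):
        self.n = n
        self.pivots = {}  # pivot bit -> (row, label); pivot = row's highest set bit

    def _reduce(self, a, b):
        # Eliminate pivot bits from high to low; each XOR only touches
        # lower bits, so a single pass fully reduces (a, b).
        for p in sorted(self.pivots, reverse=True):
            if (a >> p) & 1:
                row, lab = self.pivots[p]
                a ^= row
                b ^= lab
        return a, b

    def predict(self, a):
        a_red, forced = self._reduce(a, 0)
        # If a lies in the span of past questions, the answer is determined.
        # Otherwise guess 0, which may cost one of the <= n mistakes.
        return forced if a_red == 0 else 0

    def update(self, a, true_label):
        # Absorb the feedback <a, x> = true_label as a linear constraint.
        a_red, b_red = self._reduce(a, true_label)
        if a_red:  # independent equation: the rank, hence mistake count, stays <= n
            self.pivots[a_red.bit_length() - 1] = (a_red, b_red)

rng = random.Random(1)
n, k = 40, 5
x = sum(1 << i for i in rng.sample(range(n), k))   # hidden k-sparse parity
learner, mistakes = OnlineParityLearner(n), 0
for _ in range(300):
    a = rng.getrandbits(n)
    truth = bin(a & x).count("1") & 1              # <a_i, x> mod 2
    mistakes += learner.predict(a) != truth
    learner.update(a, truth)
print("mistakes:", mistakes)                       # never exceeds n
```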
Second, we consider the problem of learning $k$-parities in the presence of classification noise of rate $\eta \in (0, 1/2)$. A polynomial-time algorithm for this problem (when $\eta > 0$ and $k = \omega(1)$) is a longstanding challenge in learning theory. Grigorescu et al. gave an algorithm running in time ${n \choose k/2}^{1 + 4\eta^2 + o(1)}$. Note that this algorithm inherently requires time ${n \choose k/2}$ even when the noise rate $\eta$ is polynomially small. We observe that for sufficiently small noise rate, it is possible to break the ${n \choose k/2}$ barrier. In particular, if for some function $f(n) = \omega(1)$ and $\alpha \in [1/2, 1)$, $k = n/f(n)$ and $\eta = o(f(n)^{-\alpha}/\log n)$, then there is an algorithm for the problem with running time $poly(n) \cdot {n \choose k}^{1-\alpha} \cdot e^{-k/4.01}$.
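To make the ${n \choose k}$ yardstick in these bounds concrete, here is a hedged sketch of the trivial exhaustive baseline under classification noise (illustrative only; sample_noisy_parity and brute_force_k_parity are hypothetical names, and this is neither the algorithm of Grigorescu et al. nor the paper's): score every size-$k$ support against $m$ noisy examples and keep the minimizer, at a cost of roughly ${n \choose k} \cdot m$ time. The bounds quoted above are measured against search spaces of exactly this shape.

```python
import itertools
import random

def sample_noisy_parity(x_bits, n, m, eta, rng):
    """m uniform questions a with labels <a, x> mod 2, each flipped w.p. eta."""
    out = []
    for _ in range(m):
        a = rng.getrandbits(n)
        label = bin(a & x_bits).count("1") & 1
        if rng.random() < eta:      # classification noise of rate eta
            label ^= 1
        out.append((a, label))
    return out

def brute_force_k_parity(samples, n, k):
    """Exhaustive baseline: try all (n choose k) supports and return the one
    with the fewest disagreements. Time ~ (n choose k) * len(samples)."""
    def disagreements(cand):
        return sum((bin(a & cand).count("1") & 1) != b for a, b in samples)
    candidates = (sum(1 << i for i in s)
                  for s in itertools.combinations(range(n), k))
    return min(candidates, key=disagreements)

rng = random.Random(0)
n, k, eta, m = 16, 3, 0.1, 400
x = sum(1 << i for i in rng.sample(range(n), k))
samples = sample_noisy_parity(x, n, m, eta, rng)
print(brute_force_k_parity(samples, n, k) == x)   # True w.h.p. once m >> k log n
```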
DOI: 10.48550/arxiv.1502.05375