Algorithms for Sparse LPN and LSPN Against Low-noise

Bibliographic Details
Published in: arXiv.org 2024-12
Main Authors: Chen, Xue; Shu, Wenxuan; Zhou, Zhaienhe
Format: Article
Language: English
Online Access: Full text
Description
Summary: We study learning and distinguishing algorithms for two sparse variants of the classical learning parity with noise (LPN) problem. We provide a new algorithmic framework for the sparse variants that improves the state of the art for a wide range of parameters. Unlike previous approaches, this framework has a simple structure whose first step is a domain reduction that exploits the sparsity. Let \(n\) be the dimension, \(k\) be the sparsity parameter, and \(\eta\) be the noise rate, so that each label gets flipped with probability \(\eta\).

The learning sparse parity with noise (LSPN) problem assumes the hidden parity is \(k\)-sparse. LSPN has been extensively studied in both learning theory and cryptography. However, the state of the art needs \({n \choose k/2} = \Omega(n/k)^{k/2}\) time for a wide range of parameters, while the simple enumeration algorithm takes \({n \choose k}=O(n/k)^k\) time. Our LSPN algorithm runs in time \(O(\eta \cdot n/k)^k\) for any \(\eta\) and \(k\).

The sparse LPN problem has wide applications in cryptography. For \(m=n^{1+(\frac{k}{2}-1)(1-\delta)}\) samples with \(\delta\in (0,1)\), the best known algorithm has running time \(\min\{e^{\eta n}, e^{\tilde{O}(n^{\delta})}\}\) for a wide range of parameters (except for \(\eta < n^{-(1+\delta)/2}\)). We present a distinguishing algorithm for sparse LPN with time complexity \(e^{O(\eta\cdot n^{\frac{1+\delta}{2}})}\) and sample complexity \(m=n^{1+(\frac{k-1}{2})(1-\delta)}\), given \(\eta < \min\{n^{-\frac{1+\delta}{4}},n^{-\frac{1-\delta}{2}}\}\). Furthermore, we show a learning algorithm for sparse LPN with time complexity \(e^{\tilde{O}(\eta\cdot n^{\frac{1+\delta}{2}})}\) and \(m=\max\{1,\frac{\eta\cdot n^{\frac{1+\delta}{2}}}{k^2}\} \cdot \tilde{O}(n)^{1+(\frac{k-1}{2})(1-\delta)}\) samples. Since all these algorithms are based on one algorithmic framework, our conceptual contribution is a connection between sparse LPN and LSPN.
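To make the definitions in the summary concrete, the Python sketch below generates LSPN samples and runs the simple \({n \choose k}\)-time enumeration baseline that the abstract compares against. All function names and parameter values are illustrative assumptions; the paper's own algorithms are not reproduced here.

import itertools
import random

def gen_lspn_samples(n, k, eta, m, seed=0):
    """Draw m LSPN samples: a uniform a in F_2^n with label b = <a, s> + noise,
    where the hidden parity s is k-sparse and each label is flipped
    independently with probability eta."""
    rng = random.Random(seed)
    support = set(rng.sample(range(n), k))      # hidden k-sparse parity
    samples = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        b = sum(a[i] for i in support) % 2      # noiseless parity bit
        if rng.random() < eta:                  # flip with probability eta
            b ^= 1
        samples.append((a, b))
    return support, samples

def enumerate_lspn(samples, n, k):
    """Brute-force baseline: try all C(n, k) supports and return the one
    with the fewest disagreements -- the O(n/k)^k-time benchmark that the
    paper's O(eta * n/k)^k algorithm improves on for small eta."""
    best, best_err = None, None
    for cand in itertools.combinations(range(n), k):
        cand = set(cand)
        err = sum((sum(a[i] for i in cand) % 2) != b for a, b in samples)
        if best_err is None or err < best_err:
            best, best_err = cand, err
    return best

support, samples = gen_lspn_samples(n=30, k=3, eta=0.05, m=400)
print(enumerate_lspn(samples, n=30, k=3) == support)  # True w.h.p.

For intuition on the improvement, and ignoring constant factors, the new \(O(\eta \cdot n/k)^k\) bound beats the \({n \choose k/2} = \Omega(n/k)^{k/2}\) benchmark whenever \(\eta \cdot n/k \le (n/k)^{1/2}\), i.e. whenever \(\eta \le \sqrt{k/n}\), which is one concrete sense in which the framework targets the low-noise regime.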
ISSN: 2331-8422