Worst-Case to Average-Case Reductions via Additive Combinatorics
Format: Article
Language: English
Abstract: We present a new framework for designing worst-case to average-case reductions. For a large class of problems, it provides an explicit transformation of algorithms running in time $T$ that are only correct on a small (subconstant) fraction of their inputs into algorithms running in time $\widetilde{O}(T)$ that are correct on all inputs.

Using our framework, we obtain such efficient worst-case to average-case reductions for fundamental problems in a variety of computational models, namely: algorithms for matrix multiplication, streaming algorithms for the online matrix-vector multiplication problem, and static data structures for all linear problems as well as for the multivariate polynomial evaluation problem.

Our techniques crucially rely on additive combinatorics. In particular, we show a local correction lemma that relies on a new probabilistic version of the quasi-polynomial Bogolyubov-Ruzsa lemma.
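To make the headline claim concrete, the following is a minimal Python sketch of the *classical* random self-reduction for matrix multiplication over $\mathbb{F}_2$, the simple setting where a hypothetical average-case solver `avg_mm` is assumed to be correct on a $1-\delta$ fraction of uniformly random input pairs, with a standard Freivalds test used to detect failures. This is not the paper's construction: the framework in the abstract extends such reductions, via additive combinatorics, to solvers that are correct only on a subconstant fraction of inputs.

```python
# Illustrative sketch only (not the paper's construction): the classical
# random self-reduction for matrix multiplication over F_2, assuming a
# hypothetical average-case solver `avg_mm` correct on a 1 - delta fraction
# of uniformly random input pairs.
import numpy as np


def avg_mm(A, B):
    """Stand-in for the fast but unreliable average-case multiplier."""
    return (A @ B) % 2


def freivalds_check(A, B, C, trials=20):
    """Randomized test that C == A @ B over F_2 (error prob. <= 2^-trials)."""
    k = B.shape[1]
    for _ in range(trials):
        x = np.random.randint(0, 2, size=(k, 1))
        if np.any((A @ ((B @ x) % 2) - C @ x) % 2):
            return False
    return True


def worst_case_mm(A, B):
    """Compute A @ B over F_2 on an arbitrary input via four calls to avg_mm,
    each made on a uniformly distributed input pair, so all four calls
    succeed with probability at least 1 - 4*delta by a union bound."""
    R = np.random.randint(0, 2, size=A.shape)
    S = np.random.randint(0, 2, size=B.shape)
    # Over F_2: A*B = (A+R)(B+S) + (A+R)S + R(B+S) + RS.
    C = (avg_mm((A + R) % 2, (B + S) % 2)
         + avg_mm((A + R) % 2, S)
         + avg_mm(R, (B + S) % 2)
         + avg_mm(R, S)) % 2
    return C if freivalds_check(A, B, C) else None  # in practice, retry on failure
```

Each masked pair passed to `avg_mm` is marginally uniform, which is what lets an average-case guarantee on random inputs be used on a worst-case input; handling solvers with only subconstant success probability requires the paper's local correction lemma and probabilistic Bogolyubov-Ruzsa machinery rather than this simple masking argument.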
DOI: 10.48550/arxiv.2202.08996