Small Width, Low Distortions: Quantized Random Embeddings of Low-Complexity Sets

Bibliographic details
Published in: IEEE Transactions on Information Theory, 2017-09, Vol. 63 (9), pp. 5477-5495
Author: Jacques, Laurent
Format: Article
Language: English
Abstract: Under which conditions, and with which distortions, can we preserve the pairwise distances of low-complexity vectors, e.g., for structured sets such as the set of sparse vectors or that of low-rank matrices, when these are mapped (or embedded) into a finite set of vectors? This work addresses this general question through the specific use of a quantized and dithered random linear mapping, which combines, in the following order, a sub-Gaussian random projection into R^M of vectors in R^N, a random translation, or dither, of the projected vectors, and a uniform scalar quantizer of resolution δ > 0 applied componentwise. Thanks to this quantized mapping, we first show that, with high probability, an embedding of a bounded set K ⊂ R^N into δZ^M can be achieved when distances in the quantized and in the original domains are measured with the ℓ1- and ℓ2-norm, respectively, provided the number of quantized observations M is large compared with the square of the "Gaussian mean width" of K. In this case, we show that the embedding is actually quasi-isometric and suffers only from multiplicative and additive distortions whose magnitudes decrease as M^{-1/5} for general sets, and as M^{-1/2} for structured sets, as M increases. Second, when one is only interested in characterizing the maximal distance separating two elements of K mapped to the same quantized vector, i.e., the "consistency width" of the mapping, we show that, for a similar number of measurements and with high probability, this width decays as M^{-1/4} for general sets and as 1/M for structured ones as M increases. Finally, as an important aspect of this paper, we also establish how the non-Gaussianity of the sub-Gaussian random projections inserted in the quantized mapping (e.g., for Bernoulli random matrices) impacts the class of vectors that can be embedded or whose consistency width provably decays as M increases.
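The quantized mapping described in the abstract has the form A(x) = Q_δ(Φx + ξ), with Φ a sub-Gaussian M×N random matrix, ξ a dither drawn uniformly in [0, δ)^M, and Q_δ a componentwise uniform quantizer. The following minimal Python sketch illustrates this construction and the kind of comparison the embedding result concerns (ℓ1 distance between quantized images versus ℓ2 distance between originals). All variable names and parameter values are illustrative, the Gaussian choice of Φ is one admissible sub-Gaussian instance, and the scaling constant sqrt(2/π) is an assumption for that Gaussian case, not taken from the record above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and quantizer resolution (hypothetical values).
N, M, delta = 512, 2000, 0.5
Phi = rng.standard_normal((M, N))        # Gaussian instance of a sub-Gaussian projection
xi = rng.uniform(0.0, delta, size=M)     # random dither, uniform on [0, delta)

def quantized_map(x):
    """A(x) = Q_delta(Phi x + xi): uniform scalar quantizer applied componentwise."""
    return delta * np.floor((Phi @ x + xi) / delta)

# Two sparse vectors, i.e., elements of a structured low-complexity set K.
K = 10
x, y = np.zeros(N), np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# The embedding compares the (normalized) l1 distance of the quantized images
# with the l2 distance of the originals, up to small multiplicative and
# additive distortions that shrink as M grows.
d_quant = np.linalg.norm(quantized_map(x) - quantized_map(y), 1) / M
d_orig = np.linalg.norm(x - y, 2)
print(d_quant, np.sqrt(2 / np.pi) * d_orig)  # sqrt(2/pi) slope: Gaussian-case assumption
```

For a large M the two printed values should be close, reflecting the quasi-isometric behaviour claimed in the abstract; this sketch is only a numerical illustration, not the paper's analysis.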
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/TIT.2017.2717583