Fundamental Performance Limits for Ideal Decoders in High-Dimensional Linear Inverse Problems

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Information Theory, 2014-12, Vol. 60 (12), p. 7928-7946
Main authors: Bourrier, Anthony; Davies, Mike E.; Peleg, Tomer; Perez, Patrick; Gribonval, Remi
Format: Article
Language: English
Description
Abstract: The primary challenge in linear inverse problems is to design stable and robust decoders to reconstruct high-dimensional vectors from a low-dimensional observation through a linear operator. Sparsity, low-rank, and related assumptions are typically exploited to design decoders, whose performance is then bounded based on some measure of deviation from the idealized model, typically using a norm. This paper focuses on characterizing the fundamental performance limits that can be expected from an ideal decoder given a general model, i.e., a general subset of simple vectors of interest. First, we extend the so-called notion of instance optimality of a decoder to settings where one only wishes to reconstruct some part of the original high-dimensional vector from a low-dimensional observation. This covers practical settings, such as medical imaging of a region of interest, or audio source separation, when one is only interested in estimating the contribution of a specific instrument to a musical recording. We define instance optimality relative to a model much beyond the traditional framework of sparse recovery, and characterize the existence of an instance optimal decoder in terms of joint properties of the model and the considered linear operator. Noiseless and noise-robust settings are both considered. We show, somewhat surprisingly, that the existence of noise-aware instance optimal decoders for all noise levels implies the existence of a noise-blind decoder. A consequence of our results is that for models that are rich enough to contain an orthonormal basis, the existence of an ℓ2/ℓ2 instance optimal decoder is only possible when the linear operator is not substantially dimension-reducing. This covers well-known cases (sparse vectors, low-rank matrices) as well as a number of seemingly new situations (structured sparsity and sparse inverse covariance matrices, for instance).
We exhibit an operator-dependent norm which, under a model-specific generalization of the restricted isometry property, always yields a feasible instance optimality property. This norm can be upper bounded by an atomic norm relative to the considered model.
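As a sketch of the central notion, the standard formulation of instance optimality (in the spirit of the sparse-recovery literature; the symbols below are conventional, not taken verbatim from the paper) reads: given a linear operator $M$ and a model set $\Sigma$, a decoder $\Delta$ is instance optimal with constant $C$ when

\[
\|x - \Delta(Mx)\| \le C \, d(x, \Sigma)
\quad \text{for all } x,
\qquad \text{where } d(x, \Sigma) = \inf_{z \in \Sigma} \|x - z\|,
\]

so that vectors exactly in the model are recovered perfectly and the error otherwise degrades gracefully with the distance to the model. The noise-robust variant bounds $\|x - \Delta(Mx + e)\|$ by $C \, d(x, \Sigma) + C' \|e\|$ for an observation perturbed by noise $e$. The "ℓ2/ℓ2" case in the abstract corresponds to measuring both the error and the distance $d(x, \Sigma)$ in the Euclidean norm.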
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/TIT.2014.2364403