On the Best Choice of Lasso Program Given Data Parameters

Bibliographic Details
Published in: IEEE Transactions on Information Theory, 2022-04, Vol. 68 (4), pp. 2573-2603
Authors: Berk, Aaron; Plan, Yaniv; Yilmaz, Ozgur
Format: Article
Language: English
Description
Abstract: Compressed sensing (CS) is a paradigm in which a structured high-dimensional signal may be recovered from random, under-determined, and corrupted linear measurements. Lasso programs are effective for solving CS problems due to their proven ability to leverage underlying signal structure. Three popular Lasso programs are equivalent in a sense and sometimes used interchangeably. Tuned by a governing parameter, each admits an optimal parameter choice. For sparse or low-rank signal structures, this choice yields minimax order-optimal error. While CS is well-studied, existing theory for Lasso programs typically concerns this optimally tuned setting. However, the optimal parameter value for a Lasso program depends on properties of the data, and is typically unknown in practical settings. Performance in empirical problems thus hinges on a program's parameter sensitivity: it is desirable that small variation about the optimal parameter choice begets small variation about the optimal risk. We examine the risk for these three programs and demonstrate that their parameter sensitivity can differ for the same data. We prove that a gauge-constrained Lasso program admits asymptotic cusp-like behaviour of its risk in the limiting low-noise regime. We prove that a residual-constrained Lasso program has asymptotically suboptimal risk for very sparse vectors. These results contrast with observations about an unconstrained Lasso program, which is relatively less sensitive to its parameter choice. We support the asymptotic theory with numerical simulations, demonstrating that parameter sensitivity of Lasso programs is readily observed for even modest dimensional parameters. Importantly, these simulations demonstrate regimes in which a Lasso program exhibits sensitivity to its parameter choice, though the other two do not. We hope this work aids practitioners in selecting a Lasso program for their problem.
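For context, a sketch of the three Lasso formulations most commonly contrasted in this literature, and presumably the ones meant in the abstract (gauge-constrained, unconstrained, and residual-constrained). The measurement model y = Ax + noise, the l1 norm as the structure-inducing gauge (sparsity), and the normalizations shown are standard conventions assumed here, not necessarily the paper's exact notation:

\begin{align*}
\text{gauge-constrained:}\quad & \min_{x}\ \|y - Ax\|_2 \quad \text{s.t. } \|x\|_1 \le \tau,\\
\text{unconstrained:}\quad & \min_{x}\ \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1,\\
\text{residual-constrained:}\quad & \min_{x}\ \|x\|_1 \quad \text{s.t. } \|y - Ax\|_2 \le \sigma.
\end{align*}

Each program is governed by a single parameter (tau, lambda, or sigma), and the parameter sensitivity discussed in the abstract concerns how the recovery risk degrades as that parameter is perturbed away from its optimal value.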
ISSN: 0018-9448 (print); 1557-9654 (electronic)
DOI: 10.1109/TIT.2021.3138772