Subset Selection with Shrinkage: Sparse Linear Modeling When the SNR Is Low

Detailed Description

Bibliographic Details
Published in: Operations Research, 2023-01, Vol. 71 (1), pp. 129-147
First Author: Mazumder, Rahul
Format: Article
Language: English
Online Access: Full Text
Description
Summary: Learning Compact High-Dimensional Models in Noisy Environments. Building compact, interpretable statistical models in which the output depends on a small number of input features is a well-known problem in modern analytics applications. A fundamental tool in this context is the best subset selection (BSS) procedure, which seeks the best linear fit to the data subject to a constraint on the number of nonzero features. Whereas BSS works exceptionally well in some regimes, its out-of-sample predictive performance is poor when the underlying data are noisy, which is common in practice. In this paper, we explore this relatively less understood overfitting behavior of BSS in low-signal noisy environments and propose alternatives that mitigate these shortcomings. We study the theoretical statistical properties of our proposed regularized BSS procedure and show promising computational results on various data sets, using tools from integer programming and first-order methods.

We study a seemingly unexpected and relatively less understood overfitting aspect of a fundamental tool in sparse linear modeling, best subset selection, which minimizes the residual sum of squares subject to a constraint on the number of nonzero coefficients. Whereas best subset selection is often perceived as the "gold standard" in sparse learning when the signal-to-noise ratio (SNR) is high, its predictive performance deteriorates when the SNR is low; in particular, it is outperformed by continuous shrinkage methods such as ridge regression and the Lasso. We investigate the behavior of best subset selection in high-noise regimes and propose an alternative approach based on a regularized version of the least-squares criterion. Our proposed estimators (a) mitigate, to a large extent, the poor predictive performance of best subset selection in high-noise regimes; and (b) perform favorably, while generally delivering substantially sparser models, relative to the best predictive models available via ridge regression and the Lasso. We conduct an extensive theoretical analysis of the predictive properties of the proposed approach and provide justification for its superior predictive performance relative to best subset selection when the noise level is high. Our estimators can be expressed as solutions to mixed-integer second-order conic optimization problems and, hence, are amenable to modern computational tools.
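
For concreteness, the two optimization problems contrasted in the abstract can be written as follows. The ridge-type (\ell_2) penalty shown here is one natural regularizer consistent with the second-order conic formulation mentioned above, though the exact penalty used in the paper may differ; take this as a sketch of the general formulation rather than the paper's precise criterion.

Best subset selection (BSS):

    \min_{\beta \in \mathbb{R}^p} \; \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k

Regularized BSS (subset selection with shrinkage):

    \min_{\beta \in \mathbb{R}^p} \; \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k

Here \|\beta\|_0 counts the nonzero entries of \beta, k bounds the model size, and \lambda \ge 0 controls the amount of shrinkage; setting \lambda = 0 recovers plain BSS.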
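
As a concrete illustration, the sketch below solves the regularized problem by brute-force enumeration over supports on a small synthetic example. This is only a toy: exhaustive search scales exponentially in the number of features p, which is precisely why the paper turns to mixed-integer conic optimization and first-order methods at scale. The function name, the toy data, and the choice of lambda are illustrative assumptions, not the authors' code.

    # Brute-force sketch of ridge-regularized best subset selection.
    # Feasible only for small p; the paper uses mixed-integer second-order
    # conic optimization and first-order methods instead. Illustrative only.
    import itertools
    import numpy as np

    def regularized_best_subset(X, y, k, lam):
        """Solve min ||y - X b||^2 + lam * ||b||^2  s.t.  ||b||_0 <= k
        by exhaustive search over supports of size k (small p only)."""
        n, p = X.shape
        best_obj, best_beta = np.inf, np.zeros(p)
        for support in itertools.combinations(range(p), k):
            cols = list(support)
            Xs = X[:, cols]
            # Ridge solution restricted to the candidate support.
            A = Xs.T @ Xs + lam * np.eye(k)
            b = np.linalg.solve(A, Xs.T @ y)
            resid = y - Xs @ b
            obj = resid @ resid + lam * (b @ b)
            if obj < best_obj:
                best_obj = obj
                best_beta = np.zeros(p)
                best_beta[cols] = b
        return best_beta, best_obj

    # Toy low-SNR example: a sparse truth buried in heavy noise.
    rng = np.random.default_rng(0)
    n, p, k_true = 100, 10, 3
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:k_true] = 1.0
    y = X @ beta_true + 3.0 * rng.standard_normal(n)  # high noise => low SNR

    beta_bss, _ = regularized_best_subset(X, y, k_true, lam=0.0)   # plain BSS
    beta_reg, _ = regularized_best_subset(X, y, k_true, lam=10.0)  # with shrinkage
    print("BSS coefficients:        ", np.round(beta_bss, 2))
    print("Regularized coefficients:", np.round(beta_reg, 2))

In the low-SNR setting, the shrunken coefficients are pulled toward zero on the selected support, which is the mechanism the abstract credits for mitigating BSS's overfitting while keeping the model sparse.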
ISSN: 0030-364X, 1526-5463
DOI: 10.1287/opre.2022.2276