Robust Variable Selection and Regularization in Quantile Regression Based on Adaptive-LASSO and Adaptive E-NET


Bibliographic Details
Published in: Computation, 2022-11, Vol. 10 (11), p. 203
Main authors: Mudhombo, Innocent; Ranganai, Edmore
Format: Article
Language: English
Online access: Full text
Description
Summary: Although variable selection and regularization procedures have been considered extensively in the literature for the quantile regression (QR) scenario via penalization, many such procedures fail to deal simultaneously with data aberrations in the design space, namely, high leverage points (X-space outliers) and collinearity challenges. Some high leverage points, referred to as collinearity-influential observations, tend to adversely alter the eigenstructure of the design matrix by inducing or masking collinearity. Therefore, the literature recommends that the problems of collinearity and high leverage points be dealt with simultaneously. In this article, we suggest adaptive LASSO and adaptive E-NET penalized QR (QR-ALASSO and QR-AE-NET) procedures, where the weights are based on a QR estimator, as remedies. We extend this methodology to penalized weighted QR versions of the WQR-LASSO and WQR-E-NET procedures we suggested earlier. In the literature, adaptive weights are based on the RIDGE regression (RR) parameter estimator. Although the use of this estimator may be plausible for the ℓ1 estimator (QR at τ = 0.5) under a symmetrical distribution, it may not be so at extreme quantile levels. Therefore, we use a QR-based estimator to derive the adaptive weights. We carried out a comparative study of QR-LASSO, QR-E-NET, and the procedures suggested here, viz., QR-ALASSO, QR-AE-NET, and their weighted penalized counterparts (WQR-ALASSO and WQR-AE-NET). The simulation study results show that QR-ALASSO, QR-AE-NET, WQR-ALASSO, and WQR-AE-NET generally outperform their non-adaptive counterparts. For predictor matrices with collinearity-inducing points under normality, QR-ALASSO and QR-AE-NET outperform their respective non-adaptive procedures in the unweighted scenarios as follows: in all 16 cases (100%) with respect to correctly selected (shrunk) zero coefficients; in 88% of cases with respect to correctly fitted models; and in 81% of cases with respect to prediction. In the weighted penalized WQR scenarios, WQR-ALASSO and WQR-AE-NET outperform their non-adaptive versions as follows: 75% of the time with respect to both correctly fitted models and correctly shrunk zero coefficients, and 63% of the time with respect to prediction. For predictor matrices with collinearity-masking points under normality, QR-ALASSO and QR-AE-NET outperform their respective non-adaptive procedures in the unweighted scenarios as follows: in prediction, in …
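The adaptive-weight idea described in the summary can be sketched concretely. Below is a minimal, hypothetical Python illustration (not the authors' code): an initial unpenalized QR fit at level τ supplies adaptive weights w_j = 1/|β̂_j|^γ, and the adaptive-LASSO penalized QR problem is then solved as a convex program with cvxpy. The function name, synthetic data, and the choices of λ, γ, and ε are illustrative assumptions.

    # Sketch of a QR-ALASSO-style fit with QR-based adaptive weights (illustrative only).
    import numpy as np
    import cvxpy as cp

    def qr_alasso(X, y, tau=0.5, lam=1.0, gamma=1.0, eps=1e-6):
        n, p = X.shape
        # Step 1: initial unpenalized QR estimate at level tau, used only to build weights.
        b0, beta0 = cp.Variable(), cp.Variable(p)
        r0 = y - b0 - X @ beta0
        check0 = cp.sum(cp.maximum(tau * r0, (tau - 1) * r0))   # quantile check loss
        cp.Problem(cp.Minimize(check0)).solve()
        # Adaptive weights: predictors with small initial coefficients are penalized more.
        w = 1.0 / (np.abs(beta0.value) + eps) ** gamma
        # Step 2: adaptive-LASSO penalized QR with those weights.
        b, beta = cp.Variable(), cp.Variable(p)
        r = y - b - X @ beta
        check = cp.sum(cp.maximum(tau * r, (tau - 1) * r))
        penalty = lam * cp.sum(cp.multiply(w, cp.abs(beta)))
        cp.Problem(cp.Minimize(check + penalty)).solve()
        return b.value, beta.value

    # Illustrative usage on synthetic data with an induced collinear column.
    rng = np.random.default_rng(0)
    n, p = 100, 8
    X = rng.normal(size=(n, p))
    X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)        # collinearity-inducing structure
    beta_true = np.array([2.0, 0.0, 1.5, 0.0, 0.0, 0.0, 0.0, 0.0])
    y = X @ beta_true + rng.standard_t(df=3, size=n)      # heavy-tailed errors
    intercept, coef = qr_alasso(X, y, tau=0.75, lam=5.0)
    print(np.round(coef, 3))                              # noise coefficients should shrink toward 0

An adaptive E-NET variant of the same sketch would simply add a ridge-type term, e.g. lam2 * cp.sum_squares(beta), to the penalized objective in Step 2.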
ISSN: 2079-3197
DOI: 10.3390/computation10110203