Bayesian Regularized Regression Based on Composite Quantile Method
Published in: Acta Mathematicae Applicatae Sinica, 2016-06, Vol. 32 (2), pp. 495-512
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Recently, variable selection based on penalized regression methods has received a great deal of attention, mostly through frequentist models. This paper investigates regularized regression from a Bayesian perspective. Our new method extends Bayesian Lasso regression (Park and Casella, 2008) by replacing the least-squares loss and the Lasso penalty with the composite quantile loss function and the adaptive Lasso penalty, which allows different penalization parameters for different regression coefficients. Within the Bayesian hierarchical model framework, an efficient Gibbs sampler is derived to simulate the parameters from their posterior distributions. Furthermore, we study Bayesian composite quantile regression with an adaptive group Lasso penalty. The distinguishing characteristic of the newly proposed methods is that they are completely data-adaptive, requiring no prior knowledge of the error distribution. Extensive simulations and two real data examples are used to examine the performance of the proposed methods. All results confirm that our novel methods are both robust and highly efficient and often outperform other approaches.
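As a rough illustration of the objective underlying the abstract, the sketch below evaluates a composite quantile loss with coefficient-specific adaptive Lasso weights in NumPy. It is only the frequentist counterpart of the penalized model the paper handles via a Bayesian hierarchical formulation and Gibbs sampling; all function and variable names (`check_loss`, `cqr_adaptive_lasso_objective`, `lam`, etc.) are hypothetical and are not taken from the authors' implementation.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def cqr_adaptive_lasso_objective(beta, intercepts, X, y, taus, lam):
    """
    Composite quantile loss plus an adaptive Lasso penalty:
    quantile-specific intercepts, a shared slope vector beta, and a
    separate penalty weight lam[j] for each coefficient beta[j].
    """
    loss = 0.0
    for b_k, tau in zip(intercepts, taus):
        residuals = y - b_k - X @ beta
        loss += check_loss(residuals, tau).sum()
    penalty = np.sum(lam * np.abs(beta))
    return loss + penalty

# Toy usage on synthetic, heavy-tailed data (hypothetical example only).
rng = np.random.default_rng(0)
n, p, K = 100, 5, 9
X = rng.normal(size=(n, p))
true_beta = np.array([1.5, 0.0, -2.0, 0.0, 0.5])
y = X @ true_beta + rng.standard_t(df=3, size=n)   # heavy-tailed errors
taus = np.arange(1, K + 1) / (K + 1)               # tau_k = k / (K + 1)
beta0 = np.zeros(p)
intercepts0 = np.quantile(y, taus)                 # crude starting intercepts
lam = np.ones(p)                                   # placeholder adaptive weights
print(cqr_adaptive_lasso_objective(beta0, intercepts0, X, y, taus, lam))
```

In an adaptive Lasso setting the weights `lam` would typically be taken inversely proportional to the magnitudes of a pilot estimate of the coefficients, so that large coefficients are penalized less; the constant weights above are placeholders only.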
ISSN: 0168-9673, 1618-3932
DOI: 10.1007/s10255-016-0579-4