BAYESIAN VARIABLE SELECTION WITH SHRINKING AND DIFFUSING PRIORS

We consider a Bayesian approach to variable selection in the presence of high dimensional covariates based on a hierarchical model that places prior distributions on the regression coefficients as well as on the model space. We adopt the well-known spike and slab Gaussian priors with a distinct feature, that is, the prior variances depend on the sample size through which appropriate shrinkage can be achieved. We show the strong selection consistency of the proposed method in the sense that the posterior probability of the true model converges to one even when the number of covariates grows nearly exponentially with the sample size. This is arguably the strongest selection consistency result that has been available in the Bayesian variable selection literature; yet the proposed method can be carried out through posterior sampling with a simple Gibbs sampler. Furthermore, we argue that the proposed method is asymptotically similar to model selection with the L₀ penalty. We also demonstrate through empirical work the fine performance of the proposed approach relative to some state of the art alternatives.
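The spike-and-slab formulation described in the abstract can be illustrated with a minimal Gibbs sampler for linear regression. This is a generic sketch, not the authors' exact algorithm: the hyperparameters (`sigma2`, `tau0_2`, `tau1_2`, `q`), the simulated data, and the known-noise-variance simplification are all illustrative assumptions. Note that the spike variance `tau0_2` shrinks with the sample size `n`, echoing the paper's sample-size-dependent prior variances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n observations, p covariates, sparse true signal.
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_normal(n)

# Illustrative hyperparameters (assumptions, not the paper's choices):
sigma2 = 1.0        # noise variance, taken as known for simplicity
tau0_2 = 0.01 / n   # shrinking spike variance (depends on n)
tau1_2 = 4.0        # diffusing slab variance
q = 0.1             # prior inclusion probability

n_iter = 2000
beta = np.zeros(p)
z = np.zeros(p, dtype=int)
incl_count = np.zeros(p)

for it in range(n_iter):
    # Systematic scan: update (z_j, beta_j) given all other coefficients.
    for j in range(p):
        r = y - X @ beta + X[:, j] * beta[j]   # partial residual
        xr = X[:, j] @ r
        xx = X[:, j] @ X[:, j]

        def log_marginal(tau2):
            # log p(r | z_j), beta_j integrated out, up to a shared constant
            prec = xx + 1.0 / tau2
            return -0.5 * np.log(tau2 * prec) + xr**2 / (2.0 * sigma2 * prec)

        l1 = np.log(q) + log_marginal(tau1_2)
        l0 = np.log(1.0 - q) + log_marginal(tau0_2)
        p1 = np.exp(l1 - np.logaddexp(l0, l1))  # P(z_j = 1 | rest)
        z[j] = rng.random() < p1

        tau2 = tau1_2 if z[j] else tau0_2
        prec = xx + 1.0 / tau2
        beta[j] = rng.normal(xr / prec, np.sqrt(sigma2 / prec))

    if it >= n_iter // 2:        # discard first half as burn-in
        incl_count += z

post_incl = incl_count / (n_iter - n_iter // 2)
selected = np.where(post_incl > 0.5)[0]
```

With the small spike variance, the posterior inclusion probabilities of the three true signals approach one while the null covariates are shrunk toward exclusion; thresholding `post_incl` at 0.5 then recovers a sparse model.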


Bibliographic details
Published in: The Annals of Statistics, 2014-04, Vol. 42 (2), p. 789-817
Main authors: Narisetty, Naveen Naidu; He, Xuming
Format: Article
Language: English
Subjects:
Online access: Full text
description We consider a Bayesian approach to variable selection in the presence of high dimensional covariates based on a hierarchical model that places prior distributions on the regression coefficients as well as on the model space. We adopt the well-known spike and slab Gaussian priors with a distinct feature, that is, the prior variances depend on the sample size through which appropriate shrinkage can be achieved. We show the strong selection consistency of the proposed method in the sense that the posterior probability of the true model converges to one even when the number of covariates grows nearly exponentially with the sample size. This is arguably the strongest selection consistency result that has been available in the Bayesian variable selection literature; yet the proposed method can be carried out through posterior sampling with a simple Gibbs sampler. Furthermore, we argue that the proposed method is asymptotically similar to model selection with the L₀ penalty. We also demonstrate through empirical work the fine performance of the proposed approach relative to some state of the art alternatives.
doi 10.1214/14-AOS1207
format Article
publisher Hayward: Institute of Mathematical Statistics
identifier ISSN: 0090-5364
ispartof The Annals of statistics, 2014-04, Vol.42 (2), p.789-817
issn 0090-5364
2168-8966
language eng
recordid cdi_projecteuclid_primary_oai_CULeuclid_euclid_aos_1400592178
source Jstor Complete Legacy; EZB-FREE-00999 freely available EZB journals; Project Euclid Complete; JSTOR Mathematics & Statistics
subjects 62F12
62F15
62J05
Bayes factor
Bayesian analysis
Eigenvalues
Gaussian distributions
hierarchical model
high dimensional data
Linear regression
Mathematical models
Mathematical vectors
Matrices
Modeling
Normal distribution
Oracles
Parametric models
Probabilities
Probability
Sample size
shrinkage
Studies
variable selection
title BAYESIAN VARIABLE SELECTION WITH SHRINKING AND DIFFUSING PRIORS