How vague is vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS

Bibliographic Details
Published in: Statistics in Medicine, 2005-08, Vol. 24 (15), p. 2401-2428
Authors: Lambert, Paul C., Sutton, Alex J., Burton, Paul R., Abrams, Keith R., Jones, David R.
Format: Article
Language: English
Online Access: Full text
Description
Abstract: There has been recent growth in the use of Bayesian methods in medical research. The main reasons for this are the development of computer-intensive, simulation-based methods such as Markov chain Monte Carlo (MCMC), increases in computing power, and the introduction of powerful software such as WinBUGS. This has enabled increasingly complex models to be fitted. The ability to fit these complex models has led to MCMC methods being used as a convenient tool by frequentists, who may have no desire to be fully Bayesian. Often researchers want ‘the data to dominate’ when there is no prior information and thus attempt to use vague prior distributions. However, with small amounts of data the use of vague priors can be problematic: the results are potentially sensitive to the choice of prior distribution. In general there are fewer problems with location parameters; the main problem is with scale parameters, where one has to decide not only the distributional form of the prior distribution but also whether to place it on the variance, standard deviation or precision. We conducted a simulation study comparing the effects of 13 different prior distributions for the scale parameter on simulated random-effects meta-analysis data. We varied the number of studies (5, 10 and 30) and compared three different between-study variances, giving nine simulation scenarios. One thousand data sets were generated for each scenario, and each data set was analysed using the 13 different prior distributions. The frequentist properties of bias and coverage were investigated for the between-study variance and the effect size. The choice of prior distribution was crucial when there were just five studies: there was large variation in the estimates of the between-study variance across the 13 prior distributions. With a larger number of studies the choice of prior distribution was less important. The estimated effect size was not biased, but the precision with which it was estimated varied with the choice of prior distribution, leading to varying coverage intervals and, potentially, to different statistical inferences. Again, this was less of a problem with a larger number of studies. There is a particular problem if the between-study variance is close to the boundary at zero, as MCMC results tend to produce upwardly biased estimates of the between-study variance, particularly if inferences are based on the posterior mean. The …
ISSN: 0277-6715, 1097-0258
DOI: 10.1002/sim.2112
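
To make the modelling issue in the abstract concrete, the sketch below is a minimal WinBUGS-style random-effects meta-analysis model, assuming observed study effects y[i] with known within-study variances v[i]. It uses one commonly cited "vague" choice, a Gamma(0.001, 0.001) prior on the between-study precision; the paper compares 13 such priors (including priors placed directly on the variance or standard deviation), so this block illustrates only one possibility. All variable names (y, v, Nstud, mu, tau2) are illustrative and not taken from the paper's code.

    model {
      for (i in 1:Nstud) {
        # Observed effect in study i, with known within-study variance v[i]
        prec.y[i] <- 1 / v[i]
        y[i] ~ dnorm(theta[i], prec.y[i])
        # Study-specific true effect drawn from the random-effects distribution
        theta[i] ~ dnorm(mu, prec.tau)
      }
      # Vague prior on the pooled effect; location parameters are less problematic
      mu ~ dnorm(0, 1.0E-6)
      # One candidate vague prior on the scale parameter: a Gamma prior on the precision.
      # Alternatives of the kind the abstract mentions put the prior on the between-study
      # variance or standard deviation instead, e.g. tau ~ dunif(0, 100).
      prec.tau ~ dgamma(0.001, 0.001)
      tau2 <- 1 / prec.tau      # between-study variance
      tau <- sqrt(tau2)         # between-study standard deviation
    }

As the abstract notes, with only around five studies the posterior summaries of tau2 (especially the posterior mean) can differ substantially across such prior choices, so checking the sensitivity of results to the prior on the scale parameter is advisable.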