p-Hacking and Publication Bias Interact to Distort Meta-Analytic Effect Size Estimates

Bibliographic Details
Published in: Psychological Methods 2020-08, Vol. 25 (4), p. 456-471
Main Authors: Friese, Malte; Frankenbach, Julius
Format: Article
Language: English
Online Access: Full Text
Description
Summary: Science depends on trustworthy evidence. A biased scientific record is therefore of questionable value: it impedes scientific progress, and the public receives advice on the basis of unreliable evidence that can have far-reaching detrimental consequences. Meta-analysis is a technique that can be used to summarize research evidence. However, meta-analytic effect size estimates may themselves be biased, threatening the validity and usefulness of meta-analyses to promote scientific progress. Here, we offer a large-scale simulation study to elucidate how p-hacking and publication bias distort meta-analytic effect size estimates under a broad array of circumstances reflecting the realities of a variety of research areas. The results revealed that, first, very high levels of publication bias can severely distort the cumulative evidence. Second, p-hacking and publication bias interact: at relatively high and low levels of publication bias, p-hacking does comparatively little harm, but at medium levels of publication bias, p-hacking can contribute considerably to bias, especially when the true effects are very small or approach zero. Third, p-hacking can severely increase the rate of false positives. A key implication is that, in addition to preventing p-hacking, policies in research institutions, funding agencies, and scientific journals need to make the prevention of publication bias a top priority to ensure a trustworthy base of evidence.

Translational Abstract: In recent years, the trustworthiness of psychological science has been questioned. A major concern is that many research findings are less robust than the published evidence suggests. Several factors may contribute to this state of affairs. Two prominently discussed ones are that (a) researchers use questionable research practices (so-called p-hacking) when they analyze the data of their empirical studies, and (b) studies that yield results consistent with expectations are more likely to be published than studies that "failed" (publication bias). The present large-scale simulation study estimates the extent to which meta-analytic effect sizes are biased by different degrees of p-hacking and publication bias, considering several influencing factors that may affect this bias (e.g., the true effect of the phenomenon of interest). Results show that both p-hacking and publication bias contribute to a potentially severely biased impression of the overall evidence.
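The abstract describes the simulation only at a high level. As a rough illustration of the mechanism it names, the following minimal Python sketch models p-hacking as optional stopping and publication bias as probabilistic suppression of nonsignificant studies; this is not the authors' actual design, and all parameter values and function names are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def run_study(true_d, n, hack_batches=0):
    # One two-group study with standardized outcomes; p-hacking is modeled
    # as optional stopping (collect more data and retest while p >= .05).
    a = rng.normal(true_d, 1.0, n)  # treatment group
    b = rng.normal(0.0, 1.0, n)     # control group
    _, p = stats.ttest_ind(a, b)
    for _ in range(hack_batches):
        if p < 0.05:
            break
        a = np.concatenate([a, rng.normal(true_d, 1.0, n // 2)])
        b = np.concatenate([b, rng.normal(0.0, 1.0, n // 2)])
        _, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd, p  # Cohen's d, final p value

def meta_estimate(true_d, k=2000, n=50, hack_batches=2, pub_bias=0.8):
    # pub_bias = probability that a nonsignificant study stays unpublished;
    # the meta-analytic estimate here is simply the mean of published d's.
    published = []
    for _ in range(k):
        d, p = run_study(true_d, n, hack_batches)
        if p < 0.05 or rng.random() > pub_bias:
            published.append(d)
    return np.mean(published)

for bias in (0.0, 0.5, 0.95):
    est = meta_estimate(0.1, pub_bias=bias)
    print(f"publication bias {bias:.2f}: estimated d = {est:.2f} (true d = 0.10)")

Under these assumed settings, the estimate drifts upward from the true d = 0.10 as the suppression probability rises, the distortion pattern the abstract reports.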
ISSN: 1082-989X, 1939-1463
DOI: 10.1037/met0000246