Evaluating Meta-Analytic Methods to Detect Selective Reporting in the Presence of Dependent Effect Sizes
Published in: Psychological Methods, 2021-04, Vol. 26 (2), p. 141-160
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such techniques are univariate, in that they assume that each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as for multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting, including the trim and fill test, Egger's regression test, and a likelihood ratio test from a three-parameter selection model (3PSM), when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger's regression test that incorporate robust variance estimation (RVE) or multilevel meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. Variants of Egger's regression maintain Type I error rates when dependent effect sizes are sampled or handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. With the exception of the 3PSM, all methods have limited power to detect selection bias except under strong selection for statistically significant effects.
Translational Abstract
When researchers or journals prefer to publish mostly or solely primary studies with statistically significant results, this creates selective reporting biases that can distort the findings of meta-analyses. A variety of statistical methods for detecting this problem are available and routinely used in meta-analysis, but existing methods do not account for the fact that primary studies often contribute multiple, statistically dependent effect size estimates to a meta-analytic dataset. Thus, there is a need to further investigate tests for detecting selective reporting while also accounting for statistically dependent effect size estimates. We evaluated the performance of several tests for detecting selective reporting in …
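To make the methods described in the abstract above more concrete, below is a minimal sketch of an Egger-type regression test for small-study effects that handles dependent effect sizes via cluster-robust (sandwich) variance estimation, clustering on study. It is an illustration only, not the authors' implementation: the data are simulated, the variable names (`yi`, `sei`, `study_id`) are made up, and Python with statsmodels stands in for the R tooling more commonly used in meta-analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulate 20 studies, each contributing 1-4 statistically dependent estimates
# (dependence induced here by a shared study-level true effect).
rows = []
for study in range(20):
    n_effects = rng.integers(1, 5)
    n = rng.integers(20, 200)
    se = np.sqrt(2.0 / n)                    # rough SE of a standardized mean difference
    study_effect = rng.normal(0.2, 0.1)      # true effect shared within the study
    for _ in range(n_effects):
        rows.append((study, rng.normal(study_effect, se), se))

study_id, yi, sei = (np.array(col) for col in zip(*rows))

# Egger-type regression: precision-weighted regression of the estimates on
# their standard errors; a nonzero slope on `sei` signals small-study effects.
X = sm.add_constant(sei)
model = sm.WLS(yi, X, weights=1.0 / sei**2)

# Cluster-robust variance estimation (clustering on study) is what keeps the
# test's Type I error in check when studies contribute multiple estimates.
fit = model.fit(cov_type="cluster", cov_kwds={"groups": study_id})
print(fit.summary().tables[1])
print("Small-study-effects test p-value:", fit.pvalues[1])
```

The slope on `sei` plays the role of the funnel-plot asymmetry parameter; the cluster-robust covariance prevents multiple estimates from the same study from being treated as independent. Note that the published simulations use small-sample-corrected RVE, which this sketch does not reproduce.

The three-parameter selection model mentioned in the abstract can be sketched in the same spirit. Again this is a simplified illustration under assumed names rather than the paper's implementation: it estimates the mean effect, the between-study variance, and a single relative reporting probability for nonsignificant estimates, then compares the maximized log-likelihood against a random-effects model with that probability fixed at 1.

```python
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, yi, sei, alpha=0.025):
    """Negative log-likelihood of the selection model; params = (mu, log_tau2[, lam])."""
    mu, log_tau2 = params[0], params[1]
    lam = params[2] if len(params) > 2 else 1.0
    if lam <= 0:
        return np.inf
    sd = np.sqrt(np.exp(log_tau2) + sei**2)
    crit = stats.norm.ppf(1 - alpha) * sei     # one-sided significance cutoff on the estimate scale
    w = np.where(yi > crit, 1.0, lam)          # relative reporting probability of each estimate
    p_sig = stats.norm.sf((crit - mu) / sd)    # model-implied probability of a significant estimate
    denom = p_sig + lam * (1.0 - p_sig)        # expected weight (normalizing constant)
    return -np.sum(np.log(w) + stats.norm.logpdf(yi, mu, sd) - np.log(denom))

def lrt_3psm(yi, sei):
    """Likelihood ratio test comparing the 3PSM to a random-effects model (lam fixed at 1)."""
    null = optimize.minimize(neg_loglik, x0=[0.1, np.log(0.05)],
                             args=(yi, sei), method="Nelder-Mead")
    full = optimize.minimize(neg_loglik, x0=[*null.x, 1.0],
                             args=(yi, sei), method="Nelder-Mead")
    stat = max(2.0 * (null.fun - full.fun), 0.0)
    return stat, stats.chi2.sf(stat, df=1)

# Illustrative data with one estimate per study (the univariate case the test assumes).
rng = np.random.default_rng(1)
sei = np.sqrt(2.0 / rng.integers(20, 200, size=40))
yi = rng.normal(0.2, np.sqrt(0.05 + sei**2))
stat, pval = lrt_3psm(yi, sei)
print(f"3PSM likelihood ratio statistic = {stat:.2f}, p = {pval:.3f}")
```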
ISSN: 1082-989X, 1939-1463
DOI: 10.1037/met0000300