The Effect Direction Should Be Taken Into Account When Assessing Small-Study Effects


Bibliographic Details
Published in: The journal of evidence-based dental practice 2023-03, Vol.23 (1), p.101830-101830, Article 101830
Authors: Meng, Zhuo; Wu, Chong; Lin, Lifeng
Format: Article
Language: English
Online access: Full text
Description
Abstract: Studies with statistically significant results are frequently more likely to be published than those with non-significant results. This phenomenon leads to publication bias or small-study effects and can seriously affect the validity of the conclusions of systematic reviews and meta-analyses. Small-study effects typically appear in a specific direction, depending on whether the outcome of interest is beneficial or harmful, but this direction is rarely taken into account in conventional methods. We propose using directional tests to assess potential small-study effects. The tests are built on a one-sided testing framework based on the existing Egger's regression test. We performed simulation studies to compare the proposed one-sided regression tests, the conventional two-sided regression tests, and two other competing methods (Begg's rank test and the trim-and-fill method). Their performance was measured by type I error rates and statistical power. Three real-world meta-analyses on measurements of infrabony periodontal defects were also used to examine the various methods' performance. In the simulation studies, the one-sided tests could have considerably higher statistical power than the competing methods, particularly their two-sided counterparts, while their type I error rates were generally well controlled. In the case study of the three real-world meta-analyses, by accounting for the favored direction of effects, the one-sided tests could rule out potential false-positive conclusions about small-study effects. They are also more powerful than the conventional two-sided tests when true small-study effects likely exist. We recommend that researchers incorporate the potential favored direction of effects into the assessment of small-study effects.
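To make the idea concrete, a one-sided variant of Egger's regression test can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `egger_test` and the `favored` parameter are hypothetical, and the paper's exact one-sided construction may differ. The sketch uses the standard Egger formulation (regress the standardized effect on precision and test the intercept) and simply replaces the two-sided p-value on the intercept with a one-sided one in the favored direction.

```python
import numpy as np
from scipy import stats

def egger_test(y, se, favored="positive"):
    """Egger-style regression test for small-study effects.

    Regresses the standardized effect (y / se) on precision (1 / se);
    the intercept estimates funnel-plot asymmetry. Returns the
    intercept's t statistic, its two-sided p-value, and a one-sided
    p-value for asymmetry in the favored direction.
    """
    y = np.asarray(y, dtype=float)
    se = np.asarray(se, dtype=float)
    z = y / se            # standardized effect sizes
    prec = 1.0 / se       # precisions

    # OLS fit of z on [1, precision]; beta[0] is the intercept
    X = np.column_stack([np.ones_like(prec), prec])
    beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)

    n = len(y)
    resid = z - X @ beta
    s2 = resid @ resid / (n - 2)               # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)          # coefficient covariance
    t_stat = beta[0] / np.sqrt(cov[0, 0])

    p_two = 2.0 * stats.t.sf(abs(t_stat), df=n - 2)
    # One-sided test: only asymmetry in the favored direction counts
    if favored == "positive":
        p_one = stats.t.sf(t_stat, df=n - 2)
    else:
        p_one = stats.t.cdf(t_stat, df=n - 2)
    return t_stat, p_two, p_one
```

When the observed asymmetry lies in the favored direction, the one-sided p-value is half the two-sided one, which is the source of the power gain the abstract describes; when it lies in the opposite direction, the one-sided test does not flag small-study effects, which is how false positives can be ruled out.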
ISSN: 1532-3382, 1532-3390
DOI: 10.1016/j.jebdp.2022.101830