Increasing the replicability for linear models via adaptive significance levels

Detailed Description

Bibliographic Details
Published in: Test (Madrid, Spain), 2022-09, Vol. 31 (3), p. 771-789
Main Authors: Vélez, D., Pérez, M. E., Pericchi, L. R.
Format: Article
Language: English
Online Access: Full text
Description
Abstract: We put forward an adaptive α (type I error) that decreases as the information grows, for hypothesis tests comparing nested linear models. A less elaborate adaptation was presented in Pérez and Pericchi (Stat Probab Lett 85:20–24, 2014) for general i.i.d. models. The calibration proposed in this paper may be interpreted as a Bayes–non-Bayes compromise: a simple translation of a Bayes factor into frequentist terms that leads to statistical consistency and, most importantly, is a step toward statistics that promote replicable scientific findings.
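The abstract's central idea, a significance level that shrinks as the sample size grows, can be sketched numerically. The snippet below is a hedged illustration based on the earlier i.i.d. calibration of Pérez and Pericchi (2014), where α scales roughly like 1/√(n(log n + χ²)); it is not the linear-model calibration derived in this paper, and the reference sample size `n0` and the hard-coded χ² quantile are illustrative assumptions.

```python
import math

def adaptive_alpha(n, n0=100, alpha0=0.05, chi2_crit=3.841):
    """Illustrative adaptive significance level (Perez-Pericchi 2014 style).

    n         : current sample size
    n0        : reference sample size at which alpha equals alpha0 (assumed)
    alpha0    : nominal type I error at the reference size
    chi2_crit : upper-alpha0 quantile of chi-square with 1 df
                (3.841 for alpha0 = 0.05)
    """
    # alpha_n = alpha0 * sqrt(n0 * (log n0 + chi2)) / sqrt(n * (log n + chi2))
    num = math.sqrt(n0 * (math.log(n0) + chi2_crit))
    den = math.sqrt(n * (math.log(n) + chi2_crit))
    return alpha0 * num / den

# The threshold for "significance" tightens as information accumulates:
for n in (100, 1000, 10000):
    print(n, round(adaptive_alpha(n), 5))
```

With this form, doubling the sample size lowers the bar for rejecting the null, which is what drives the statistical consistency the abstract refers to: a fixed α = 0.05 rejects a true null 5% of the time no matter how much data accrues, while the adaptive α sends that error rate to zero.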
ISSN: 1133-0686, 1863-8260
DOI: 10.1007/s11749-022-00803-4