The Sequential Scale-Up of an Evidence-Based Intervention: A Case Study

Bibliographic Details
Published in: Grantee Submission, 2018
Main Authors: Thomas, Jaime; Cook, Thomas D.; Klein, Alice; Starkey, Prentice; DeFlorio, Lydia
Format: Report
Language: English
Description
Summary: Policymakers face dilemmas when choosing a policy, program, or practice to implement. Researchers in education, public health, and other fields have proposed a sequential approach to identifying interventions worthy of broader adoption, involving pilot, efficacy, effectiveness, and scale-up studies. In this paper, we examine a scale-up of an early math intervention to the state level, using a cluster randomized controlled trial. The intervention, "Pre-K Mathematics," has produced robust positive effects on children's math ability in prior pilot, efficacy, and effectiveness studies. In the current study, we ask whether it remains effective at a larger scale in a heterogeneous collection of pre-K programs that plausibly represents all low-income families with a child of pre-K age living in California. We find that "Pre-K Mathematics" remains effective at the state level, with positive and statistically significant effects (effect size = 0.30, p < 0.01). In addition, we develop a framework of the dimensions of scale-up to explain why effect sizes might decrease as scale increases. Using this framework, we compare the causal estimates from the present study to those from earlier, smaller studies. Consistent with our framework, we find that effect sizes have decreased over time. We conclude with a discussion of the implications of our study for how we think about the external validity of causal relationships. [This is the online version of an article published in "Evaluation Review."]
DOI: 10.1177/0193841X18786818