Nonexperimental Designs and Program Evaluation


Bibliographic Details
Published in: Children and Youth Services Review, 1997-11, Vol. 19 (7), p. 541-566
Main authors: Kisker, Ellen Eliason; Brown, Randall S.
Format: Article
Language: English
Online access: Full text
Description

Abstract: Sometimes implementing evaluation designs that involve random assignment or strong nonrandomized comparison group designs is not feasible. When this is the case, several other types of comparisons may offer credible, though not conclusive, evidence on program effects. This article describes three approaches—comparisons of the outcomes of the treatment group with a national sample, comparisons of the outcomes of program participants and nonparticipants, and dose-response analyses—and illustrates their use in the evaluation of the School-Based Adolescent Health Care Program. The findings suggest that if outcomes are measured before and after the intervention, comparisons of the treatment group outcomes to outcomes for a national sample may provide valid estimates of program effects. The other two types of comparisons produced implausible and unstable estimates of program effects. Because of the selection bias inherent in these two methods, researchers cannot count on being able to produce plausible estimates of program effects with such comparisons.
ISSN:0190-7409
1873-7765
DOI:10.1016/S0190-7409(97)00045-5