Why reports of outcome evaluations are often biased or uninterpretable: Examples from evaluations of drug abuse prevention programs
Published in: Evaluation and Program Planning, 1993, Vol. 16(1), pp. 1-9
Format: Article
Language: English
Online access: Full text
Abstract: This paper examines why the conclusions of many outcome evaluations do not stand up to scrutiny, drawing upon examples from evaluations of drug abuse prevention programs. It is argued that the factors that undermine the integrity of these studies are not simply due to limited means or resources for conducting such research, but are in large part due to social-structural problems that influence the design and implementation of the research. These include the institutional pressures involved in conducting "soft money" research, academic pressure to publish or perish, and conflicts of interest. Some potential solutions are proposed that may reduce the institutional pressures and constraints that undermine evaluation studies.
ISSN: 0149-7189
DOI: 10.1016/0149-7189(93)90032-4