Biases in research evaluation: Inflated assessment, oversight, or error-type weighting?
Published in: Journal of Experimental Social Psychology, 2007-07, Vol. 43(4), p. 633–640
Main Authors: , , ,
Format: Article
Language: English
Online Access: Full text
Abstract: Reviewers of research are more lenient when evaluating studies on important topics [Wilson, T. D., DePaulo, B. M., Mook, D. G., & Klaaren, K. J. (1993). Scientists' evaluations of research: The biasing effects of the importance of the topic. Psychological Science, 4(5), 323–325]. Three experiments (N = 145, 36, and 91 psychologists) investigated different explanations of this leniency, including inflation of assessments (applying a heuristic associating importance with quality), oversight (failing to detect flaws), and error-weighting (prioritizing Type II error avoidance). In Experiment 1, psychologists evaluated the publishability and rigor of studies in a 2 (topic importance) × 2 (accuracy motivation) × 2 (research domain) design. Experiment 2 featured an exact replication of Wilson et al. and suggested that report length moderated the effects of importance on perceived rigor, but not on publishability. In Experiment 3, a manipulation of error-weighting replaced the manipulation of domain used in Experiment 1. Results favored error-weighting rather than inflation or oversight: perceived seriousness of Type II error (Experiments 1 and 3) and the error-weighting manipulation (Experiment 3) predicted study evaluations.
ISSN: 0022-1031, 1096-0465
DOI: 10.1016/j.jesp.2006.06.001