Validity of content‐based techniques for credibility assessment—How telling is an extended meta‐analysis taking research bias into account?
Saved in:
Published in: | Applied cognitive psychology 2021-03, Vol.35 (2), p.393-410 |
Main authors: | , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Abstract: | Summary
Content‐based techniques for credibility assessment (Criteria‐Based Content Analysis [CBCA], Reality Monitoring [RM]) have been shown to distinguish between experience‐based and fabricated statements in previous meta‐analyses. New simulations raised the question of whether these results are reliable, revealing that applying meta‐analytic methods to biased datasets leads to false‐positive rates of up to 100%. By assessing the performance of different bias‐correcting meta‐analytic methods and applying them to a set of 71 studies, we aimed for more precise effect size estimates. According to the sole bias‐correcting meta‐analytic method that performed well under a priori specified boundary conditions, CBCA and RM distinguished between experience‐based and fabricated statements. However, substantial heterogeneity limited precise point estimation (i.e., moderate to large effects). In contrast, Scientific Content Analysis (SCAN)—another content‐based technique tested—failed to discriminate between truth and lies. We discuss how the gap between research on and forensic application of content‐based credibility assessment may be narrowed. |
ISSN: | 0888-4080 1099-0720 |
DOI: | 10.1002/acp.3776 |