Do PCL-R Scores from State or Defense Experts Best Predict Future Misconduct Among Civilly Committed Sex Offenders?
Published in: Law and Human Behavior, June 2012, Vol. 36(3), pp. 159-169
Main Authors: , , ,
Format: Article
Language: English
Online Access: Full text
Abstract: In a recent study of sex offender civil commitment proceedings, Murrie et al. (Psychol Public Policy Law 15:19-53, 2009) found that state-retained experts consistently assigned higher PCL-R total scores than defense-retained experts for the same offenders (Cohen's d > .83). This finding raises an important question about the validity of these discrepant scores: which type of score, from the state or the defense evaluator, provides the more useful information about risk? We examined the ability of PCL-R total scores from state and defense evaluators to predict future misconduct among civilly committed sex offenders (N = 38). For comparison, we also examined predictive validity when two state experts evaluated the same offender (N = 32). Agreement between evaluators was low both for cases with opposing experts (ICC(A,1) = .43 to .52) and for cases with two state experts (ICC(A,1) = .40). Nevertheless, scores from state and defense experts demonstrated similar levels of predictive validity (AUC values in the .70 range), although scores from different types of state evaluators (corrections-contracted vs. prosecution-retained) did not. The finding of mean differences between opposing evaluators' scores, but similar levels of predictive validity, suggests that scores from opposing experts in sexually violent predator (SVP) cases may need to be interpreted differently depending on who assigned them. These findings have important implications for understanding how rater disagreement relates to predictive validity.
ISSN: 0147-7307 (print), 1573-661X (electronic)
DOI: 10.1037/h0093949
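The abstract hinges on two statistics: ICC(A,1), which measures absolute agreement between raters, and AUC, which measures how well scores rank offenders who later commit misconduct above those who do not. The sketch below is an illustration only, not the study's code or data: the scores and outcomes are made up, and the two-column layout assumes one score per evaluator type per offender. ICC(A,1) follows the McGraw and Wong (1996) two-way random-effects, absolute-agreement, single-rater formula; AUC uses scikit-learn.

```python
# Minimal sketch of the two statistics reported in the abstract.
# All data below are hypothetical, for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

def icc_a1(scores: np.ndarray) -> float:
    """ICC(A,1): two-way random effects, absolute agreement, single rater
    (McGraw & Wong, 1996). `scores` is an (n_subjects, k_raters) array."""
    n, k = scores.shape
    grand = scores.mean()
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical PCL-R totals from opposing evaluators for six offenders,
# plus a binary future-misconduct outcome.
state = np.array([30.0, 25.0, 33.0, 28.0, 22.0, 31.0])
defense = np.array([24.0, 20.0, 30.0, 21.0, 18.0, 27.0])
misconduct = np.array([1, 0, 1, 0, 0, 1])

print(icc_a1(np.column_stack([state, defense])))  # absolute agreement
print(roc_auc_score(misconduct, state))    # predictive validity, state scores
print(roc_auc_score(misconduct, defense))  # predictive validity, defense scores
```

Note that ICC(A,1) penalizes systematic mean differences between raters, while AUC depends only on rank order. A roughly constant gap between state and defense scores can therefore depress agreement without affecting either rater's predictive validity, which is consistent with the pattern the abstract describes.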