Psychometric Properties of Automated Video Interview Competency Assessments
Published in: Journal of applied psychology 2024-06, Vol. 109 (6), p. 921-948
Main authors: , , , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Summary: Interviews are among the most widely used selection methods, but their reliability and validity can vary substantially. Moreover, using human evaluators to rate interviews is expensive and time-consuming. Interview scoring models have been proposed as a mechanism for scoring video-based interviews reliably, accurately, and efficiently. Yet there is a lack of clarity and consensus around their psychometric characteristics, driven primarily by a dearth of published empirical research. The goal of this study was to examine the psychometric properties of automated video interview competency assessments (AVI-CAs), which were designed to be highly generalizable (i.e., to apply across job roles and organizations). The AVI-CAs developed demonstrated high levels of convergent validity (average r = .66), moderate discriminant relationships (average r = .58), good test-retest reliability (average r = .72), and minimal subgroup differences (Cohen's ds ≥ −.14). Further, criterion-related validity (uncorrected sample-weighted r¯ = .24) was demonstrated by applying these AVI-CAs to five organizational samples. Strengths, weaknesses, and future directions for building interview scoring models are also discussed.
ISSN: 0021-9010, 1939-1854
DOI: 10.1037/apl0001173