Supervised Construct Scoring to Reduce Personality Assessment Length: A Field Study and Introduction to the Short 10


Bibliographic Details
Published in: Organizational Research Methods, 2024-04, Vol. 27(2), pp. 223-264
Authors: Speer, Andrew B., Perrotta, James, Jacobs, Rick R.
Format: Article
Language: English
Online access: Full text
Abstract:
Personality assessments help identify qualified job applicants in hiring decisions and are used broadly in the organizational sciences. However, many existing personality measures are quite lengthy, and companies and researchers frequently seek ways to shorten personality scales. The current research investigated the effectiveness of a new scale-shortening method called supervised construct scoring (SCS), testing its efficacy across two applied samples. Combining machine learning with content validity considerations, we show that multidimensional personality scales can be significantly shortened while maintaining reliability and validity, especially when compared to traditional shortening methods. In Study 1, we shortened a 100-item personality assessment of DeYoung et al.'s 10 facets, producing a scale 26% of the original length. SCS scores exhibited strong evidence of reliability, convergence with full-scale scores, and criterion-related validity. This measure, labeled the Short 10, is made freely available. In Study 2, we applied SCS to shorten an operational police personality assessment, reducing test length to 25% of the original while maintaining similar levels of reliability and criterion-related validity when predicting job performance ratings.
ISSN: 1094-4281, 1552-7425
DOI: 10.1177/10944281221145694