Can Large Language Model Predict Employee Attrition?
Saved in:

| Field | Value |
|---|---|
| Main authors | , , , |
| Format | Article |
| Language | eng |
| Subjects | |
| Online access | Order full text |
Abstract: Employee attrition poses significant costs for organizations, with
traditional statistical prediction methods often struggling to capture modern
workforce complexities. Machine learning (ML) advancements offer more scalable
and accurate solutions, but large language models (LLMs) introduce new
potential in human resource management by interpreting nuanced employee
communication and detecting subtle turnover cues. This study leverages the IBM
HR Analytics Attrition dataset to compare the predictive accuracy and
interpretability of a fine-tuned GPT-3.5 model against traditional ML
classifiers, including Logistic Regression, k-Nearest Neighbors (KNN), Support
Vector Machine (SVM), Decision Tree, Random Forest, AdaBoost, and XGBoost.
While traditional models are easier to use and interpret, LLMs can reveal
deeper patterns in employee behavior. Our findings show that the fine-tuned
GPT-3.5 model outperforms traditional methods with a precision of 0.91, recall
of 0.94, and an F1-score of 0.92, while the best traditional model, SVM,
achieved an F1-score of 0.82, with Random Forest and XGBoost reaching 0.80.
These results highlight GPT-3.5's ability to capture complex patterns in
attrition risk, offering organizations improved insights for retention
strategies and underscoring the value of LLMs in HR applications.
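The headline numbers in the abstract (precision 0.91, recall 0.94, F1 0.92 for GPT-3.5) follow the standard binary-classification definitions. A minimal plain-Python sketch of those metrics, using illustrative toy labels rather than the IBM HR Analytics data:

```python
# Minimal sketch of precision / recall / F1 for binary attrition labels.
# Assumption: 1 = employee left, 0 = employee stayed (not specified here).
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example (illustrative labels only):
p, r, f = precision_recall_f1([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1])
```

In practice these values would come from a library such as scikit-learn; the hand-rolled version above just makes the comparison criterion explicit.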
DOI: 10.48550/arxiv.2411.01353