Using a natural language processing toolkit to classify electronic health records by psychiatric diagnosis
Saved in:
Published in: | Health informatics journal 2024-10, Vol.30 (4), p.14604582241296411 |
Main Authors: | , , , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
Abstract: | Objective: We analyzed a natural language processing (NLP) toolkit’s ability to classify unstructured EHR data by psychiatric diagnosis. Expertise can be a barrier to using NLP. We employed an NLP toolkit (CLARK) created to support studies led by investigators with a range of informatics knowledge. Methods: The EHRs of 652 patients were manually reviewed to establish Depression and Substance Use Disorder (SUD) labeled datasets, which were split into training and evaluation datasets. We used CLARK to train depression and SUD classification models using the training datasets; model performance was analyzed against the evaluation datasets. Results: The depression model accurately classified 69% of records (sensitivity = 0.68, specificity = 0.70, F1 = 0.68). The SUD model accurately classified 84% of records (sensitivity = 0.56, specificity = 0.92, F1 = 0.57). Conclusion: The depression model performed in a more balanced fashion, while the SUD model’s high specificity was paired with low sensitivity. NLP applications may be especially helpful when combined with a confidence threshold for manual review. |
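The abstract reports sensitivity, specificity, accuracy, and F1 for each model. As a minimal sketch of how these metrics relate, the function below derives them from confusion-matrix counts. The counts used in the example are hypothetical (the paper reports only the derived metrics); they are chosen to roughly reproduce the SUD model's reported sensitivity (0.56) and specificity (0.92), though F1 additionally depends on class prevalence and so need not match the published value.

```python
def classification_metrics(tp, fn, tn, fp):
    """Derive standard binary-classification metrics from confusion counts.

    tp/fn: true positives / false negatives (positive-class records)
    tn/fp: true negatives / false positives (negative-class records)
    """
    sensitivity = tp / (tp + fn)              # recall on positive records
    specificity = tn / (tn + fp)              # recall on negative records
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "sensitivity": round(sensitivity, 2),
        "specificity": round(specificity, 2),
        "accuracy": round(accuracy, 2),
        "f1": round(f1, 2),
    }

# Hypothetical counts, not taken from the paper:
print(classification_metrics(tp=14, fn=11, tn=92, fp=8))
```

A confidence-threshold workflow, as the conclusion suggests, would route records whose classifier score falls below some cutoff to manual review rather than accepting the automatic label.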
ISSN: | 1460-4582 1741-2811 |
DOI: | 10.1177/14604582241296411 |