Bridging the Gap: Can Large Language Models Match Human Expertise in Writing Neurosurgical Operative Notes?
Saved in:
Published in: World Neurosurgery, 2024-12, Vol. 192, p. e34-e41
Main authors: , , , , , , , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Proper documentation is essential for patient care. The popularity of artificial intelligence (AI) offers the potential for improvements in neurosurgical note-writing. This study aimed to assess how AI can optimize documentation in neurosurgical procedures.
Thirty-six operative notes were included. All identifiable data were removed. Essential information, such as perioperative data and diagnosis, was sourced from these notes. ChatGPT 4.0 was trained to draft notes from surgical vignettes using each surgeon's note template. One hundred forty-four surveys, each containing either a surgeon note or an AI note, were shared with 3 surgeons to evaluate accuracy, content, and organization on a 5-point scale. Accuracy was defined as factual correctness; content, as comprehensiveness; and organization, as the arrangement of the note. Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) scores quantified each note's readability (the standard formulas are sketched after the abstract).
The mean AI accuracy did not differ from the mean surgeon accuracy (4.44 vs. 4.33; P = 0.512); the mean AI content was lower than the mean surgeon content (3.73 vs. 4.42; P …
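For reference, the readability metrics named in the abstract are the standard Flesch measures; the formulas below are the commonly published coefficients (they are not taken from this article, whose methods section is not reproduced here):

\[
\text{FRE} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}}
\]
\[
\text{FKGL} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59
\]

A higher FRE score indicates text that is easier to read, while a higher FKGL corresponds to a higher U.S. school grade level.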
ISSN: 1878-8750, 1878-8769
DOI: 10.1016/j.wneu.2024.08.062