Utilizing deep neural networks and electroencephalogram for objective evaluation of surgeon’s distraction during robot-assisted surgery
Saved in:
Published in: Brain research 2021-10, Vol. 1769, p. 147607, Article 147607
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary:
• In this study, a deep convolutional neural network (CNN) was trained on EEG recordings to objectively classify the distraction level of surgeons during robot-assisted surgery (RAS).
• The accuracy of the model was determined by comparing the subjective distraction scores on the SURG-TLX with the results of the proposed classification algorithm.
• The model's accuracy was 94%, 89%, and 95% for discriminating low, intermediate, and high distraction levels, respectively.
• The results of this pilot study, if validated in a clinical setting, will be applicable to the objective assessment of distraction during RAS, which can be crucial for enhancing patient safety.
To develop an algorithm for the objective evaluation of surgeons' distraction during robot-assisted surgery (RAS).
Electroencephalogram (EEG) recordings of 22 medical students were acquired while they performed five key tasks on a robotic surgical simulator: Instrument Control, Ball Placement, Spatial Control II, Fourth Arm Tissue Retraction, and Hands-on Surgical Training Tasks. All students completed the Surgery Task Load Index (SURG-TLX), which includes one domain for the subjective assessment of distraction (scale: 1–20). Scores were divided into low (scores 1–6, subjective label 1), intermediate (scores 7–12, subjective label 2), and high distraction (scores 13–20, subjective label 3). These cut-off values were chosen based on verbal assessments by participants and experienced surgeons. A deep convolutional neural network (CNN) was trained on the EEG recordings and used to classify the students' distraction levels, as sketched below. The accuracy of the method was determined by comparing the subjective distraction scores on the SURG-TLX with the output of the proposed classification algorithm. Pearson correlation was used to assess the relationship between performance scores (generated by the simulator) and distraction (subjective assessment scores).
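The labeling scheme and the general type of end-to-end classifier described above can be illustrated with a minimal Python/PyTorch sketch. This is an assumption-laden illustration, not the authors' implementation: the channel count, window length, and all layer sizes are invented, since the abstract does not specify the preprocessing pipeline or network architecture.

```python
# Minimal sketch, assuming a generic 1-D CNN over multichannel EEG windows.
# The channel count and all layer sizes are assumptions; the paper's actual
# preprocessing and architecture are not given in the abstract.
import torch
import torch.nn as nn

def distraction_label(surg_tlx_score: int) -> int:
    """Map a SURG-TLX distraction score (1-20) to the study's three labels."""
    if surg_tlx_score <= 6:
        return 1  # low distraction (scores 1-6)
    if surg_tlx_score <= 12:
        return 2  # intermediate distraction (scores 7-12)
    return 3      # high distraction (scores 13-20)

class EEGDistractionCNN(nn.Module):
    """Illustrative end-to-end CNN: raw EEG windows in, 3 class logits out."""
    def __init__(self, n_channels: int = 32, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples), e.g. one EEG window per task segment
        return self.classifier(self.features(x).squeeze(-1))
```

In practice the 1/2/3 labels above would be shifted to 0-based class indices for a cross-entropy loss; they are kept here only to mirror the subjective labels in the text.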
The proposed end-to-end model classified distraction into low, intermediate, and high with 94%, 89%, and 95% accuracy, respectively. We found a significant negative correlation (r = -0.21; p = 0.003) between performance and SURG-TLX distraction scores.
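The per-level accuracies and the reported correlation can likewise be expressed as a short sketch. Here, `performance_scores` and `distraction_scores` are hypothetical array names standing in for the simulator-generated performance values and the SURG-TLX distraction scores.

```python
# Evaluation sketch: per-class accuracy against the subjective labels, and
# the Pearson correlation between performance and distraction scores.
import numpy as np
from scipy.stats import pearsonr

def per_class_accuracy(y_true: np.ndarray, y_pred: np.ndarray, labels=(1, 2, 3)):
    """Fraction of samples of each true distraction level classified correctly."""
    return {c: float(np.mean(y_pred[y_true == c] == c)) for c in labels}

# Hypothetical usage: a negative r, as reported in the study (r = -0.21,
# p = 0.003), indicates that higher distraction accompanies lower performance.
# r, p = pearsonr(performance_scores, distraction_scores)
```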
Herein we report, to our knowledge, the first objective method to assess and quantify distraction while performing robotic surgical tasks on a robotic simulator, which may improve patient safety. Validation in the clinical setting is required.
ISSN: 0006-8993, 1872-6240
DOI: 10.1016/j.brainres.2021.147607