Towards near real-time assessment of surgical skills: A comparison of feature extraction techniques
Published in: Computer Methods and Programs in Biomedicine, 2020-04, Vol. 187, Article 105234
Format: Article
Language: English
Online access: Full text
Abstract:
• Feature extraction techniques play an important role in an automated surgical skill assessment system.
• We proposed a framework to rigorously compare the performance of different feature extraction techniques for automated surgical skill assessment over short time intervals.
• A comparative analysis was carried out on nine well-known feature extraction techniques.
• The CNN deep learning technique outperforms all other techniques, with overall accuracies of 96.84%, 92.75% and 95.36% for suturing, knot tying and needle passing, respectively.
Surgical skill assessment aims to objectively evaluate trainee surgeons and provide them with constructive feedback. Conventional methods require direct observation and assessment by surgical experts, which is both unscalable and subjective. The recent introduction of surgical robotic systems into the operating room has made it possible to automatically evaluate the expertise level of trainees on representative maneuvers by applying machine learning to motion analysis. The feature extraction technique plays a critical role in such an automated surgical skill assessment system.
We present a direct comparison of nine well-known feature extraction techniques for automated surgical skill evaluation: statistical features, principal component analysis, the discrete Fourier and discrete cosine transforms, a codebook approach, deep learning models and an auto-encoder. Towards near real-time evaluation, we also investigate the effect of the time interval length on classification accuracy and efficiency.
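As an illustration of how such windowed feature extraction could look in practice, the sketch below slices a multichannel kinematic recording into fixed-length time intervals and computes two of the compared representations, simple statistical features and PCA projections. The window and step sizes, channel layout and helper names are illustrative assumptions, not the configuration used in the paper.

```python
# Illustrative sketch (not the paper's exact pipeline): cut a multichannel
# kinematic recording into fixed-length windows and extract two of the
# compared feature types, simple statistical features and PCA projections.
import numpy as np
from sklearn.decomposition import PCA

def make_windows(signal, window_len, step):
    """Slice a (T, C) kinematic time series into (N, window_len, C) windows."""
    starts = range(0, signal.shape[0] - window_len + 1, step)
    return np.stack([signal[s:s + window_len] for s in starts])

def statistical_features(windows):
    """Per-channel mean, std, min and max of each window -> (N, 4 * C)."""
    return np.concatenate(
        [windows.mean(axis=1), windows.std(axis=1),
         windows.min(axis=1), windows.max(axis=1)], axis=1)

def pca_features(windows, n_components=20):
    """Flatten each window and project it onto the leading principal components."""
    flat = windows.reshape(windows.shape[0], -1)
    return PCA(n_components=n_components).fit_transform(flat)

# Random data standing in for one JIGSAWS trial (76 kinematic variables, 30 Hz);
# a 90-sample window corresponds to roughly a 3-second interval, and the step
# size is an arbitrary choice for this example.
trial = np.random.randn(3000, 76)
windows = make_windows(trial, window_len=90, step=30)
X_stat = statistical_features(windows)   # shape (N, 304)
X_pca = pca_features(windows)            # shape (N, 20)
```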
We validate the study on the benchmark JIGSAWS robotic surgical training dataset. Accuracies of 95.63%, 90.17% and 90.26% with principal component analysis and 96.84%, 92.75% and 95.36% with a deep convolutional neural network for suturing, knot tying and needle passing, respectively, highlight the effectiveness of these two techniques in extracting the most discriminative features across surgical skill levels.
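For the best-performing technique, a minimal sketch of a 1-D convolutional classifier over kinematic windows is shown below; the layer sizes and the three-way skill labels (novice, intermediate, expert) are assumptions made for illustration and do not reproduce the network architecture reported by the authors.

```python
# Minimal sketch of a 1-D CNN skill classifier over kinematic windows; the
# layer sizes and the three skill classes are illustrative assumptions and
# not the architecture reported in the paper.
import torch
import torch.nn as nn

class SkillCNN(nn.Module):
    def __init__(self, in_channels=76, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling keeps the window length flexible
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):             # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# Classifying one short window at a time (here 90 samples, about 3 s at 30 Hz)
# is what enables near real-time assessment while a trial is still running.
model = SkillCNN()
logits = model(torch.randn(8, 76, 90))   # -> (8, 3) skill-class scores
```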
This study contributes toward the development of an online, automated and efficient surgical skill assessment technique.
ISSN: 0169-2607, 1872-7565
DOI: 10.1016/j.cmpb.2019.105234