Adaptation of the Fundamentals of Laparoscopic Surgery box for endoscopic simulation: performance evaluation of the first 100 participants
Published in: Surgical Endoscopy 2019-10, Vol.33 (10), p.3444-3450
Main authors: , , , , , , , , , , , , ,
Format: Article
Language: English
Online access: Full text
Summary:
Background
The paucity of readily accessible, cost-effective models for the simulation, practice, and evaluation of endoscopic skills presents an ongoing barrier to resident training. We have previously described a system for converting the Fundamentals of Laparoscopic Surgery (FLS) box for flexible endoscopic simulation. Six endoscopic tasks, focusing on scope manipulation and other clinically relevant endoscopic skills, are performed within a 5-min time limit per task. This study describes our experience and validation results with the first 100 participants.
Methods
A total of 100 participants were evaluated on the simulator. Thirty individuals were classified as experts (having performed over 200 endoscopic procedures), and 70 were classified as trainees (39 of whom reported no prior endoscopy experience). Of the 100 participants, 55 were retested on the simulator within 4 months. These 55 individuals were also evaluated using the “Global Assessment of Gastrointestinal Endoscopic Skills” (GAGES).
T-tests and Pearson correlations were used where appropriate; p values less than 0.05 were considered significant.
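As an illustration of the analysis described above, the minimal sketch below shows how such comparisons are commonly computed with SciPy: an independent-samples t-test comparing expert and trainee completion times, and a Pearson correlation between completion times and GAGES scores. All values are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch only: hypothetical data, not the study's dataset.
from scipy import stats

# Hypothetical task completion times in seconds (5-min cap = 300 s).
expert_times = [95, 110, 102, 88, 120, 99]
trainee_times = [210, 300, 245, 190, 280, 260, 300]

# Independent-samples t-test (Welch's variant, not assuming equal variances).
t_stat, p_value = stats.ttest_ind(expert_times, trainee_times, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05

# Hypothetical paired completion times and GAGES scores for one task.
times = [95, 140, 180, 210, 260, 300]
gages_scores = [20, 18, 15, 14, 11, 9]

# Pearson correlation: faster times pairing with higher GAGES scores
# yields a negative coefficient, the relationship reported in the abstract.
r, p_corr = stats.pearsonr(times, gages_scores)
print(f"r = {r:.2f}, p = {p_corr:.4f}")
```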
Results
Experts completed all six tasks significantly faster than trainees. For the 55 participants who were retested on the simulator, all tasks demonstrated evidence of test–retest reliability for both experts and trainees who did not practice between tests. Moderate correlations between lower completion times and higher GAGES scores were observed for all tasks except the clipping task.
Conclusions
The results from the first 100 participants provide evidence for the simulator’s validity. Based on task completion times, we found that experts perform significantly better than trainees. Additionally, preliminary data demonstrate evidence of test–retest reliability, as well as GAGES score correlation. Additional studies to determine and validate a scoring system for this simulator are ongoing.
Graphical abstract
ISSN: 0930-2794; 1432-2218
DOI: 10.1007/s00464-018-06617-6