Competency assessment tool for laparoscopic suturing: development and reliability evaluation

Bibliographic Details
Published in: Surgical Endoscopy, 2020-07, Vol. 34 (7), p. 2947-2953
Main Authors: IJgosse, Wouter M., Leijte, Erik, Ganni, Sandeep, Luursema, Jan-Maarten, Francis, Nader K., Jakimowicz, Jack J., Botden, Sanne M. B. I.
Format: Article
Language: English
Subjects:
Online Access: Full Text
Description
Abstract: Background Laparoscopic suturing can be technically challenging and requires extensive training to achieve competency. To date, no specific and objective assessment method for laparoscopic suturing and knot tying is available that can guide training and monitor performance in these complex surgical skills. In this study we aimed to develop a laparoscopic suturing competency assessment tool (LS-CAT) and assess its inter-observer reliability. Methods We developed a bespoke CAT for laparoscopic suturing through a structured, mixed-methodology approach, overseen by a steering committee with experience in developing surgical assessment tools. A wide Delphi consultation with over twelve experts in laparoscopic surgery guided the development stages of the tool. Subsequently, subjects with different levels of laparoscopic expertise were included to evaluate the tool, using a simulated laparoscopic suturing task that involved placing two surgical knots. A research assistant video recorded and anonymised each performance. Two blinded expert surgeons assessed the anonymised videos using the developed LS-CAT. The LS-CAT scores of the two experts were compared to assess inter-observer reliability. Lastly, we compared the subjects' LS-CAT performance scores at the beginning and end of their learning curve. Results This study evaluated a novel LS-CAT performance tool comprising four tasks. Thirty-six complete videos were analysed and evaluated with the LS-CAT, and the scores demonstrated excellent inter-observer reliability. Cohen's Kappa analysis revealed good to excellent levels of agreement for almost all tasks of both instrument handling and tissue handling (0.87; 0.77; 0.75; 0.86; 0.85, all with p
ISSN:0930-2794
1432-2218
DOI:10.1007/s00464-019-07077-2