How Reliable are Single-Question Workplace-Based Assessments in Surgery?

Highlights

- Single-item WBA scales have low to moderate inter-rater reliability.
- WBA ratings should be collected from many raters and viewed in aggregate.
- Faculty should be educated about the intended use of WBAs.

Bibliographic Details
Published in: Journal of Surgical Education 2024-07, Vol. 81 (7), p. 967-972
Authors: Gates, Rebecca S., Krumm, Andrew E., Cate, Olle ten, Chen, Xilin, Marcotte, Kayla, Thelen, Angela E., Deal, Shanley B., Alseidi, Adnan, Swanson, David, George, Brian C.
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Workplace-based assessments (WBAs) play an important role in the assessment of surgical trainees. Because these assessment tools are used by many faculty, inter-rater reliability is important to consider when interpreting WBA data. Although there is evidence supporting the validity of many of these tools, inter-rater reliability evidence is lacking. This study aimed to evaluate the inter-rater reliability of multiple operative WBA tools used in general surgery residency. General surgery residents and teaching faculty were recorded during 6 general surgery operations. Nine faculty raters each reviewed the 6 videos and rated each resident on performance (using the Society for Improving Medical Professional Learning, or SIMPL, Performance Scale and the Operative Performance Rating System, or OPRS, Scale), entrustment (using the ten Cate Entrustment-Supervision Scale), and autonomy (using the Zwisch Scale). Ratings were evaluated for inter-rater reliability using percent agreement and intraclass correlations. Absolute intraclass correlation coefficients for each scale ranged from 0.33 to 0.47; all single-item WBA scales had low to moderate inter-rater reliability. While rater training may improve inter-rater reliability for single observations, many observations by many raters are needed to reliably assess trainee performance in the workplace.
ISSN: 1931-7204, 1878-7452
DOI: 10.1016/j.jsurg.2024.03.015