Evaluation of a new e‐learning resource for calibrating OSCE examiners on the use of rating scales


Bibliographic details
Published in: European journal of dental education 2020-05, Vol. 24 (2), pp. 276-281
Authors: Moreno‐López, Rosa; Sinclair, Serena
Format: Article
Language: English
Online access: Full text
Description
Abstract:
Introduction: Rating scales have been described as better suited to assessing behaviours such as professionalism during Objective Structured Clinical Examinations (OSCEs). However, there is an increased need to train and calibrate staff in their use prior to student assessment.
Material and methods: An online e‐learning package was developed and made available to all examiners at the Institute of Dentistry at the University of Aberdeen. The package included videos of three OSCE stations (medical emergency, rubber dam placement and handling a complaint), each recorded in two different scenarios (excellent and unsatisfactory candidate). The videos were recorded to meet a pre‐defined marking score. The examiners were required to mark the six videos using pre‐set marking criteria (checklist and rating scales). The rating scales covered professionalism, general clinical ability and/or communication skills. For each video, examiners were given four possible options (unsatisfactory, borderline, satisfactory or excellent), along with a description of each domain. They were also required to complete a questionnaire to gather their views on the use of this e‐learning environment.
Results: Fifteen examiners completed the task. The total scores given were very similar to the expected scores for the medical emergency and complaint stations; however, this was not the case for the rubber dam station (P‐values .017 and .036). This could be attributed to some aspects of the rubber dam placement being unclear, as commented on in the examiners' questionnaires. There was consistency in the selection of marks on the rating scales (inter‐examiner correlation ranged between 0.916 and 0.979).
Conclusion: Further studies are required in the field of e‐learning training to calibrate examiners for practical assessment; however, this study provides preliminary evidence supporting the use of videos as part of an online training package to calibrate OSCE examiners in the use of rating scales.
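The reported inter‐examiner agreement (correlations between 0.916 and 0.979) can be illustrated with a minimal sketch. The abstract does not state which correlation statistic was used, so a plain Pearson coefficient over hypothetical checklist totals from two examiners is assumed here; the scores below are illustrative only, not data from the study.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical checklist totals from two examiners marking the same six videos
# (three stations, each with an excellent and an unsatisfactory candidate).
examiner_a = [18, 9, 16, 8, 17, 10]
examiner_b = [19, 8, 16, 9, 17, 11]

r = pearson(examiner_a, examiner_b)
```

A coefficient close to 1 indicates that the two examiners rank and score the candidates very similarly, which is the kind of consistency the study reports.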
ISSN: 1396-5883, 1600-0579
DOI: 10.1111/eje.12495