Variation in passing standards for graduation‐level knowledge items at UK medical schools

Bibliographic Details
Published in: Medical Education, 2017-06, Vol. 51 (6), p. 612-620
Authors: Taylor, Celia A, Gurnell, Mark, Melville, Colin R, Kluth, David C, Johnson, Neil, Wass, Val
Format: Article
Language: English
Online access: Full text
Abstract:
Objectives: Given the absence of a common passing standard for students at UK medical schools, this paper compares independently set standards for common ‘one from five’ single‐best‐answer (multiple‐choice) items used in graduation‐level applied knowledge examinations and explores potential reasons for any differences.
Methods: A repeated cross‐sectional study was conducted. Participating schools were sent a common set of graduation‐level items (55 in 2013–2014; 60 in 2014–2015). Items were selected against a blueprint and subjected to a quality review process. Each school employed its own standard‐setting process for the common items. The primary outcome was the passing standard for the common items, set by each medical school using the Angoff or Ebel method.
Results: Of 31 invited medical schools, 22 (71%) participated in 2013–2014 and 30 (97%) in 2014–2015. Schools used a mean of 49 and 53 common items in 2013–2014 and 2014–2015, respectively, representing around one‐third of the items in the examinations in which they were embedded. Data from 19 (61%) and 26 (84%) schools, respectively, met the inclusion criteria for comparison of standards. There were statistically significant differences in the passing standards set by schools in both years (effect sizes (f²): 0.041 in 2013–2014 and 0.218 in 2014–2015; both p < 0.001). The interquartile range of standards was 5.7 percentage points in 2013–2014 and 6.5 percentage points in 2014–2015. There was a positive correlation between the relative standards set by schools in the two years (Pearson's r = 0.57, n = 18, p = 0.014). Time allowed per item, method of standard setting and timing of the examination in the curriculum did not have a statistically significant impact on standards.
Conclusions: Independently set standards for common single‐best‐answer items used in graduation‐level examinations vary across UK medical schools. Further work examining standard‐setting processes in more detail is needed to explain this variability and to develop methods to reduce it.
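The Angoff and Ebel procedures named under Methods both derive a cut score from expert judgements about a minimally competent candidate. As a minimal sketch of the simpler Angoff case, assuming a plain (unmodified) exercise with hypothetical judge ratings — the study's actual panel data are not reproduced here — the passing standard is the grand mean of the judges' item‐level probability estimates:

```python
# Minimal sketch of an Angoff standard-setting calculation.
# Each judge estimates the probability that a minimally competent
# candidate answers each item correctly; the cut score is the mean
# of those estimates. All numbers below are hypothetical.

import statistics

# ratings[judge][item]: hypothetical probability estimates for 5 items
ratings = [
    [0.60, 0.75, 0.55, 0.80, 0.65],  # judge 1
    [0.55, 0.70, 0.50, 0.85, 0.60],  # judge 2
    [0.65, 0.80, 0.60, 0.75, 0.70],  # judge 3
]

# Average each judge's estimates over items, then average across judges.
judge_means = [statistics.mean(judge) for judge in ratings]
cut_score = statistics.mean(judge_means)  # proportion correct needed to pass

print(f"Angoff passing standard: {cut_score:.1%}")  # prints 67.0%
```

In practice, schools may modify this basic calculation (for example, with reality checks against item performance data or adjustments by a standard error), and such local variations in procedure are one plausible source of the between‐school differences the study reports.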
ISSN: 0308-0110 (print); 1365-2923 (online)
DOI: 10.1111/medu.13240