Leveling the field: Development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts

Bibliographic Details
Published in: AEM Education and Training 2021-10, Vol. 5(4), p. e10654
Authors: Jordan, Jaime; Hopson, Laura R.; Molins, Caroline; Bentley, Suzanne K.; Deiorio, Nicole M.; Santen, Sally A.; Yarris, Lalena M.; Coates, Wendy C.; Gisondi, Michael A.
Format: Article
Language: English
Abstract:
Background: Research abstracts are submitted for presentation at scientific conferences; however, criteria for judging abstracts are variable. We sought to develop two rigorous scoring rubrics for education research submissions reporting (1) quantitative data and (2) qualitative data and then to collect validity evidence to support score interpretation.

Methods: We used a modified Delphi method to achieve expert consensus on scoring rubric items and thereby optimize content validity. Eight education research experts participated in two separate modified Delphi processes, one to generate quantitative research items and one for qualitative research items. Modifications were made between rounds based on item scores and expert feedback. Homogeneity of ratings in the Delphi process was calculated using Cronbach's alpha, with increasing homogeneity considered an indication of consensus. Rubrics were piloted by scoring abstracts from 22 quantitative publications from the AEM Education and Training "Critical Appraisal of Emergency Medicine Education Research" (11 highlighted for excellent methodology and 11 that were not) and 10 qualitative publications (five highlighted for excellent methodology and five that were not). Intraclass correlation coefficient (ICC) estimates of reliability were calculated.

Results: Each rubric required three rounds of a modified Delphi process. The resulting quantitative rubric contained nine items: quality of objectives, appropriateness of methods, outcomes, data analysis, generalizability, importance to medical education, innovation, quality of writing, and strength of conclusions (Cronbach's α for the third round = 0.922; ICC for total scores during piloting = 0.893). The resulting qualitative rubric contained seven items: quality of study aims, general methods, data collection, sampling, data analysis, writing quality, and strength of conclusions (Cronbach's α for the third round = 0.913; ICC for total scores during piloting = 0.788).

Conclusion: We developed scoring rubrics to assess quality in quantitative and qualitative medical education research abstracts to aid in selection for presentation at scientific meetings. Our tools demonstrated high reliability.
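
The two reliability statistics reported above (Cronbach's alpha for homogeneity of Delphi ratings; the ICC for pilot scoring of total rubric scores) follow standard formulas. The abstract does not specify the ICC model or the exact data layout, so the following Python sketch is illustrative only: it assumes raw rater-by-target score matrices, a two-way random-effects, absolute-agreement, average-measures ICC(2,k), and entirely hypothetical example ratings.

import numpy as np

def cronbach_alpha(scores):
    # Cronbach's alpha for an (n_observations, k_items) score matrix,
    # e.g. expert ratings across rubric items in one Delphi round.
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def icc_2k(scores):
    # ICC(2,k): two-way random effects, absolute agreement, average
    # measures (Shrout & Fleiss). scores: (n_abstracts, k_raters)
    # matrix of total rubric scores. Assumed model, not stated in paper.
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # abstracts
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # raters
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# Hypothetical data: six abstracts, each given a total score by three raters.
ratings = np.array([[34, 36, 35],
                    [28, 27, 30],
                    [41, 40, 42],
                    [22, 25, 24],
                    [38, 37, 39],
                    [30, 31, 29]], dtype=float)
print("ICC(2,k) =", round(icc_2k(ratings), 3))
print("alpha    =", round(cronbach_alpha(ratings), 3))  # raters treated as items

Equivalent values can be obtained from standard statistical packages; the sketch is intended only to make the two reported coefficients concrete.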
ISSN: 2472-5390
DOI: 10.1002/aet2.10654