Evaluation of academic integrity of online open book assessments implemented in an undergraduate medical radiation science course during COVID-19 pandemic

Bibliographic details
Published in: Journal of Medical Imaging and Radiation Sciences, 2020-12, Vol. 51 (4), p. 610-616
Main author: Ng, Curtise Kin Cheung
Format: Article
Language: English
Online access: Full text
Description
Abstract: Online open book assessment has been a common alternative to a traditional invigilated test or examination during the COVID-19 pandemic. However, its unsupervised nature increases the ease of cheating, which is an academic integrity concern. The purpose of this study was to evaluate the integrity of two online open book assessments with different formats ((1) tightly time-restricted: 50 min for the mid-semester assessment; and (2) take-home: any 4 h within a 24-h window for the end-of-semester assessment) implemented in a radiologic pathology unit of a Bachelor of Science (Medical Radiation Science) course during the pandemic. This was a retrospective study involving a review and analysis of existing information related to the integrity of the two radiologic pathology assessments. Three integrity evaluation approaches were employed. The first approach was to review all Turnitin plagiarism detection software reports using a 'seven-words-in-a-row' criterion to identify any potential collusion. The second approach was to search for highly irrelevant assessment answers during marking to detect other types of cheating. Examples of highly irrelevant answers included those not addressing the question requirements and those stating patients' clinical information not drawn from the given patient histories. The third approach was a statistical analysis of assessment scores through descriptive and inferential statistics to identify any abnormal patterns that might suggest cheating had occurred; an example of an abnormal pattern would be unusually high assessment scores. The descriptive statistics used were minimum, maximum, range, first quartile, median, third quartile, interquartile range, mean, standard deviation, and fail and full-mark rates. A t-test was employed to compare mean scores between the two assessments in this year (2020), between the two assessments in the previous year (2019), between the 2019 and 2020 mid-semester assessments, and between the 2019 and 2020 end-of-semester assessments. A p-value of less than 0.05 was considered statistically significant. No evidence of cheating was found in any of the Turnitin reports or assessment answers. The mean scores of the end-of-semester assessments in 2019 (88.2%) and 2020 (90.9%) were similar (p = 0.098). However, the mean score of the online open book mid-semester assessment in 2020 (62.8%) was statistically significantly lower than that of the traditional invigilated mid-semester assessment in 2019 (71.8%) (p < 0.05).
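
As an illustration of how the collusion check and the score analysis described in the abstract could be operationalised, the sketch below flags shared seven-word sequences between two answers and computes the named descriptive statistics plus a t-test on two sets of percentage scores. It is a minimal sketch only: the function names, the 50% pass mark used for the fail rate, and the choice of an independent-samples t-test from SciPy are assumptions made for illustration, not details confirmed by the paper.

import numpy as np
from scipy import stats

def seven_word_runs(text):
    # All seven-words-in-a-row sequences in an answer, lower-cased (illustrative criterion).
    words = text.lower().split()
    return {tuple(words[i:i + 7]) for i in range(max(len(words) - 6, 0))}

def shared_runs(answer_a, answer_b):
    # Seven-word sequences appearing in both answers: candidates for collusion review.
    return seven_word_runs(answer_a) & seven_word_runs(answer_b)

def describe(scores):
    # Descriptive statistics named in the abstract (scores given as percentages).
    s = np.asarray(scores, dtype=float)
    q1, median, q3 = np.percentile(s, [25, 50, 75])
    return {
        "min": s.min(), "max": s.max(), "range": s.max() - s.min(),
        "Q1": q1, "median": median, "Q3": q3, "IQR": q3 - q1,
        "mean": s.mean(), "SD": s.std(ddof=1),
        "fail_rate": float(np.mean(s < 50)),        # assumed 50% pass mark
        "full_mark_rate": float(np.mean(s == 100)),
    }

def compare_means(scores_a, scores_b, alpha=0.05):
    # Independent-samples t-test on mean scores; alpha = 0.05 as stated in the abstract.
    t, p = stats.ttest_ind(scores_a, scores_b)
    return {"t": float(t), "p": float(p), "significant": p < alpha}

In use, shared_runs would be applied pairwise to students' free-text answers, and compare_means to the 2019 and 2020 score lists for each assessment type.
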
ISSN: 1939-8654, 1876-7982
DOI: 10.1016/j.jmir.2020.09.009