Sustainable assessment for large science classes: non-multiple choice, randomised assignments through a Learning Management System
Saved in:
Published in: Journal of learning design, 2011-01, Vol. 4 (3), p. 50-62
Author:
Format: Article
Language: English
Subjects:
Online access: Full text

Abstract: This article reports on the development of a tool that generates randomised, non-multiple choice assessment within the BlackBoard Learning Management System (LMS) interface. The aim of the project was to use the full capability of the LMS to develop non-multiple choice, randomised assignments in first year chemistry for a new unit that was run for the first time in 2008. Feedback from students was used to improve the assignments in subsequent semesters and the details are presented in the article. An accepted weakness of multiple-choice assessment is that it cannot elicit learning outcomes from the upper levels of Biggs' SOLO taxonomy. However, written assessment items require extensive resources for marking, and are susceptible to copying as well as marking inconsistencies in large classes. This project developed an assessment tool that is valid, reliable and sustainable and that addresses the issues identified above. The tool provides each student with an assignment assessing the same learning outcomes, but containing different questions, with responses in the form of words or numbers. Practice questions are available, enabling students to obtain feedback on their approach before submitting their assignment. Thus, the tool incorporates automatic marking (essential for large classes), randomised tasks for each student (reducing copying), the capacity to give credit for working (feedback on the application of theory), and the capacity to target higher order learning outcomes by requiring students to derive their answers rather than choose them. Results and feedback from students are presented, along with technical implementation details. [Author abstract, ed]

ISSN: 1832-8342
DOI: 10.5204/jld.v4i3.80
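
The abstract describes the general mechanics of the tool: each student receives a randomised variant of the same question, responses are entered as numbers or words, and marking is automatic with credit available for working. The article record above does not include the implementation itself, so the following is only a minimal illustrative sketch of that idea in Python, not the authors' BlackBoard tool; the dilution question, the student-ID seeding scheme, the parameter ranges, and the 1% marking tolerance are all assumptions made for the example.

```python
import random

# Illustrative sketch only (not the authors' BlackBoard tool): generate a
# randomised numeric question per student and mark a submitted answer
# automatically within a tolerance. All names, ranges, and tolerances here
# are assumptions for the example.

def make_question(student_id: str) -> dict:
    """Build one parameterised dilution question; the seed ties the variant to the student."""
    rng = random.Random(student_id)             # same student always sees the same numbers
    c1 = round(rng.uniform(0.10, 2.00), 2)      # stock concentration (mol/L)
    v1 = rng.choice([10.0, 20.0, 25.0, 50.0])   # aliquot volume (mL)
    v2 = rng.choice([100.0, 250.0, 500.0])      # final volume (mL)
    return {
        "text": (f"A {v1:.1f} mL aliquot of a {c1:.2f} mol/L stock solution is "
                 f"diluted to {v2:.1f} mL. What is the final concentration in mol/L?"),
        "answer": c1 * v1 / v2,                 # C1 V1 = C2 V2
    }

def mark(submitted: float, expected: float, rel_tol: float = 0.01) -> bool:
    """Automatic marking: accept any response within a 1% relative tolerance."""
    return abs(submitted - expected) <= rel_tol * abs(expected)

if __name__ == "__main__":
    q = make_question("s1234567")
    print(q["text"])
    print("Expected answer:", round(q["answer"], 4))
    print("Marking 0.05 ->", mark(0.05, q["answer"]))
```

Seeding the generator with the student identifier keeps each student's variant stable across attempts, which is one simple way to combine per-student randomisation (reducing copying) with consistent practice questions and feedback of the kind the abstract describes.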