Using Case-Based Reasoning to Improve the Quality of Feedback Provided by Automated Grading Systems


Full Description

Bibliographic Details
Published in: International Association for Development of the Information Society, 2014
Main Authors: Kyrilov, Angelo; Noelle, David C.
Format: Report
Language: English
Description
Summary: Information technology is now ubiquitous in higher education institutions worldwide. More than 85% of American universities use e-learning systems to supplement traditional classroom activities, while some have started offering Massive Open Online Courses (MOOCs), which are completely online. An obvious benefit of these online tools is their ability to automatically grade exercises submitted by students and provide immediate feedback. Most of these systems, however, provide only binary ("Correct/Incorrect") feedback to students. While such feedback is useful, some students may need additional guidance in order to successfully overcome obstacles to understanding. We propose using a Case-Based Reasoning (CBR) approach to improve the quality of feedback Computer Science students receive on their programming exercises. CBR is a machine learning technique that attempts to solve problems based on previous experiences (cases). The basic idea is that every time the instructor provides feedback to a student on a particular exercise, the information is stored in a database system as a past case. When students experience similar problems in the future, knowledge contained in past cases is used to guide the students to a solution. While the system will provide detailed feedback automatically, this feedback will have been previously crafted by human instructors, leveraging their pedagogical expertise. We describe a system of this kind, which is currently under development, and we report results from a preliminary experiment. [For full proceedings, see ED557189.]
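The retain-and-retrieve cycle described in the summary can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the `Case` fields, the token-set error signature, and the Jaccard similarity used for retrieval are all assumptions chosen to make the idea concrete.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One past instance of instructor feedback on a failing submission.
    The error signature is a hypothetical token set summarizing the failure."""
    exercise_id: str
    error_signature: frozenset
    feedback: str

class CaseBase:
    """Minimal CBR store: retain instructor-authored feedback as cases,
    then retrieve the most similar past case for a new failing submission."""

    def __init__(self):
        self.cases = []

    def retain(self, case: Case):
        # Every piece of instructor feedback becomes a stored past case.
        self.cases.append(case)

    def retrieve(self, exercise_id: str, error_tokens: set):
        # Restrict to cases for the same exercise, then rank candidates by
        # Jaccard similarity between error-token sets (an assumed metric).
        candidates = [c for c in self.cases if c.exercise_id == exercise_id]
        if not candidates:
            return None

        def jaccard(a, b):
            return len(a & b) / len(a | b) if (a | b) else 0.0

        return max(candidates,
                   key=lambda c: jaccard(set(c.error_signature), error_tokens))

# Example: two cases retained for one exercise, then a new failure retrieved.
cb = CaseBase()
cb.retain(Case("ex1", frozenset({"off-by-one", "loop"}),
               "Check your loop bounds carefully."))
cb.retain(Case("ex1", frozenset({"uninitialized", "list"}),
               "Initialize the list before appending to it."))
best = cb.retrieve("ex1", {"loop", "index"})
print(best.feedback)  # → Check your loop bounds carefully.
```

A production system would need a richer similarity measure over student code (e.g., comparing failing test cases or abstract syntax trees), but the retain/retrieve loop above is the core of the approach: detailed feedback reaches students automatically, yet every message was originally written by a human instructor.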