Evaluating Predictive Models of Student Success: Closing the Methodological Gap

Bibliographic Details
Published in: Journal of Learning Analytics, 2018-01, Vol. 5 (2), p. 105-125
Main authors: Gardner, Joshua Patrick; Brooks, Christopher
Format: Article
Language: English
Description
Abstract: Model evaluation – the process of making inferences about the performance of predictive models – is a critical component of predictive modeling research in learning analytics. In this work, we present an overview of the state of the practice of model evaluation in learning analytics, which overwhelmingly uses only naïve methods for model evaluation or, less commonly, statistical tests that are not appropriate for predictive model evaluation. We then provide an overview of more appropriate methods for model evaluation, presenting both frequentist and a preferred Bayesian method. Finally, we apply three methods – the naïve average commonly used in learning analytics, the frequentist null hypothesis significance test (NHST), and hierarchical Bayesian model evaluation – to a large set of MOOC data. We compare 96 different predictive modeling techniques, including different feature sets, statistical modeling algorithms, and tuning hyperparameters for each, using this case study to demonstrate the different experimental conclusions these evaluation techniques provide.
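The contrast the abstract draws can be illustrated with a minimal sketch (not the authors' code): given per-fold AUC scores for two models, a naïve comparison simply ranks their averages, while a frequentist NHST (here a paired t-test via SciPy) asks whether the observed difference is distinguishable from chance. All scores are hypothetical, and the hierarchical Bayesian analysis the authors prefer is not shown.

# Minimal sketch contrasting a naive-average comparison with a frequentist NHST
# over per-fold results for two predictive models (illustrative values only).
import numpy as np
from scipy import stats

auc_model_a = np.array([0.71, 0.74, 0.69, 0.73, 0.72])  # per-fold AUC, model A
auc_model_b = np.array([0.70, 0.72, 0.71, 0.70, 0.73])  # per-fold AUC, model B

# Naive comparison: whichever mean is higher "wins", with no account of uncertainty.
print("mean AUC, model A:", auc_model_a.mean())
print("mean AUC, model B:", auc_model_b.mean())

# Frequentist NHST: a paired t-test over folds asks whether the observed
# difference could plausibly have arisen by chance.
t_stat, p_value = stats.ttest_rel(auc_model_a, auc_model_b)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")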
ISSN: 1929-7750
DOI: 10.18608/jla.2018.52.7