Automated test design using swarm and evolutionary intelligence algorithms

Bibliographic Details
Published in: Expert Systems, 2022-05, Vol. 39 (4), p. n/a
Main authors: Aktaş, Muhammet; Yetgin, Zeki; Kılıç, Fatih; Sünbül, Önder
Format: Article
Language: English
Online access: Full text
Description
Abstract: The world's increasing dependence on computer‐assisted education systems has raised significant challenges for student assessment methods, such as automated test design. Exam questions should test students' potential from various aspects, such as their intellectual and cognitive levels, which can be defined as attributes of the questions used to assess student knowledge. Test design is challenging when various question attributes, such as category, learning outcomes, and difficulty, are considered together with exam constraints, such as exam difficulty and duration. In this paper, four contributions are provided to overcome test design challenges for student assessment. First, a tool is developed to generate a synthetic question pool. Second, an objective function is designed based on the considered attributes. Third, popular swarm and evolutionary optimization methods, namely particle swarm optimization, the genetic algorithm, artificial bee colony, and the differential search algorithm, are comparatively studied with novel methodologies applied to them. Finally, as the state-of-the-art methods, the artificial bee colony and differential search algorithms are further modified to improve the test-design solution. To evaluate the proposed algorithms, a dataset of 1000 questions is built with the proposed question attributes of the test design. Algorithms are evaluated in terms of their success in both minimizing the objective function and running time. Additionally, Friedman's test and the Wilcoxon rank‐sum test are applied to statistically compare the algorithms' performances. The results show that the improved artificial bee colony and the improved differential search provide better results than the others in terms of optimization error and running time.
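To make the abstract's setup concrete, the following is a minimal sketch of what such an objective function and a baseline optimizer might look like. The attribute names (`difficulty`, `duration`), the penalty weights, the pool size of 100, and the use of naive random search are all illustrative assumptions; the paper's actual objective formulation, 1000-question pool, and swarm/evolutionary methods are not reproduced here.

```python
import random

def objective(selected, questions, target_difficulty, target_duration,
              w_difficulty=1.0, w_duration=1.0):
    """Penalty for a candidate test deviating from the exam constraints.

    Sums weighted squared deviations of the selected questions' mean
    difficulty and total duration from their targets. (Hypothetical
    formulation; the paper's objective also covers attributes such as
    category and learning outcomes.)
    """
    mean_difficulty = sum(questions[i]["difficulty"] for i in selected) / len(selected)
    total_duration = sum(questions[i]["duration"] for i in selected)
    return (w_difficulty * (mean_difficulty - target_difficulty) ** 2
            + w_duration * ((total_duration - target_duration) / target_duration) ** 2)

# Synthetic question pool, in the spirit of the paper's pool-generation tool
# (100 questions here instead of 1000, with two illustrative attributes).
random.seed(42)
pool = [{"difficulty": random.random(), "duration": random.randint(1, 10)}
        for _ in range(100)]

# Naive random-search baseline: draw many candidate 20-question tests and
# keep the one with the lowest objective value. A swarm or evolutionary
# method would replace this loop with guided updates of a population.
best, best_cost = None, float("inf")
for _ in range(2000):
    candidate = random.sample(range(len(pool)), 20)
    cost = objective(candidate, pool, target_difficulty=0.5, target_duration=60)
    if cost < best_cost:
        best, best_cost = candidate, cost
```

The same `objective` could be minimized by any of the population-based methods the paper compares; only the search strategy over index subsets changes.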
ISSN: 0266-4720
1468-0394
DOI: 10.1111/exsy.12918