Design of experiments and response surface methodology to tune machine learning hyperparameters, with a random forest case-study


Bibliographic details
Published in: Expert systems with applications, 2018-11, Vol. 109, p. 195-205
Authors: Lujan-Moreno, Gustavo A., Howard, Phillip R., Rojas, Omar G., Montgomery, Douglas C.
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract:

Highlights:
• Design of experiments identified the significant hyperparameters of the random forest.
• Number of features and sampling with replacement were discarded in the screening.
• The interaction between class weights and cutoff had the largest effect on the response.
• Response surface methodology correctly tuned the random forest hyperparameters.
• The methodology achieved an outstanding cross-validated BACC of 0.81 versus the default of 0.64.

Most machine learning algorithms possess hyperparameters. For example, an artificial neural network requires the determination of the number of hidden layers, the number of nodes, and many other parameters related to the model fitting process. Despite this, there is still no clear consensus on how to tune them. The most popular methodology is an exhaustive grid search, which can be highly inefficient and sometimes infeasible. Another common approach is to change one hyperparameter at a time and measure its effect on the model's performance. However, this can also be inefficient and does not guarantee optimal results, since it ignores interactions between the hyperparameters. In this paper, we propose to use the Design of Experiments (DOE) methodology (factorial designs) for screening and Response Surface Methodology (RSM) to tune a machine learning algorithm's hyperparameters. An application of our methodology is presented with a detailed discussion of the results of a random forest case study using a publicly available dataset. Benefits include fewer training runs, better parameter selection, and a disciplined approach based on statistical theory.
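The two-stage procedure the abstract outlines — a two-level factorial design to screen out unimportant hyperparameters, followed by RSM to locate a good setting of the survivors — can be sketched as follows. This is a minimal illustration, not the authors' code: the factor assignments and all response values are synthetic placeholders, whereas in practice each response would be a cross-validated balanced accuracy from an actual random forest fit.

```python
import itertools
import numpy as np

# ---- Stage 1: 2^3 full factorial screening ----
# Coded levels (-1/+1) for three hypothetical hyperparameters:
# A = class weight, B = cutoff, C = number of trees.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
A, B, C = design.T

# Synthetic balanced-accuracy responses for the 8 runs (illustrative values;
# in practice each comes from a cross-validated model evaluation).
y = np.array([0.685, 0.695, 0.645, 0.655, 0.665, 0.675, 0.785, 0.795])

# Effects model: intercept, main effects, and the A*B interaction.
X = np.column_stack([np.ones(len(y)), A, B, C, A * B])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["mean", "A", "B", "C", "A*B"], coef):
    print(f"{name:>4}: {c:+.4f}")
# In this synthetic example the A*B interaction dominates and C is
# negligible, so C would be fixed at a convenient level before the RSM stage.

# ---- Stage 2: RSM on the two surviving factors ----
# Central composite design: 4 factorial points, 4 axial points at
# alpha = sqrt(2), and 2 center runs.
alpha = np.sqrt(2.0)
ccd = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],
                [0, 0], [0, 0]], dtype=float)
x1, x2 = ccd.T

# Synthetic quadratic response surface with a known maximum at (0.5, -0.2).
y2 = 0.81 - 0.05 * (x1 - 0.5) ** 2 - 0.03 * (x2 + 0.2) ** 2

# Fit the second-order model
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
X2 = np.column_stack([np.ones(len(y2)), x1, x2, x1**2, x2**2, x1 * x2])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(X2, y2, rcond=None)[0]

# Stationary point: set the gradient to zero and solve
# [2*b11, b12; b12, 2*b22] x = -[b1, b2].
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(H, -np.array([b1, b2]))
print("stationary point (coded units):", opt)
```

Because the factorial columns are orthogonal contrasts, the least-squares fit in stage 1 recovers each effect independently; stage 2 then fits a full quadratic on the reduced factor set and solves for the stationary point, which would finally be decoded back to natural hyperparameter units.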
ISSN:0957-4174
1873-6793
DOI:10.1016/j.eswa.2018.05.024