Comparison of Ensemble Techniques for Incremental Learning of New Concept Classes under Hostile Non-stationary Environments
Main Authors:
Format: Conference Proceedings
Language: English
Abstract: We have recently introduced Learn++, an incremental learning algorithm inspired by the multiple-classifier structure of AdaBoost. Both algorithms generate an ensemble of classifiers trained on bootstrapped replicates of the training data, and the classifiers are then combined through a voting process. Learn++, however, generates additional ensembles as new data become available, and uses a different distribution update rule to resample the data. While AdaBoost was originally designed to improve the performance of a weak classifier, whether it can also achieve incremental learning through its ensemble structure remains an open question. In this paper, we compare the incremental learning ability of AdaBoost.M1 and Learn++ under very hostile non-stationary learning environments, which may introduce new concept classes. We also compare the algorithms under several combination rules to determine which of the three key components (ensemble structure, resampling procedure, or combination rule) has the primary impact on incremental learning in non-stationary environments.
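For intuition, here is a minimal sketch of the incremental-ensemble idea the abstract describes: one small ensemble is trained per incoming data batch on weighted bootstrap resamples, and all classifiers across all batches vote with log(1/beta) weights. This is not the authors' exact algorithm; in particular, it uses an AdaBoost-style per-classifier distribution update, whereas Learn++ updates the distribution based on the composite ensemble hypothesis, and all function and parameter names below are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_incremental_ensemble(batches, classifiers_per_batch=5, seed=0):
    """Train one small ensemble per incoming data batch and keep every
    classifier; later batches may introduce previously unseen classes."""
    rng = np.random.default_rng(seed)
    ensemble, voting_weights = [], []
    for X, y in batches:
        n = len(X)
        dist = np.full(n, 1.0 / n)                 # resampling distribution
        for _ in range(classifiers_per_batch):
            idx = rng.choice(n, size=n, p=dist)    # weighted bootstrap replicate
            clf = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
            miss = clf.predict(X) != y
            err = np.average(miss, weights=dist)
            if err >= 0.5:                         # too weak; discard this round
                continue
            beta = max(err, 1e-12) / (1.0 - err)
            ensemble.append(clf)
            voting_weights.append(np.log(1.0 / beta))
            # AdaBoost-style update: down-weight correctly classified points
            # (Learn++ instead uses the composite ensemble's errors here)
            dist = dist * np.where(miss, 1.0, beta)
            dist = dist / dist.sum()
    return ensemble, voting_weights

def weighted_majority_vote(ensemble, voting_weights, X, classes):
    """Combine all classifiers from all batches by weighted majority voting."""
    votes = np.zeros((len(X), len(classes)))
    for clf, w in zip(ensemble, voting_weights):
        pred = clf.predict(X)
        for k, c in enumerate(classes):
            votes[:, k] += w * (pred == c)
    return np.asarray(classes)[votes.argmax(axis=1)]
```

Because every classifier is retained and voting weights are fixed at training time, knowledge from earlier batches is preserved while new batches, possibly containing new concept classes, simply extend the pool of voters.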
ISSN: 1062-922X, 2577-1655
DOI: 10.1109/ICSMC.2006.385071