Learning Automata Based Incremental Learning Method for Deep Neural Networks
Published in: IEEE Access, 2019, Vol. 7, pp. 41164-41171
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Deep learning methods have achieved excellent performance on many large-scale datasets for machine learning tasks such as visual recognition and natural language processing. Most recent progress in deep learning has relied on supervised learning, in which the entire dataset for a specific task must be prepared before training. In real-world scenarios, however, the labeled data for the assigned classes are typically gathered incrementally over time, since collecting and annotating training data manually is cumbersome. This suggests sequentially training on a series of datasets with gradually added training samples belonging to new classes, a setting called incremental learning. In this paper, we propose an effective incremental training method for deep neural networks based on learning automata. The main idea is to train a deep model with dynamic connections that can be either "activated" or "deactivated" on the different datasets of the incremental training stages. The proposed method mitigates the destruction of old features while new features are learned for the newly added training samples, which leads to better performance in the incremental learning stages. Experiments on MNIST and CIFAR-100 demonstrate that our method can be applied to deep neural models over a long sequence of incremental training stages and achieves superior performance compared to training from scratch and to fine-tuning.
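The abstract's core idea, gating individual connections so that those encoding old features stay untouched while other connections adapt to the newly added classes, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the names `GatedNet`, `protect_old_connections`, and `train_stage`, the magnitude-based protection rule, and the synthetic data are hypothetical stand-ins. The paper's actual method selects which connections are "activated" or "deactivated" via learning automata, which is not reproduced here.

```python
# Illustrative sketch only: per-connection gating across incremental stages.
# The protection rule (keep the largest-magnitude weights fixed) is an
# assumption standing in for the paper's learning-automata mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

def protect_old_connections(layer, keep_ratio=0.5):
    """Between stages, 'deactivate' updates to the largest-magnitude
    connections so the old features they encode are not destroyed
    when training continues on new classes."""
    with torch.no_grad():
        flat = layer.weight.abs().flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        # k-th largest absolute weight serves as the protection threshold.
        threshold = flat.kthvalue(flat.numel() - k + 1).values
        protected = layer.weight.abs() >= threshold
    # Zero the gradient of protected connections on every backward pass.
    layer.weight.register_hook(lambda g: g * (~protected).float())

def train_stage(model, loader, epochs=1, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

if __name__ == "__main__":
    model = GatedNet()
    # Stage 1: synthetic stand-in for the first batch of classes (0-4).
    x1, y1 = torch.randn(512, 784), torch.randint(0, 5, (512,))
    train_stage(model, [(x1, y1)])
    # Between stages: freeze half of the learned connections.
    protect_old_connections(model.fc1, keep_ratio=0.5)
    # Stage 2: new classes (5-9) arrive; only unprotected connections adapt.
    x2, y2 = torch.randn(512, 784), torch.randint(5, 10, (512,))
    train_stage(model, [(x2, y2)])
```

With this kind of gating, the optimizer still updates the unprotected weights for the new classes while the gradient hook keeps the protected ones fixed, which is one simple way to realize the "activated"/"deactivated" behavior the abstract describes.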
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2907645