FTBME: feature transferring based multi-model ensemble
Published in: Multimedia Tools and Applications, 2020-07, Vol. 79 (25-26), p. 18767-18799
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Multi-model ensemble is a fundamental technique of great practical value for many artificial intelligence applications. However, its use has been limited when it is applied to deep neural networks to build ensembles of deep neural networks. Because of the substantial time and computing resources required to train and integrate multiple deep neural networks, the engineering application field, where development time and computing resources are usually restricted, has not yet widely benefited from ensembles of deep neural networks. To alleviate this situation, we present a new multi-model ensemble approach for deep neural networks, entitled feature transferring based multi-model ensemble (FTBME). First, we propose a feature transferring based multi-model training strategy that affordably derives multiple extra models from a given, previously optimized deep neural network model. Second, to build better ensembles, we design a more effective random greedy ensemble selection strategy that filters out models that do not contribute positively to ensemble generalization. Finally, inspired by the idea of averaging points in parameter space, we propose to fuse the obtained models in weight space, which reduces the testing-stage cost of the ensemble to that of a single deep neural network model while retaining its generalization. These three advances constitute the resulting technique, FTBME. We conducted extensive experiments with deep neural networks, from lightweight to complex, on ImageNet, CIFAR-10 and CIFAR-100. The results show that, given a well-optimized deep neural network model that has reached its limit, FTBME obtains better generalization with minor extra training cost while keeping the testing cost of the ensemble at that of a single model. This promising property makes us believe that FTBME could broaden the use of ensembles of deep neural networks, alleviating the situation in which the engineering application field has not yet widely benefited from them.
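To make the selection and fusion steps described above concrete, the sketch below is a minimal illustration, not the authors' implementation: it assumes PyTorch models sharing one architecture and pre-computed validation softmax outputs, the helper names greedy_select and fuse_in_weight_space are invented for this example, and the plain greedy loop omits the randomization of the paper's random greedy selection.

```python
import copy
import numpy as np
import torch

def greedy_select(val_probs, val_labels):
    """Greedy forward selection: repeatedly add the candidate whose averaged
    softmax output most improves validation accuracy; candidates that never
    help are filtered out of the ensemble."""
    selected, best_acc = [], 0.0
    while True:
        best_i, best_new_acc = None, best_acc
        for i in range(len(val_probs)):
            if i in selected:
                continue
            ens = np.mean([val_probs[j] for j in selected + [i]], axis=0)
            acc = float((ens.argmax(axis=1) == val_labels).mean())
            if acc > best_new_acc:
                best_i, best_new_acc = i, acc
        if best_i is None:  # no remaining model improves the ensemble
            return selected
        selected.append(best_i)
        best_acc = best_new_acc

def fuse_in_weight_space(models):
    """Average the floating-point parameters of same-architecture models so
    that testing the ensemble costs no more than a single network.
    (Batch-norm running statistics may need re-estimation afterwards.)"""
    fused = copy.deepcopy(models[0])
    state = fused.state_dict()
    for key, value in state.items():
        if value.is_floating_point():
            state[key] = torch.stack(
                [m.state_dict()[key] for m in models]).mean(dim=0)
    fused.load_state_dict(state)
    return fused
```

A typical use would be `fused = fuse_in_weight_space([models[i] for i in greedy_select(val_probs, val_labels)])`, after which only the single fused network is evaluated at test time.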
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-020-08746-4