Deep Cascade AdaBoost with Unsupervised Clustering in Autonomous Vehicles
Published in: Electronics (Basel) 2023-01, Vol.12 (1), p.44
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Abstract: In recent years, deep learning has achieved excellent performance in a growing number of application fields. With the help of high computation and large-scale datasets, deep learning models with huge numbers of parameters continually surpass traditional algorithms. The AdaBoost algorithm, a classical machine learning method, has a compact model and performs well on small datasets. However, quickly and efficiently selecting the optimal classification feature template from a large pool of features in an arbitrary scene remains challenging. This is especially true for autonomous vehicles, where images taken by onboard cameras contain all kinds of road targets and are therefore rich in diverse features. In this paper, we propose a novel Deep Cascade AdaBoost model, which effectively combines a deep-learning-based unsupervised clustering algorithm with the traditional AdaBoost algorithm. First, we use unsupervised clustering to classify the sample data automatically. By specifying positive and negative samples, we obtain classification subsets with small intra-class and large inter-class errors. Next, we design a training framework for Cascade-AdaBoost based on clustering and mathematically demonstrate that it has better detection performance than the traditional Cascade-AdaBoost framework. Finally, experiments on the KITTI dataset demonstrate that our model outperforms the traditional Cascade-AdaBoost algorithm in both accuracy and time: detection time was shortened by 30%, and the false detection rate was reduced by 20%. Meanwhile, the training time of our model is significantly shorter than that of the traditional Cascade-AdaBoost algorithm.
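The abstract describes the method only at a high level. As a rough, hedged illustration of the cluster-then-boost idea (not the authors' actual pipeline, which clusters with a deep-learning model and trains a cascade on KITTI images), a minimal sketch using scikit-learn's KMeans and AdaBoostClassifier on synthetic feature vectors might look like this. All variable names, parameters, and the use of synthetic data are assumptions for illustration.

```python
# Illustrative sketch only: partition samples with unsupervised clustering,
# then train one AdaBoost classifier per cluster. This is NOT the authors'
# code; it merely mirrors the abstract's cluster-then-boost structure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for image feature vectors (the paper uses KITTI).
X, y = make_classification(n_samples=600, n_features=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: unsupervised clustering partitions the training samples into
# subsets with small intra-class and large inter-class variation.
n_clusters = 3
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_tr)

# Global fallback for degenerate clusters that contain only one class.
global_clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Step 2: one AdaBoost classifier per cluster, a stand-in for the
# cascade stages described in the abstract.
models = {}
for c in range(n_clusters):
    mask = km.labels_ == c
    if len(np.unique(y_tr[mask])) < 2:
        models[c] = global_clf  # cannot boost on a single class
        continue
    models[c] = AdaBoostClassifier(n_estimators=50, random_state=0)
    models[c].fit(X_tr[mask], y_tr[mask])

# Inference: route each test sample to its cluster's classifier.
pred = np.array([models[c].predict(x.reshape(1, -1))[0]
                 for c, x in zip(km.predict(X_te), X_te)])
print("accuracy:", (pred == y_te).mean())
```

Routing each sample to a cluster-specific classifier is what lets each booster specialize on a narrower feature distribution, which is the intuition behind the reported speed and false-detection gains.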
ISSN: 2079-9292
DOI: 10.3390/electronics12010044