Deep Architectures and Ensembles for Semantic Video Classification


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2019-12, Vol. 29 (12), pp. 3568-3582
Authors: Ong, Eng-Jon; Husain, Syed Sameed; Bober-Irizar, Mikel; Bober, Miroslaw
Format: Article
Language: English
Description
Abstract: This paper addresses the problem of accurate semantic labeling of short videos. To this end, a multitude of different deep nets is considered, ranging from traditional recurrent neural networks (LSTM, GRU) and temporal-agnostic networks (FV, VLAD, BoW) to fully connected neural networks with mid-stage audio-visual (AV) fusion, among others. Additionally, we propose a residual architecture-based deep neural network (DNN) for video classification, which delivers state-of-the-art classification performance at significantly reduced complexity. Furthermore, we propose four new approaches to diversity-driven multi-net ensembling, one based on a fast correlation measure and three incorporating a DNN-based combiner. We show that significant performance gains can be achieved by ensembling diverse nets, and we investigate the factors contributing to high diversity. Using the extensive YouTube8M dataset, we provide an in-depth evaluation and analysis of the behavior of these architectures. We show that the ensemble achieves state-of-the-art performance, attaining the highest accuracy on the YouTube8M Kaggle test data. The ensemble of classifiers was also evaluated on the HMDB51 and UCF101 datasets, where the resulting method achieves accuracy comparable to state-of-the-art methods using similar input features.
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2018.2881842
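
For illustration only, the sketch below shows one plausible form of the DNN-based combiner mentioned in the abstract: a small network that learns to fuse the per-class scores of several base video classifiers instead of simply averaging them. It is not the authors' published implementation; the class name DNNCombiner, the layer sizes, and the example label count of 4716 (a commonly used YouTube8M vocabulary size) are assumptions chosen for readability.

# Minimal illustrative sketch of a learned (DNN-based) ensemble combiner.
# Assumption: each of N base models emits a score per class for every video.
import torch
import torch.nn as nn

class DNNCombiner(nn.Module):
    def __init__(self, num_models: int, num_classes: int, hidden: int = 512):
        super().__init__()
        # The combiner sees the concatenated score vectors of all base models
        # and learns how to weight and mix them per class.
        self.net = nn.Sequential(
            nn.Linear(num_models * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, model_scores: torch.Tensor) -> torch.Tensor:
        # model_scores: (batch, num_models, num_classes)
        batch = model_scores.shape[0]
        fused = self.net(model_scores.reshape(batch, -1))
        return torch.sigmoid(fused)  # multi-label probabilities

# Example: fuse predictions of 3 hypothetical base nets over 4716 labels.
combiner = DNNCombiner(num_models=3, num_classes=4716)
scores = torch.rand(2, 3, 4716)   # dummy per-model predictions for 2 videos
print(combiner(scores).shape)     # torch.Size([2, 4716])

In practice such a combiner would be trained on held-out predictions of the base models, so that it can exploit their diversity (the complementary errors the paper's correlation-based analysis measures) rather than just their average accuracy.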