Movie trailer genre classification using multimodal pretrained features
Saved in:
Published in: Expert Systems with Applications, 2024-12, Vol. 258, p. 125209, Article 125209
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract: We introduce a novel method for movie genre classification, capitalizing on a diverse set of readily accessible pretrained models. These models extract high-level features related to visual scenery, objects, characters, text, speech, music, and audio effects. To intelligently fuse these pretrained features, we train small classifier models with low time and memory requirements. Employing the transformer model, our approach utilizes all video and audio frames of movie trailers without performing any temporal pooling, efficiently exploiting the correspondence between all elements, as opposed to the fixed and low number of frames typically used by traditional methods. Our approach fuses features originating from different tasks and modalities, with different dimensionalities, different temporal lengths, and complex dependencies, unlike current approaches. Our method outperforms state-of-the-art movie genre classification models in terms of precision, recall, and mean average precision (mAP). To foster future research, we make the pretrained features for the entire MovieNet dataset, along with our genre classification code and the trained models, publicly available.
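The abstract describes projecting heterogeneous pretrained features (different dimensionalities and temporal lengths) into a shared space and fusing all frame-level tokens with a small transformer classifier. The sketch below illustrates that general idea in PyTorch; it is an assumption-laden illustration, not the authors' released code, and the class name, hyperparameters, and [CLS]-token readout are chosen here purely for demonstration.

```python
# Illustrative sketch (not the paper's released code): fuse variable-length
# pretrained feature sequences from several modalities with a small
# transformer encoder, without temporal pooling of the input tokens.
import torch
import torch.nn as nn

class MultimodalGenreClassifier(nn.Module):  # hypothetical name
    def __init__(self, modality_dims, d_model=256, n_heads=4, n_layers=2, n_genres=25):
        super().__init__()
        # One linear projection per modality maps its native feature
        # dimensionality into a shared embedding space.
        self.projections = nn.ModuleList(
            [nn.Linear(dim, d_model) for dim in modality_dims]
        )
        # Learned per-modality embeddings tell the encoder which modality
        # each token came from.
        self.type_embed = nn.Parameter(torch.zeros(len(modality_dims), d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.head = nn.Linear(d_model, n_genres)  # multi-label genre logits

    def forward(self, feature_sequences):
        # feature_sequences: one tensor per modality of shape
        # (batch, time_m, dim_m); temporal lengths may differ across modalities.
        tokens = [
            proj(seq) + self.type_embed[m]
            for m, (proj, seq) in enumerate(zip(self.projections, feature_sequences))
        ]
        x = torch.cat(tokens, dim=1)                    # concatenate along time
        cls = self.cls_token.expand(x.size(0), -1, -1)  # prepend a [CLS] token
        x = self.encoder(torch.cat([cls, x], dim=1))
        return self.head(x[:, 0])                       # logits from [CLS]

# Toy usage: two modalities (e.g. visual and audio features) with different
# dimensionalities and temporal lengths for a batch of 2 trailers.
model = MultimodalGenreClassifier(modality_dims=[512, 128], n_genres=25)  # placeholder label count
visual = torch.randn(2, 300, 512)
audio = torch.randn(2, 1200, 128)
logits = model([visual, audio])  # shape: (2, 25)
```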
Highlights:
• Pretrained features enable shallow neural networks for improved video classification.
• We propose and compare novel architectures including a raw input-based baseline.
• Transformers remove the limitation on the number of input video frames.
• We set a new state-of-the-art for video genre classification on MovieNet.
• We make available the pretrained features, our code, and trained models.
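The third highlight notes that the transformer removes the limit on the number of input frames. A minimal sketch of how trailers of different lengths could be batched for such an encoder, assuming standard PyTorch padding and key-padding masks (again an assumption, not taken from the paper's code):

```python
# Illustrative sketch: batch trailers with different numbers of frame-level
# feature tokens by padding and masking, so the encoder attends over every
# real token and ignores the padding.
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

d_model = 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)

# Two trailers with different temporal lengths.
seqs = [torch.randn(300, d_model), torch.randn(175, d_model)]
lengths = torch.tensor([s.size(0) for s in seqs])

x = pad_sequence(seqs, batch_first=True)                          # (2, 300, d_model)
pad_mask = torch.arange(x.size(1))[None, :] >= lengths[:, None]   # True = padding

out = encoder(x, src_key_padding_mask=pad_mask)                   # (2, 300, d_model)
# A small classification head (e.g. the [CLS]-token readout from the
# previous sketch) would then map these encodings to genre logits.
```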
ISSN: 0957-4174
DOI: 10.1016/j.eswa.2024.125209