A movie box office revenue prediction model based on deep multimodal features

Bibliographic details
Published in: Multimedia Tools and Applications, 2023-09, Vol. 82 (21), p. 31981-32009
Main authors: Madongo, Canaan Tinotenda; Zhongjun, Tang
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Forecasting a film’s opening weekend box office revenue is a difficult and complex task for decision-makers because historical data are scarce and many interacting factors are at play. We propose a novel Deep Multimodal Feature Classifier Neural Network (DMFCNN) that predicts a film’s opening weekend box office revenue from deep multimodal visual features extracted from movie posters together with movie metadata. DMFCNN is an end-to-end predictive model that fuses the predictive power of two different feature classifiers to estimate box office revenue. First, a pre-trained residual convolutional neural network (ResNet50), adapted via transfer learning, extracts visual and object representations from movie posters. The posters’ discriminative, financial-success-related features are then combined with other movie metadata to classify box office revenue. DMFCNN jointly learns revenue-related poster features and object semantics that correlate strongly with box office revenue and aesthetic appearance. Although the main task is classification, we also analyzed regressions between our exogenous variables as a regularizer to reduce the risk of overfitting. We evaluated DMFCNN’s performance against various state-of-the-art models on a dataset of metadata and posters for 49,857 movies released between 2006 and 2019, collected from the Internet Movie Database. The learned poster representations outperformed existing models, achieving 59.30% prediction accuracy, and the proposed fusion strategy outperformed existing fusion schemes in precision, Area Under the Curve (AUC), sensitivity, and specificity, achieving 80%, 81%, 79%, and 78%, respectively.
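The multimodal fusion described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the 2048-dimensional poster vector stands in for the pooled output of a pre-trained ResNet50, and the 12-dimensional metadata vector (budget, genre indicators, etc.) is simulated with random numbers purely to show the early-fusion step of concatenating the two modalities before classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two modalities described in the paper:
# a ResNet50-style pooled poster feature and a small metadata vector.
poster_features = rng.standard_normal(2048)   # simulated visual features
metadata_features = rng.standard_normal(12)   # simulated budget/genre/etc.

def fuse(visual: np.ndarray, meta: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate modality vectors into one joint input
    that a downstream revenue-class classifier would consume."""
    return np.concatenate([visual, meta])

fused = fuse(poster_features, metadata_features)
print(fused.shape)  # (2060,)
```

A real pipeline would pass `fused` into a trained classification head; concatenation is only one fusion scheme, and the paper's DMFCNN instead combines the predictions of two separate feature classifiers.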
ISSN: 1380-7501; 1573-7721
DOI: 10.1007/s11042-023-14456-4