EmoWare: A Context-Aware Framework for Personalized Video Recommendation Using Affective Video Sequences

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 51185-51200
Main authors: Tripathi, Abhishek; Ashwin, T. S.; Guddeti, Ram Mohana Reddy
Format: Article
Language: English
Online access: Full text
Description
Abstract: With the exponential growth of machine intelligence, the world has witnessed promising solutions for personalized content recommendation. The ability of interactive learning agents to make optimal decisions in dynamic environments has been demonstrated and well conceptualized by reinforcement learning (RL). The learning characteristics of deep bidirectional recurrent neural networks (DBRNN), which process sequences in both the forward and backward time directions, have shown exceptional performance as generative models for sequential data in supervised learning tasks. In this paper, we harness the potential of these two techniques and propose EmoWare (emotion-aware), a personalized, emotionally intelligent video recommendation engine that employs a novel context-aware collaborative filtering approach, where the intensity of users' spontaneous non-verbal emotional responses toward recommended videos is captured through interaction and facial expression analysis for decision-making, and the video corpus evolves with real-time feedback streams. To account for users' multidimensional nature in the formulation of optimal policies, RL scenarios are employed using on-policy (SARSA) and off-policy (Q-learning) temporal-difference learning techniques, which are used to train the DBRNN to learn contextual patterns and to generate new video sequences for recommendation. A month-long evaluation with real users shows that EmoWare outperforms state-of-the-art methods and models users' emotional preferences well, with stable convergence.
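The abstract contrasts on-policy (SARSA) and off-policy (Q-learning) temporal-difference learning. As background only, the following minimal Python sketch shows the two tabular update rules; the state/action sizes, reward signal, epsilon-greedy behavior policy, and hyperparameters are illustrative placeholders and not the EmoWare implementation described in the paper.

import numpy as np

# Illustrative tabular TD-learning sketch; sizes and hyperparameters are
# placeholders, not taken from the EmoWare paper.
n_states, n_actions = 10, 4
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # step size, discount factor, exploration rate
Q = np.zeros((n_states, n_actions))     # action-value table

def epsilon_greedy(state):
    # Behavior policy used by both learners in this sketch.
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy TD update: bootstraps on the action actually taken next.
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])

def q_learning_update(s, a, r, s_next):
    # Off-policy TD update: bootstraps on the greedy action in the next state.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

The only difference between the two updates is the bootstrap term: SARSA uses the value of the next action actually chosen by the behavior policy, whereas Q-learning uses the maximum value over next actions.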
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2911235