Integration of Visual Temporal Information and Textual Distribution Information for News Web Video Event Mining


Bibliographic Details
Published in: IEEE Transactions on Human-Machine Systems, 2016-02, Vol. 46 (1), p. 124-135
Authors: Zhang, Chengde; Wu, Xiao; Shyu, Mei-Ling; Peng, Qiang
Format: Article
Language: English
Abstract: News web videos exhibit several characteristics, including a limited number of features, noisy text information, and errors in near-duplicate keyframe (NDK) detection. These characteristics make mining events from news web videos a challenging task. In this paper, a novel framework is proposed to better group associated web videos into events. First, the data preprocessing stage performs feature selection and tag relevance learning. Next, multiple correspondence analysis is applied to explore the correlations between terms and events with the assistance of visual information. Co-occurrence and the visual near-duplicate feature trajectories induced from NDKs are combined to calculate the similarity between NDKs and events. Finally, a probabilistic model is proposed for news web video event mining, in which both visual temporal information and textual distribution information are integrated. Experiments on news web videos from YouTube demonstrate that the integration of visual temporal information and textual distribution information outperforms existing methods for news web video event mining.
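
The abstract states that multiple correspondence analysis is used to explore correlations between terms and events. As a rough illustration of that idea only (not the authors' implementation), the sketch below applies a plain correspondence-analysis decomposition to a small hypothetical term-by-event count matrix with NumPy; all names and data are invented, and the paper's MCA step would operate on its own indicator representation rather than the raw counts assumed here.

# Illustrative sketch, assuming a hypothetical term-by-event count matrix.
# Not the paper's method: a plain correspondence analysis stands in for the
# multiple correspondence analysis step described in the abstract.
import numpy as np

def correspondence_analysis(counts, n_components=2):
    """Return term (row) and event (column) coordinates from a count matrix."""
    P = counts / counts.sum()          # correspondence matrix
    r = P.sum(axis=1)                  # row masses (terms)
    c = P.sum(axis=0)                  # column masses (events)
    # Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    # Principal coordinates, scaled by the singular values
    term_coords = (U[:, :n_components] * sigma[:n_components]) / np.sqrt(r)[:, None]
    event_coords = (Vt.T[:, :n_components] * sigma[:n_components]) / np.sqrt(c)[:, None]
    return term_coords, event_coords

# Hypothetical counts: rows are textual terms, columns are news events;
# entry (i, j) is how often term i appears with videos of event j.
term_event_counts = np.array([
    [30.0,  2.0,  1.0],
    [ 4.0, 25.0,  3.0],
    [ 1.0,  3.0, 20.0],
    [10.0, 12.0, 11.0],   # a term spread evenly across events (weakly discriminative)
])

term_coords, event_coords = correspondence_analysis(term_event_counts)
# Terms whose coordinates lie close to an event's coordinates are strongly
# associated with that event and could be weighted up when grouping videos.
print(np.round(term_coords, 3))
print(np.round(event_coords, 3))
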
ISSN: 2168-2291, 2168-2305
DOI: 10.1109/THMS.2015.2489681