A structure-transfer-driven temporal subspace clustering for video summarization

Bibliographic Details
Published in: Multimedia Tools and Applications, 2019-09, Vol. 78 (17), p. 24123-24145
Authors: Zhang, Jing, Shi, Yue, Jing, Peiguang, Liu, Jing, Su, Yuting
Format: Article
Language: English
Online Access: Full Text
Abstract: With the explosive growth of mobile phones and other camera-equipped devices, more and more video data is being captured and stored. This creates an urgent need for fast browsing and understanding of video content. Automatic video summarization, which extracts succinct summaries to represent the original long videos, is an effective technique for tackling this problem. It involves two subproblems: video segmentation and summary generation. Most previous work focused only on the second subproblem, segmenting videos with a simple strategy such as boundary detection. However, this type of approach yields suboptimal results: it lacks a learning mechanism in the video segmentation stage and separates the whole task into two independent stages. In this paper, we propose a novel structure-transfer-driven temporal subspace clustering segmentation (STSC) method for video summarization. We first learn structure information from source videos and then transfer it to target videos. Using the Determinantal Point Process (DPP) algorithm, we then select an informative subset of shots to form the final video summary. Experimental results on the SumMe and TVSum datasets demonstrate the effectiveness of our proposed method against state-of-the-art methods.
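The abstract names two generic algorithmic building blocks: subspace clustering for temporal segmentation and DPP-based subset selection for summary generation. The Python sketch below is a minimal illustration of those building blocks only, not the authors' STSC method: it substitutes a plain least-squares self-expressive affinity with spectral clustering for the structure-transfer temporal subspace clustering, and uses greedy MAP inference for the DPP step. All function names, the regularization parameter lam, and the linear kernel L = F @ F.T are assumptions made for illustration.

import numpy as np
from sklearn.cluster import SpectralClustering

def self_expressive_affinity(X, lam=0.1):
    # Least-squares self-expressive coding X ~ X @ Z; a generic stand-in
    # for the paper's temporal subspace clustering (the temporal and
    # structure-transfer terms of STSC are omitted in this sketch).
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)  # Z = (X'X + lam*I)^-1 X'X
    np.fill_diagonal(Z, 0.0)                     # forbid self-representation
    return 0.5 * (np.abs(Z) + np.abs(Z.T))       # symmetric affinity matrix

def segment_video(X, n_segments):
    # Cluster the n frame features (columns of X) using the affinity.
    W = self_expressive_affinity(X)
    return SpectralClustering(n_clusters=n_segments,
                              affinity="precomputed").fit_predict(W)

def greedy_dpp_select(L, k):
    # Greedy MAP inference for a DPP with kernel L: repeatedly add the
    # shot that most increases det(L[S, S]), favoring diverse subsets.
    selected, remaining = [], list(range(L.shape[0]))
    for _ in range(k):
        best_i, best_det = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            d = np.linalg.det(L[np.ix_(idx, idx)])
            if d > best_det:
                best_det, best_i = d, i
        selected.append(best_i)
        remaining.remove(best_i)
    return sorted(selected)

# Example with random stand-in features: X is a d x n feature matrix
# (one column per frame); F stacks one pooled feature vector per shot.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 200))
labels = segment_video(X, n_segments=10)
F = np.stack([X[:, labels == c].mean(axis=1) for c in range(10)])
summary = greedy_dpp_select(F @ F.T, k=3)  # indices of summary shots
print(summary)

In practice the DPP kernel would encode shot quality as well as pairwise similarity; the linear kernel F @ F.T above is simply the most basic positive semidefinite choice.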
ISSN: 1380-7501 (print); 1573-7721 (electronic)
DOI: 10.1007/s11042-018-6841-4