A General Framework for Edited Video and Raw Video Summarization



Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2017-08, Vol. 26 (8), p. 3652-3664
Main Authors: Li, Xuelong; Zhao, Bin; Lu, Xiaoqiang
Format: Article
Language: English
Description
Abstract: In this paper, we build a general summarization framework for both edited and raw video summarization. Overall, our contributions are threefold. 1) Four models are designed to capture the properties of video summaries: containing important people and objects (importance), being representative of the video content (representativeness), containing no similar key-shots (diversity), and maintaining a smooth storyline (storyness). These models are applicable to both edited videos and raw videos. 2) A comprehensive score function is built as a weighted combination of the four models. The weights of the four models in the score function, denoted as property-weights, are learned in a supervised manner, separately for edited videos and raw videos. 3) The training set is constructed from both edited videos and raw videos to compensate for the lack of training data. In particular, each training video is equipped with a pair of mixing-coefficients, which reduces the structural inconsistency in the training set caused by roughly mixing the two video types. We test our framework on three data sets covering edited videos, short raw videos, and long raw videos. Experimental results verify the effectiveness of the proposed framework.
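
As a rough sketch of the scoring step described above (the notation here is ours, not taken from this record; the paper's exact formulation may differ), the comprehensive score of a candidate summary y is the weighted combination of the four property models:

    S(y) = w_imp f_imp(y) + w_rep f_rep(y) + w_div f_div(y) + w_story f_story(y)

where f_imp, f_rep, f_div, and f_story score importance, representativeness, diversity, and storyness, respectively, and the property-weights w are learned in a supervised manner, with one set of weights for edited videos and another for raw videos.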
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2017.2695887