Self-supervised Adversarial Video Summarizer with Context Latent Sequence Learning


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2023-08, Vol. 33 (8), p. 1-1
Main Authors: Xu, Yifei; Li, Xiangshun; Pan, Litong; Sang, Weiguang; Wei, Pingping; Zhu, Li
Format: Article
Language: English
Online Access: Order full text
Abstract: Video summarization attempts to create a concise and complete synopsis of a video by identifying its most informative and explanatory parts while removing redundant frames, which facilitates efficient video retrieval, management, and browsing. Most existing video summarization approaches either rely heavily on large amounts of high-quality human-annotated labels or fail to produce semantically meaningful video summaries under the guidance of prior information. Without any supervised labels, we propose the Self-supervised Adversarial Video Summarizer (2SAVS), which exploits context latent sequence learning to generate a satisfying video summary. To implement it, our model formulates a novel pretext task of distinguishing latent sequences from normal frames by training a self-supervised generative adversarial network (GAN) with several well-designed losses. As the core components of 2SAVS, Clip Consistency Representation (CCR) and Hybrid Feature Refinement (HFR) are developed to ensure the semantic consistency and continuity of clips. Furthermore, a novel separation loss is designed to explicitly enlarge the distance between predicted frame scores, effectively enhancing the model's discriminative ability. Notably, latent sequences, additional fine-tuning operations, and generators are not required when inferring the video summary. Experiments on two challenging and diverse datasets demonstrate that our approach outperforms other state-of-the-art unsupervised and weakly supervised methods, and even produces results comparable to several strong supervised methods.
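
The abstract does not state how the separation loss is defined. As an illustration only, the following is a minimal, hypothetical sketch of a margin-based loss that enlarges the gap between high and low predicted frame scores; the function name, the margin value, and the half/half split of frames are all assumptions for this sketch, not the paper's formulation.

    # Hypothetical sketch of a separation loss in the spirit the abstract
    # describes: push predicted frame scores apart so that summary-worthy
    # and redundant frames become easier to tell apart. The margin-based
    # hinge form and the half/half split below are assumptions, not the
    # paper's actual definition.
    import torch

    def separation_loss(scores: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
        # scores: 1-D tensor of T frame-level importance scores in [0, 1].
        sorted_scores, _ = torch.sort(scores)      # ascending order
        t = scores.numel() // 2
        low_mean = sorted_scores[:t].mean()        # mean of the lower-scoring half
        high_mean = sorted_scores[t:].mean()       # mean of the higher-scoring half
        # Hinge: penalize the model while the two group means are closer
        # than `margin`; zero loss once they are separated by at least it.
        return torch.clamp(margin - (high_mean - low_mean), min=0.0)

    # Usage with dummy sigmoid-activated predictions:
    scores = torch.sigmoid(torch.randn(100))
    print(separation_loss(scores).item())

Minimizing this term drives the two group means at least `margin` apart, which is one simple way to make frame scores more discriminative, as the abstract claims for the actual loss.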
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2023.3240464