Prediction and Description of Near-Future Activities in Video

Bibliographic Details
Published in: Computer Vision and Image Understanding, September 2021, Vol. 210, Article 103230
Authors: Mahmud, Tahmida; Billah, Mohammad; Hasan, Mahmudul; Roy-Chowdhury, Amit K.
Format: Article
Language: English
Abstract
Most of the existing works on human activity analysis focus on recognition or early recognition of activity labels from complete or partial observations. Similarly, almost all existing video captioning approaches focus on the observed events in videos. Predicting the labels and the captions of future activities, where no frames of the predicted activities have been observed, is a challenging problem with important applications that require anticipatory response. In this work, we propose a system that can infer the labels and the captions of a sequence of future activities. Our proposed network for label prediction of a future activity sequence has three branches: the first branch takes visual features from the objects present in the scene, the second branch takes observed sequential activity features, and the third branch captures the features of the last observed activity. The predicted labels and the observed scene context are then mapped to meaningful captions using a sequence-to-sequence learning-based method. Experiments on four challenging activity analysis datasets and a video description dataset demonstrate that our label prediction approach achieves performance comparable to the state of the art, and our captioning framework outperforms the state of the art.

• Our work provides a description of near-future activities from current observations.
• Ours is one of the earliest works for captioning near-future events in videos.
• We use the spatio-temporal relationship of activities and objects for label prediction.
• We use a sequence-to-sequence learning-based approach for mapping labels to captions.
• We perform extensive experiments to show the effectiveness of the proposed framework.
ISSN: 1077-3142
eISSN: 1090-235X
DOI: 10.1016/j.cviu.2021.103230
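
The abstract above outlines a three-branch network for future label prediction: one branch for object features in the scene, one for the observed activity sequence, and one for the last observed activity. The following is a minimal PyTorch sketch of such a fusion architecture, not the authors' implementation; the class name, the use of an LSTM for the sequential branch, and all dimensions (obj_dim, act_dim, hidden, num_classes, horizon) are assumptions made for illustration.

```python
# Illustrative sketch of a three-branch future-activity label predictor.
# Layer choices and sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class FutureLabelPredictor(nn.Module):
    def __init__(self, obj_dim=2048, act_dim=2048, hidden=512,
                 num_classes=48, horizon=3):
        super().__init__()
        # Branch 1: visual features of the objects present in the scene.
        self.obj_branch = nn.Sequential(nn.Linear(obj_dim, hidden), nn.ReLU())
        # Branch 2: recurrent encoding of the observed activity sequence.
        self.seq_branch = nn.LSTM(act_dim, hidden, batch_first=True)
        # Branch 3: features of the last observed activity.
        self.last_branch = nn.Sequential(nn.Linear(act_dim, hidden), nn.ReLU())
        # Fused representation -> one label per future time step.
        self.classifier = nn.Linear(3 * hidden, horizon * num_classes)
        self.horizon, self.num_classes = horizon, num_classes

    def forward(self, obj_feats, act_seq, last_act):
        # obj_feats: (B, obj_dim); act_seq: (B, T, act_dim); last_act: (B, act_dim)
        h_obj = self.obj_branch(obj_feats)
        _, (h_seq, _) = self.seq_branch(act_seq)      # final hidden state
        h_last = self.last_branch(last_act)
        fused = torch.cat([h_obj, h_seq[-1], h_last], dim=-1)
        logits = self.classifier(fused)
        return logits.view(-1, self.horizon, self.num_classes)
```

Fusing the branches by concatenation and emitting one label per future step is only one plausible reading of the description; consult the paper for the actual design.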
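
The second component maps the predicted labels and the observed scene context to captions via sequence-to-sequence learning. Below is a minimal encoder-decoder sketch of that general recipe, again with assumed names, vocabulary size, and dimensions rather than the authors' design.

```python
# Illustrative sketch: predicted activity labels + scene context -> caption
# tokens via an LSTM encoder-decoder with teacher forcing. All names and
# sizes are assumptions for the sake of the example.
import torch
import torch.nn as nn

class LabelToCaptionSeq2Seq(nn.Module):
    def __init__(self, num_labels=48, ctx_dim=512, vocab_size=5000,
                 embed=256, hidden=512):
        super().__init__()
        self.label_embed = nn.Embedding(num_labels, embed)
        self.encoder = nn.LSTM(embed, hidden, batch_first=True)
        self.ctx_proj = nn.Linear(ctx_dim, hidden)   # fold in scene context
        self.word_embed = nn.Embedding(vocab_size, embed)
        self.decoder = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, labels, context, captions):
        # labels: (B, L) predicted activity labels; context: (B, ctx_dim);
        # captions: (B, T) ground-truth tokens for teacher forcing.
        _, (h, c) = self.encoder(self.label_embed(labels))
        h = h + self.ctx_proj(context).unsqueeze(0)   # condition on the scene
        dec_out, _ = self.decoder(self.word_embed(captions), (h, c))
        return self.out(dec_out)                      # (B, T, vocab_size)
```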