Convolutional Long Short-Term Memory Networks for Recognizing First Person Interactions
Saved in:
Main Authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | In this paper, we present a novel deep learning based approach for
addressing the problem of interaction recognition from a first-person
perspective. The proposed approach uses a pair of convolutional neural
networks, whose parameters are shared, to extract frame-level features from
successive frames of the video. The frame-level features are then aggregated
using a convolutional long short-term memory. The hidden state of the
convolutional long short-term memory, after all the input video frames are
processed, is used for classification into the respective categories. The two
branches of the convolutional neural network perform feature encoding over a
short time interval, whereas the convolutional long short-term memory encodes
the changes over a longer temporal duration. In our network, the
spatio-temporal structure of the input is preserved until the very final
processing stage. Experimental results show that our method outperforms the
state of the art on recent first-person interaction datasets that involve
complex ego-motion. In particular, on UTKinect-FirstPerson it competes with
methods that use depth images and skeletal joint information along with RGB
images, while it surpasses all previous methods that use only RGB images by
more than 20% in recognition accuracy. |
---|---|
DOI: | 10.48550/arxiv.1709.06495 |
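The abstract's key idea is that a convolutional LSTM aggregates per-frame CNN features over time while preserving their spatial layout. As a rough illustration of that mechanism (not the authors' implementation), the sketch below implements a minimal single-channel ConvLSTM cell with the standard gate equations, where the convolutions replace the matrix multiplications of an ordinary LSTM. The 3x3 kernel size, random weights, and 8x8 "feature map" size are arbitrary assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_same(x, k):
    """Naive single-channel 2D cross-correlation with 'same' zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

class ConvLSTMCell:
    """Minimal single-channel ConvLSTM cell (illustrative only).

    Gates i, f, o and candidate g are each computed from a convolution of
    the input frame plus a convolution of the previous hidden state; a real
    model would use learned multi-channel kernels.
    """
    def __init__(self, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        self.wx = {g: rng.normal(0, 0.1, (ksize, ksize)) for g in "ifog"}
        self.wh = {g: rng.normal(0, 0.1, (ksize, ksize)) for g in "ifog"}

    def step(self, x, h, c):
        pre = {g: conv_same(x, self.wx[g]) + conv_same(h, self.wh[g])
               for g in "ifog"}
        i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
        g = np.tanh(pre["g"])
        c_new = f * c + i * g           # cell state keeps the H x W grid
        h_new = o * np.tanh(c_new)      # hidden state, same spatial shape
        return h_new, c_new

# Run a short "video" of frame-level feature maps through the cell; the
# final hidden state h is what would feed the classifier in the paper.
H, W, T = 8, 8, 5
rng = np.random.default_rng(1)
frames = rng.normal(size=(T, H, W))
cell = ConvLSTMCell()
h = np.zeros((H, W))
c = np.zeros((H, W))
for t in range(T):
    h, c = cell.step(frames[t], h, c)
print(h.shape)  # spatial structure survives to the final state
```

Because every gate is a convolution rather than a fully connected layer, the hidden state retains the input's spatial grid at every step, which is the "spatio-temporal structure preserved until the final stage" property the abstract highlights.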