Graph Edge Convolutional Neural Networks for Skeleton-Based Action Recognition


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2020-08, Vol. 31 (8), pp. 3047-3060
Authors: Zhang, Xikun; Xu, Chang; Tian, Xinmei; Tao, Dacheng
Format: Article
Language: English
Description
Abstract: Body joints, directly obtained from a pose estimation model, have proven effective for action recognition. Existing works focus on analyzing the dynamics of human joints. However, besides joints, humans also exploit the motions of limbs to understand actions. Motivated by this observation, we investigate the dynamics of human limbs for skeleton-based action recognition. Specifically, we represent an edge in the graph of a human skeleton by integrating its spatial neighboring edges (encoding the cooperation between different limbs) and its temporal neighboring edges (enforcing the consistency of movement within an action). Based on this new edge representation, we devise a graph edge convolutional neural network (CNN). Considering the complementarity between graph node convolution and graph edge convolution, we further construct two hybrid networks that introduce different shared intermediate layers to integrate graph node and edge CNNs. Our contributions are twofold: the graph edge convolution, and the hybrid networks that integrate the proposed edge convolution with conventional node convolution. Experimental results on the Kinetics and NTU-RGB+D data sets demonstrate that graph edge convolution effectively captures the characteristics of actions and that our graph edge CNN significantly outperforms existing state-of-the-art skeleton-based action recognition methods.
ISSN: 2162-237X; 2162-2388
DOI: 10.1109/TNNLS.2019.2935173
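The edge representation described in the abstract — each skeleton edge (bone) aggregating its spatially neighboring edges and its temporally neighboring edges — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `edge_conv`, the dictionary-based spatial neighbor encoding, the mean aggregation, and the three weight matrices are all assumptions made here for clarity.

```python
import numpy as np

def edge_conv(E, spatial_nbrs, W_self, W_sp, W_tm):
    """One graph edge convolution step (illustrative sketch, not the paper's code).

    E            : (T, M, C) array -- features of M skeleton edges (bones)
                   over T frames, C channels each.
    spatial_nbrs : dict mapping edge index -> list of spatially adjacent
                   edge indices (edges sharing a joint in the skeleton).
    W_self, W_sp, W_tm : (C, C_out) weight matrices (hypothetical names) for
                   the edge itself, its spatial neighbors, and its temporal
                   neighbors (the same edge in adjacent frames).
    """
    T, M, C = E.shape
    out = np.zeros((T, M, W_self.shape[1]))
    for t in range(T):
        for m in range(M):
            # Spatial aggregation: mean over edges sharing a joint with edge m,
            # encoding the cooperation between different limbs.
            nbrs = spatial_nbrs.get(m, [])
            sp = E[t, nbrs].mean(axis=0) if nbrs else np.zeros(C)
            # Temporal aggregation: the same edge in the previous/next frame,
            # enforcing consistency of movement within an action.
            tm_idx = [u for u in (t - 1, t + 1) if 0 <= u < T]
            tm = E[tm_idx, m].mean(axis=0)
            out[t, m] = E[t, m] @ W_self + sp @ W_sp + tm @ W_tm
    return out

# Toy usage: 3 bones in a chain (bone 1 shares a joint with bones 0 and 2),
# 2 frames, 4 input channels, 2 output channels.
E = np.ones((2, 3, 4))
spatial_nbrs = {0: [1], 1: [0, 2], 2: [1]}
W = np.ones((4, 2))
out = edge_conv(E, spatial_nbrs, W, W, W)  # shape (2, 3, 2)
```

In a full network, stacking such layers (with nonlinearities) lets each edge's receptive field grow over both the skeleton graph and time, which is the intuition behind the edge CNN described above.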