Temporal segment graph convolutional networks for skeleton-based action recognition
| Published in: | Engineering Applications of Artificial Intelligence, 2022-04, Vol. 110, p. 104675, Article 104675 |
|---|---|
| Main authors: | , , , , |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Full text |
| Abstract: | Different actions usually emphasize different parts of the skeleton, and even within a single action, different stages have their own emphases. Previous studies generally construct the human skeleton graph as a predefined structure and therefore lack adaptability to different action modes. In addition, these methods simply pad or truncate the skeleton sequence to a fixed length, which introduces an additional temporal misalignment problem. In this work, we propose a novel temporal segment graph convolutional network (TS-GCN) for skeleton-based action recognition. Our model divides the whole sequence into several subsequences, and GCNs are then applied to each subsequence to capture the dynamic information stage by stage, aligning the motion features in the temporal domain. Moreover, to exploit the intrinsic features of each subsequence, our model introduces a graph-adaptive method that constructs an individual graph, learned and updated from the skeleton data of each subsequence, which increases the generality of graph construction and adapts it to different sequences. Extensive experiments are conducted on two standard datasets, NTU-RGB+D and Kinetics, and the results demonstrate the effectiveness of the proposed method. |
| ISSN: | 0952-1976, 1873-6769 |
| DOI: | 10.1016/j.engappai.2022.104675 |
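The abstract outlines two core ideas: split the skeleton sequence into temporal segments, and apply a graph convolution with a learnable (adaptive) adjacency matrix within each segment. The sketch below illustrates that pipeline under assumed tensor shapes and module names (`SegmentGCN`, `TSGCNSketch`, `num_segments`, etc., are illustrative); it is not the authors' released implementation.

```python
# Minimal sketch of the temporal-segment GCN idea described in the abstract.
# Assumes skeleton input of shape (batch, frames, joints, channels); all
# names and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class SegmentGCN(nn.Module):
    """One graph-convolution block with a learnable (adaptive) adjacency matrix."""

    def __init__(self, in_channels, out_channels, num_joints):
        super().__init__()
        # Adaptive graph: initialized near identity, learned from the data.
        self.adj = nn.Parameter(
            torch.eye(num_joints) + 0.01 * torch.randn(num_joints, num_joints)
        )
        self.proj = nn.Linear(in_channels, out_channels)
        self.relu = nn.ReLU()

    def forward(self, x):                            # x: (B, T, V, C)
        a = torch.softmax(self.adj, dim=-1)          # row-normalized adjacency
        x = torch.einsum('vu,btuc->btvc', a, x)      # aggregate neighboring joints
        return self.relu(self.proj(x))               # per-joint feature projection


class TSGCNSketch(nn.Module):
    """Divide the sequence into segments and apply a per-segment adaptive GCN."""

    def __init__(self, in_channels, hidden, num_joints, num_segments, num_classes):
        super().__init__()
        self.num_segments = num_segments
        self.segment_gcns = nn.ModuleList(
            [SegmentGCN(in_channels, hidden, num_joints) for _ in range(num_segments)]
        )
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):                                        # x: (B, T, V, C)
        segments = torch.chunk(x, self.num_segments, dim=1)      # split along time
        feats = [gcn(seg).mean(dim=(1, 2))                       # pool frames and joints
                 for gcn, seg in zip(self.segment_gcns, segments)]
        return self.classifier(torch.stack(feats, dim=1).mean(dim=1))


if __name__ == "__main__":
    model = TSGCNSketch(in_channels=3, hidden=64, num_joints=25,
                        num_segments=4, num_classes=60)
    clip = torch.randn(2, 64, 25, 3)   # 2 clips, 64 frames, 25 joints, (x, y, z)
    print(model(clip).shape)           # torch.Size([2, 60])
```

Because each segment has its own `SegmentGCN`, each stage of the action learns its own adjacency, which is one plausible reading of the per-subsequence graph-adaptive construction the abstract describes.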