Pairwise Two-Stream ConvNets for Cross-Domain Action Recognition With Small Data

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2022-03, Vol. 33 (3), pp. 1147-1161
Main Authors: Gao, Zan; Guo, Leming; Ren, Tongwei; Liu, An-An; Cheng, Zhi-Yong; Chen, Shengyong
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description
Abstract: In this work, we target cross-domain action recognition (CDAR) in the video domain and propose a novel end-to-end pairwise two-stream ConvNets (PTC) algorithm for real-life conditions, in which only a few labeled samples are available. To cope with the limited-training-sample problem, we employ a pairwise network architecture that can leverage training samples from a source domain and, thus, requires only a few labeled samples per category from the target domain. In particular, a frame self-attention mechanism and an adaptive weight scheme are embedded into the PTC network to adaptively combine the RGB and flow features. This design can effectively learn domain-invariant features for both the source and target domains. In addition, we propose a sphere boundary sample-selecting scheme that selects the training samples at the boundary of a class (in the feature space) to train the PTC model. In this way, a well-enhanced generalization capability can be achieved. To validate the effectiveness of our PTC model, we construct two CDAR data sets (SDAI Action I and SDAI Action II) that include indoor and outdoor environments; all actions and samples in these data sets were carefully collected from public action data sets. To the best of our knowledge, these are the first data sets specifically designed for the CDAR task. Extensive experiments were conducted on these two data sets. The results show that PTC outperforms state-of-the-art video action recognition methods in terms of both accuracy and training efficiency. Notably, when only two labeled training samples per category are used on the SDAI Action I data set, PTC achieves 21.9% and 6.8% improvements in accuracy over the two-stream and temporal segment network (TSN) models, respectively. As an added contribution, the SDAI Action I and SDAI Action II data sets will be released to facilitate future research on the CDAR task.
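As a rough, hypothetical sketch of the fusion idea described in the abstract, the following PyTorch snippet shows how per-frame self-attention scores and an adaptive stream weight could combine RGB and optical-flow features. All module names, tensor shapes, and the exact weighting form are assumptions for illustration only, not the authors' actual PTC implementation.

# Hypothetical sketch: frame self-attention + adaptive RGB/flow fusion,
# loosely following the abstract; not the authors' released PTC code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameSelfAttentionFusion(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # Per-frame attention scorers for each stream (assumed form).
        self.rgb_attn = nn.Linear(feat_dim, 1)
        self.flow_attn = nn.Linear(feat_dim, 1)
        # Learnable scalar that balances the two streams (assumed form).
        self.stream_logit = nn.Parameter(torch.zeros(1))

    def forward(self, rgb_feats, flow_feats):
        # rgb_feats, flow_feats: (batch, num_frames, feat_dim)
        # Softmax over the frame axis gives per-frame attention weights.
        rgb_w = F.softmax(self.rgb_attn(rgb_feats), dim=1)    # (B, T, 1)
        flow_w = F.softmax(self.flow_attn(flow_feats), dim=1)  # (B, T, 1)
        # Attention-weighted temporal pooling within each stream.
        rgb_vec = (rgb_w * rgb_feats).sum(dim=1)    # (B, D)
        flow_vec = (flow_w * flow_feats).sum(dim=1)  # (B, D)
        # Adaptive convex combination of the two streams.
        alpha = torch.sigmoid(self.stream_logit)
        return alpha * rgb_vec + (1.0 - alpha) * flow_vec

# Usage: fuse 8-frame clip features of dimension 512 from both streams.
fusion = FrameSelfAttentionFusion(feat_dim=512)
fused = fusion(torch.randn(4, 8, 512), torch.randn(4, 8, 512))
print(fused.shape)  # torch.Size([4, 512])

A sigmoid-gated learnable scalar is one simple way to realize an "adaptive weight" between streams; the paper's scheme may instead condition this weight on the input features, so treat the above purely as a reading aid.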
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2020.3041018