Learning cricket strokes from spatial and motion visual word sequences
Published in: Multimedia Tools and Applications, 2023, Vol. 82(1), pp. 1237-1259
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: There are a number of challenges involved in recognizing actions in Cricket telecast videos, mainly due to rapid camera motion, camera switching, and variations in background/foreground, scale, position, and viewpoint. Our work deals with the task of trimmed Cricket stroke classification. We used the Cricket Highlights dataset of Gupta and Balan (2020) and manually labeled the 562 trimmed strokes into 5 categories based on the direction of stroke play. These categories are independent of the batsman's pose orientation (or handedness) and are useful in determining the outcome of a Cricket stroke. Models trained on our proposed categories have applications in building player profiles, automated extraction of direction-dependent strokes, and highlights generation. Gated Recurrent Unit (GRU) based models were trained on sequences of spatial and motion visual words, obtained by hard assignment (HA) and soft assignment (SA). An extensive set of experiments was carried out on frame-level dense optical flow grid (OF Grid) features, histogram of oriented optical flow (HOOF) features, and features extracted by pretrained 2D ResNet and 3D ResNet models. Training on visual word sequences gives better results than training on raw feature sequences. Moreover, for OF Grid features, the soft-assignment-based word sequences perform better than the hard-assignment-based ones. We present strong baseline results for this new dataset, with a best accuracy of 81.13% on the test set, using soft assignment on optical flow based grid features. We compare our results with Transformer and 2-stream GRU models trained on HA/SA visual words, and with 3D convolutional models (C3D/I3D) trained on raw frame sequences.
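The abstract describes a pipeline of frame-level features quantized into visual words (by hard or soft assignment against a learned codebook) and a GRU classifier over the resulting sequences, but the record contains no code. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the codebook size, feature dimension, softmax-style soft assignment, and all hyperparameters are assumptions for demonstration only.

```python
# Sketch of the abstract's pipeline: features -> visual words (HA/SA) -> GRU.
# All sizes and the random stand-in data are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
import torch
import torch.nn as nn

NUM_WORDS = 100     # codebook size (assumed)
FEAT_DIM = 128      # per-frame feature dim, e.g. a flattened OF Grid (assumed)
NUM_CLASSES = 5     # stroke-direction categories from the paper

# --- Codebook: cluster a sample of frame-level features (e.g. OF Grid/HOOF) ---
train_feats = np.random.randn(10000, FEAT_DIM).astype(np.float32)  # stand-in data
codebook = KMeans(n_clusters=NUM_WORDS, n_init=10).fit(train_feats)

def hard_assign(frames):
    """HA: each frame becomes the index of its nearest codebook centroid."""
    return codebook.predict(frames)                       # (T,) int word ids

def soft_assign(frames, beta=1.0):
    """SA: each frame becomes a distribution over words (softmax of -distance)."""
    d = np.linalg.norm(frames[:, None, :] - codebook.cluster_centers_[None], axis=2)
    w = np.exp(-beta * d)
    return w / w.sum(axis=1, keepdims=True)               # (T, NUM_WORDS)

# --- GRU classifier over visual word sequences ---
class StrokeGRU(nn.Module):
    def __init__(self, soft=True, hidden=256):
        super().__init__()
        # SA sequences are already dense vectors; HA id sequences get embedded.
        self.soft = soft
        self.embed = None if soft else nn.Embedding(NUM_WORDS, NUM_WORDS)
        self.gru = nn.GRU(NUM_WORDS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_CLASSES)

    def forward(self, x):                  # x: (B, T, NUM_WORDS) or (B, T) ids
        if not self.soft:
            x = self.embed(x)
        _, h = self.gru(x)                 # final hidden state summarizes the stroke
        return self.head(h[-1])            # (B, NUM_CLASSES) logits

# Example: classify one trimmed stroke of 40 frames using soft assignment.
stroke = np.random.randn(40, FEAT_DIM).astype(np.float32)
seq = torch.from_numpy(soft_assign(stroke)).float().unsqueeze(0)
logits = StrokeGRU(soft=True)(seq)
print(logits.shape)  # torch.Size([1, 5])
```

For the HA variant, the stroke would instead be encoded as `torch.from_numpy(hard_assign(stroke)).long().unsqueeze(0)` and fed to `StrokeGRU(soft=False)`; the embedding layer then plays the role that the soft word distribution plays above.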
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-022-13307-y