FSD-10: A Dataset for Competitive Sports Content Analysis
Saved in:

Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Action recognition is an important and challenging problem in video analysis. Although the past decade has witnessed progress in action recognition with the development of deep learning, such progress has been slow in competitive sports content analysis. To promote research on action recognition from competitive sports video clips, we introduce a Figure Skating Dataset (FSD-10) for fine-grained sports content analysis. To this end, we collect 1484 clips from the worldwide figure skating championships in 2017-2018, which consist of 10 different actions in men/ladies programs. Each clip is recorded at 30 frames per second with a resolution of 1080 $\times$ 720. These clips are then annotated by experts with action type, grade of execution, skater information, etc. To build a baseline for action recognition in figure skating, we evaluate state-of-the-art action recognition methods on FSD-10. Motivated by the idea that domain knowledge is of great concern in the sports field, we propose a keyframe-based temporal segment network (KTSN) for classification and achieve remarkable performance. Experimental results demonstrate that FSD-10 is an ideal dataset for benchmarking action recognition algorithms, as it requires accurately extracting action motions rather than action poses. We hope FSD-10, which is designed to have a large collection of fine-grained actions, can serve as a new challenge for developing more robust and advanced action recognition models.
DOI: 10.48550/arxiv.2002.03312
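As a rough illustration of how the per-clip annotations described in the abstract (action type, grade of execution, skater information, frame rate, resolution) might be represented in code, the following minimal Python sketch defines a hypothetical clip record. The class name, field names, and example values are assumptions for illustration only, not the dataset's actual schema or API.

```python
from dataclasses import dataclass

# Hypothetical per-clip annotation record mirroring the fields named in the
# abstract (action type, grade of execution, skater info). This is NOT the
# official FSD-10 schema; it only sketches one plausible representation.
@dataclass
class FSD10Clip:
    clip_id: str                      # identifier of the video clip
    action_type: str                  # one of the 10 annotated figure skating actions
    grade_of_execution: float         # expert-assigned grade of execution (GOE)
    skater: str                       # skater information
    fps: int = 30                     # abstract: clips are at 30 frames per second
    resolution: tuple = (1080, 720)   # abstract: resolution 1080 x 720

# Example usage with made-up values:
clip = FSD10Clip(
    clip_id="clip_0001",
    action_type="triple_axel",
    grade_of_execution=1.0,
    skater="example skater",
)
print(clip.action_type, clip.fps, clip.resolution)
```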