Detection of human actions from a single example

Bibliographic details
Main authors: Hae Jong Seo; Peyman Milanfar
Format: Conference paper
Language: English
Description
Summary: We present an algorithm for detecting human actions based upon a single given video example of such actions. The proposed method is unsupervised and does not require learning, segmentation, or motion estimation. The novel features employed in our method are based on space-time locally adaptive regression kernels. Our method is based on the dense computation of so-called space-time local regression kernels (i.e. local descriptors) from a query video, which measure the likeness of a voxel to its spatio-temporal surroundings. Salient features are then extracted from these descriptors using principal components analysis (PCA). These are efficiently compared against analogous features from the target video using a matrix generalization of the cosine similarity measure. The algorithm yields a scalar resemblance volume, with each voxel indicating the likelihood of similarity between the query video and all cubes in the target video. By employing non-parametric significance tests and non-maxima suppression, we accurately detect the presence and location of actions similar to the given query video. High performance is demonstrated on a challenging set of action data, indicating successful detection of multiple complex actions even in the presence of fast motions.
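
The abstract outlines a concrete matching pipeline: dense space-time descriptors, PCA-based salient features, a matrix generalization of cosine similarity, and a resemblance volume. The NumPy sketch below is only a rough illustration of that matching stage, not the authors' implementation: it assumes the space-time local regression kernel descriptors have already been computed densely for both videos, and the function names, array shapes, and the choice of four principal components are hypothetical.

```python
# Illustrative sketch of the matching stage described in the abstract
# (not the authors' code). Assumes dense space-time local regression
# kernel descriptors are already available; names and shapes are
# hypothetical.
import numpy as np


def learn_pca_basis(query_descriptors, n_components=4):
    """Learn a salient-feature projection from the query's dense descriptors (N x d)."""
    mean = query_descriptors.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(query_descriptors - mean, full_matrices=False)
    return mean, vt[:n_components]  # (1, d) mean and (k, d) basis


def project(descriptors, mean, basis):
    """Map descriptors (N x d) into the k-dimensional salient-feature space."""
    return (descriptors - mean) @ basis.T


def matrix_cosine_similarity(a, b):
    """Frobenius inner product of two equally shaped matrices after unit normalization."""
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def resemblance_volume(query_feats, target_feats):
    """Slide the query feature cube over the target feature volume.

    query_feats : (tq, hq, wq, k) salient features of the query video
    target_feats: (T, H, W, k)    salient features of the target video
    Returns one matrix cosine similarity per placement of the query cube.
    """
    tq, hq, wq, k = query_feats.shape
    T, H, W, _ = target_feats.shape
    fq = query_feats.reshape(-1, k)
    rv = np.zeros((T - tq + 1, H - hq + 1, W - wq + 1))
    for t in range(rv.shape[0]):
        for y in range(rv.shape[1]):
            for x in range(rv.shape[2]):
                ft = target_feats[t:t + tq, y:y + hq, x:x + wq].reshape(-1, k)
                rv[t, y, x] = matrix_cosine_similarity(fq, ft)
    return rv
```

Voxels of the resulting resemblance volume that exceed a significance threshold and survive non-maxima suppression would then mark detected instances of the query action, in the spirit of the procedure the abstract describes.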
ISSN: 1550-5499, 2380-7504
DOI: 10.1109/ICCV.2009.5459433