Evidence Feed Forward Hidden Markov Models for Visual Human Action Classification (Preprint)
Saved in:
Main authors: | , |
---|---|
Format: | Report |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Predicting people's actions from visual data is a fairly easy job for people, a harder job for animals, and virtually impossible for machines, although many classification systems can predict a limited number of actions. The difficulty stems from the many different movements people make while performing an action. Take, for example, a visit to the local store. If we were to sit and watch people walk up and down the aisles, we would see a unique style of movement from each person. There may be close similarities, but the actual positions of the body parts over time would all be unique. People tend to merge these together and look at the overall movement, focusing on one thing at a time, making an assumption, and validating that assumption. Animals do the same, but with less a priori knowledge, or less understanding, of the movements. Algorithms written to classify human movement often look at the specific details of each movement, and it is much harder to generalize an algorithm while testing it on a procedural machine.
Submitted for publication in the Journal of Artificial Intelligence. |
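The title refers to hidden Markov models for visual action classification, but this record does not describe the authors' evidence feed forward variant. Purely as a hedged illustration of the conventional technique such work builds on, the sketch below scores a quantised movement sequence against one hand-specified discrete HMM per action class and labels it with the highest-likelihood model; every name and number in it is an assumption made for illustration, not material from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of an observation sequence under a discrete HMM (forward algorithm, log-space)."""
    n_states = len(start_p)
    # alpha[i] = log P(o_1..o_t, state_t = i)
    alpha = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for o in obs[1:]:
        alpha = np.array([
            np.logaddexp.reduce(alpha + np.log(trans_p[:, j])) + np.log(emit_p[j, o])
            for j in range(n_states)
        ])
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """Label a sequence with the action whose HMM assigns it the highest likelihood."""
    scores = {label: forward_log_likelihood(obs, *params) for label, params in models.items()}
    return max(scores, key=scores.get)

# Toy 2-state HMMs over 3 quantised motion symbols; all parameters are invented for this sketch.
models = {
    "walking": (np.array([0.6, 0.4]),                            # initial state probabilities
                np.array([[0.7, 0.3], [0.4, 0.6]]),              # state transition matrix
                np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])),   # emission probabilities
    "waving":  (np.array([0.5, 0.5]),
                np.array([[0.6, 0.4], [0.5, 0.5]]),
                np.array([[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]])),
}

sequence = [0, 1, 0, 0, 2, 1]         # e.g. vector-quantised pose features over time
print(classify(sequence, models))     # prints whichever toy model scores the sequence higher
```

In practice the per-class HMM parameters would be learned from labelled movement sequences rather than written by hand, but the classification step, picking the model that best explains the observed sequence, stays the same.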