Fuzzy dragon deep belief neural network for activity recognition using hierarchical skeleton features
Published in: Evolutionary Intelligence, 2022, Vol. 15 (2), pp. 907–924
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: In computer vision, human activity recognition is an active research area across contexts such as human–computer interaction, healthcare, military applications, and security surveillance. Activity recognition identifies the goals and actions of one or more people from a sequence of observations, based on the actions themselves and the environmental conditions. Still, a number of challenges and issues remain, which motivate the development of new activity recognition methods that improve accuracy under more realistic conditions. This paper proposes an error-based fuzzy dragon deep belief network (error-based fuzzy DDBN), an integration of fuzzy logic with a DDBN classifier, to recognize human activity in complex and diverse scenarios. Keyframes are selected from the frames of a given video based on the Bhattacharyya coefficient. Features are then extracted from these keyframes using the scale-invariant feature transform (SIFT), a color histogram of the spatio-temporal interest dominant points, and a hierarchical skeleton. Finally, the features are fed to the classifier, where classification is performed by the proposed error-based fuzzy DDBN to recognize the activity. Experiments on two datasets, KTH and Weizmann, analyze the performance of the proposed classifier. The results reveal that it performs activity recognition well, achieving a maximum accuracy of 1, a sensitivity of 0.99, and a specificity of 0.991.
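The keyframe-selection step the abstract describes can be illustrated with a short sketch. The Bhattacharyya coefficient between two normalized histograms p and q is BC(p, q) = Σᵢ √(pᵢ·qᵢ), which is 1 for identical distributions and 0 for disjoint ones; a frame whose histogram diverges enough from the last keyframe's histogram is kept. This is a minimal sketch under that assumption, not the paper's implementation; the function names and threshold are illustrative.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient of two normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def select_keyframes(histograms, threshold=0.9):
    """Keep frame i as a keyframe when its histogram is sufficiently
    dissimilar (coefficient below threshold) to the last keyframe's."""
    keyframes = [0]  # the first frame always starts a shot
    for i in range(1, len(histograms)):
        if bhattacharyya_coefficient(histograms[keyframes[-1]], histograms[i]) < threshold:
            keyframes.append(i)
    return keyframes
```

For example, three frames whose color histograms are [0.5, 0.5, 0, 0], [0.5, 0.5, 0, 0], and [0, 0, 0.5, 0.5] would yield keyframes at indices 0 and 2, since the second frame is identical to the first (coefficient 1.0) while the third shares no mass with it (coefficient 0.0).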
ISSN: 1864-5909; 1864-5917
DOI: 10.1007/s12065-019-00245-2