Gender recognition using motion data from multiple smart devices


Detailed Description

Saved in:
Bibliographic Details
Published in: Expert systems with applications 2020-06, Vol.147, p.113195, Article 113195
Main Authors: Dong, Jianmin, Du, Youtian, Cai, Zhongmin
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract:
• A thorough study of gender recognition using motion data from multiple devices.
• A methodological framework for analyzing motion data from multiple devices.
• Motion features are extracted from the time, frequency, and wavelet domains.
• Using motion data from multiple devices significantly improves accuracy.
• A motion dataset of 56 subjects is established for gender recognition.

Using multiple smart devices, such as a smartphone and a smartwatch, simultaneously is becoming a popular lifestyle with the growing adoption of wearables. This multi-sensor setting provides new opportunities for enhanced user trait analysis via data fusion. In this study, we explore the task of gender recognition using motion data collected from multiple smart devices. Specifically, motion data are collected from a smartphone and a smart band simultaneously. Motion features are extracted from the collected data in three domains: time, frequency, and wavelet. We present a feature selection method that accounts for redundancies between motion features. Gender recognition is then performed using four supervised learning methods. Experimental results demonstrate that using motion data collected from multiple smart devices can significantly improve the accuracy of gender recognition. Evaluation on a dataset of 56 subjects shows that our method reaches an accuracy of 98.7%, compared with 93.7% and 88.2% when using the smartphone and the smart band individually.
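The abstract describes extracting motion features in the time, frequency, and wavelet domains from device motion signals. The following is a minimal illustrative sketch of such per-axis features, not the authors' implementation: it computes basic time-domain statistics, the dominant frequency via a naive DFT, and the energies of a one-level Haar wavelet decomposition. The signal here is a toy 2 Hz sine standing in for one accelerometer axis; the feature choices are assumptions for illustration.

```python
import math
import statistics

def time_features(x):
    # Basic time-domain statistics of one motion-signal axis.
    return {
        "mean": statistics.mean(x),
        "std": statistics.pstdev(x),
        "min": min(x),
        "max": max(x),
    }

def dominant_frequency(x, fs):
    # Naive DFT magnitude scan; returns the dominant frequency in Hz
    # (DC bin excluded). Fine for short illustrative windows.
    n = len(x)
    mags = []
    for k in range(1, n // 2):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(-x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append((math.hypot(re, im), k))
    _, k_best = max(mags)
    return k_best * fs / n

def haar_energies(x):
    # One-level Haar wavelet decomposition: energies of the
    # approximation (low-pass) and detail (high-pass) coefficients.
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return sum(a * a for a in approx), sum(d * d for d in detail)

# Toy signal: 2 Hz sine sampled at 32 Hz for 2 seconds.
fs = 32
signal = [math.sin(2 * math.pi * 2 * i / fs) for i in range(64)]

features = dict(time_features(signal))
features["dom_freq_hz"] = dominant_frequency(signal, fs)
features["haar_approx_e"], features["haar_detail_e"] = haar_energies(signal)
print(features["dom_freq_hz"])  # -> 2.0
```

Such per-window feature vectors, concatenated across devices and axes, would then feed the feature selection and supervised classification stages the abstract outlines.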
ISSN:0957-4174
1873-6793
DOI:10.1016/j.eswa.2020.113195