Estimation of habit-related information from male voice data using machine learning-based methods

Bibliographic details
Published in: Artificial Life and Robotics, 2023-08, Vol. 28 (3), pp. 520-529
Main authors: Yokoo, Takaya; Hatano, Ryo; Nishiyama, Hiroyuki
Format: Article
Language: English
Description
Abstract: According to a survey on the causes of death among Japanese people, lifestyle-related diseases (such as malignant neoplasms, cardiovascular diseases, and pneumonia) account for 55.8% of all deaths. Three habits, namely drinking, smoking, and sleeping, are considered the most important factors associated with lifestyle-related diseases, but it is difficult to measure these habits autonomously and regularly. Here, we propose a machine learning-based approach for detecting these lifestyle habits using voice data. We used classifiers and probabilistic linear discriminant analysis based on acoustic features, such as mel-frequency cepstral coefficients (MFCCs) and jitter, extracted from a speech dataset we developed, as well as X-vectors from a pre-trained ECAPA-TDNN model. For training the models, we used several classifiers implemented in MATLAB 2021b, such as support vector machines, K-nearest neighbors (KNN), and ensemble methods, with some feature-projection options. Our results show that a cubic KNN method using acoustic features performs well on sleep-habit classification, while X-vector-based models perform well on smoking- and drinking-habit classification. These results suggest that X-vectors may help estimate factors that directly affect the users' vocal cords and vocal tracts (e.g., due to smoking and drinking), while acoustic features may help classify chronotypes, which might be informative with respect to the individuals' vocal cord and vocal tract ultrastructure.
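
To make the approach concrete, here is a minimal MATLAB sketch (not the authors' published code) of the acoustic-feature side of the pipeline: per-frame MFCCs are averaged into one feature vector per recording and fed to a cubic KNN classifier, i.e., KNN with a Minkowski distance of exponent 3, which is what MATLAB's "Cubic KNN" preset typically denotes. It assumes the Audio Toolbox and the Statistics and Machine Learning Toolbox; the metadata file voice_labels.csv and its filename and sleep_habit columns are hypothetical placeholders, and jitter and X-vector features are omitted.

% Hypothetical metadata file: one row per recording, with columns
% "filename" (path to a WAV file) and "sleep_habit" (class label).
meta = readtable("voice_labels.csv", 'TextType', 'string');

feats = cell(height(meta), 1);
for i = 1:height(meta)
    [audioIn, fs] = audioread(meta.filename(i));
    audioIn  = audioIn(:, 1);          % use the first channel only
    coeffs   = mfcc(audioIn, fs);      % frames-by-coefficients MFCC matrix
    feats{i} = mean(coeffs, 1);        % one summary vector per recording
end
X = vertcat(feats{:});                 % observations-by-features matrix
Y = categorical(meta.sleep_habit);     % habit labels

% Cubic KNN: 10 neighbors, Minkowski distance with exponent 3.
mdl = fitcknn(X, Y, ...
    'NumNeighbors', 10, ...
    'Distance', 'minkowski', ...
    'Exponent', 3);

cvmdl = crossval(mdl, 'KFold', 5);     % 5-fold cross-validation
fprintf('Cross-validated error: %.3f\n', kfoldLoss(cvmdl));

Averaging frame-level MFCCs is only one simple way to obtain a fixed-length vector per recording; the paper may aggregate features differently.
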
ISSN: 1433-5298, 1614-7456
DOI: 10.1007/s10015-023-00870-2