A novel study to classify breath inhalation and breath exhalation using audio signals from heart and trachea
Saved in:
Published in: Biomedical Signal Processing and Control, 2023-02, Vol. 80, p. 104220, Article 104220
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
• This is the first breath inhalation and breath exhalation assessment study with eight different conditions using ML models.
• Audio signals from volunteers' hearts (Method 1) are compared with audio signals from volunteers' tracheas (Method 2).
• An interactive tool is presented that provides quick information for the detection of respiration-related health problems.
• ML models, used both individually and with majority voting, are compared for classifying audio signals.

Respiration is a vital process for all living organisms. In the diagnosis and detection of many health problems, a patient's respiration rate and breath inhalation and exhalation conditions are primarily considered by doctors, clinicians, and healthcare staff. In this study, an interactive application is designed to collect audio signals, present visual information about them, create a novel 21253×20 audio-signal dataset for the detection of breath inhalation and breath exhalation performed through the nose and mouth, and classify the audio signals as breath inhalation or breath exhalation using machine learning (ML) models. Audio signals are recorded from volunteers' hearts (Method 1) and tracheas (Method 2). ML models, namely decision tree (DT), Naïve Bayes (NB), support vector machine (SVM), k-nearest neighbor (KNN), gradient boosted trees (GBT), random forest (RF), and an artificial neural network (ANN), are applied to the created dataset to classify the audio signals recorded from the nose and mouth into the two conditions. The highest sensitivity, specificity, accuracy, and Matthews correlation coefficient (MCC) for the classification of breath inhalation and breath exhalation are 91.82%, 87.20%, 89.51%, and 0.79, respectively, obtained with Method 2 using majority voting of KNN, RF, and SVM. This paper focuses on the use of audio signals and ML models as a novel approach to classifying respiratory conditions into breath inhalation and breath exhalation via an interactive application. It shows that audio signals recorded with Method 2 are more informative for this task than those recorded with Method 1.
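The best-performing configuration in the abstract is majority ("hard") voting over KNN, RF, and SVM. A minimal sketch of that ensemble using scikit-learn is shown below; the feature matrix here is synthetic stand-in data (the paper's actual 21253×20 audio-feature dataset is not available), and all hyperparameters are assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the paper's audio-feature dataset:
# 400 samples x 20 features, labels 0 = inhalation, 1 = exhalation (toy rule).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Hard voting: each fitted model casts one class vote per sample,
# and the majority class becomes the ensemble prediction.
ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(kernel="rbf", random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print("test accuracy:", (pred == y_te).mean())
```

With `voting="hard"` the three models need only agree on a class label, so SVC can be used without probability calibration; `voting="soft"` would instead average predicted class probabilities.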
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2022.104220