Estimating Respiratory Rate From Breath Audio Obtained Through Wearable Microphones
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary:
Respiratory rate (RR) is a clinical metric used to assess overall health and
physical fitness. An individual's RR can change from their baseline due to
chronic illness symptoms (e.g., asthma, congestive heart failure), acute
illness (e.g., breathlessness due to infection), and over the course of the day
due to physical exhaustion during heightened exertion. Remote estimation of RR
can offer a cost-effective method to track disease progression and
cardio-respiratory fitness over time. This work investigates a model-driven
approach to estimate RR from short audio segments obtained after physical
exertion in healthy adults. Data was collected from 21 individuals using
microphone-enabled, near-field headphones before, during, and after strenuous
exercise. RR was manually annotated by counting perceived inhalations and
exhalations. A multi-task Long Short-Term Memory (LSTM) network with
convolutional layers was implemented to process mel-filterbank energies,
estimate RR in varying background noise conditions, and predict heavy
breathing, indicated by an RR of more than 25 breaths per minute. The
multi-task model performs both classification and regression tasks and
leverages a mixture of loss functions. It was observed that RR can be estimated
with a concordance correlation coefficient (CCC) of 0.76 and a mean squared
error (MSE) of 0.2, demonstrating that audio can be a viable signal for
approximating RR.
DOI: 10.48550/arxiv.2107.14028
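
The abstract describes the model only at a high level, so the following is a minimal sketch of what such a multi-task CNN-LSTM over mel-filterbank energies could look like, assuming PyTorch; the layer sizes, the `MultiTaskRRNet` name, and the `alpha` loss weighting are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only: the abstract specifies a multi-task LSTM with
# convolutional layers over mel-filterbank energies, but the framework
# (PyTorch here), layer sizes, and loss weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskRRNet(nn.Module):
    def __init__(self, n_mels=40, conv_channels=32, lstm_hidden=64):
        super().__init__()
        # Convolutional front end over the (time, mel) plane.
        self.conv = nn.Sequential(
            nn.Conv2d(1, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),  # downsample along the mel axis only
        )
        self.lstm = nn.LSTM(
            input_size=conv_channels * (n_mels // 2),
            hidden_size=lstm_hidden,
            batch_first=True,
        )
        # Two heads: RR regression and heavy-breathing classification.
        self.rr_head = nn.Linear(lstm_hidden, 1)
        self.heavy_head = nn.Linear(lstm_hidden, 1)

    def forward(self, mel):  # mel: (batch, time, n_mels)
        x = self.conv(mel.unsqueeze(1))           # (B, C, T, n_mels // 2)
        b, c, t, m = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * m)
        _, (h, _) = self.lstm(x)                  # final hidden state: (1, B, H)
        h = h.squeeze(0)
        return self.rr_head(h).squeeze(-1), self.heavy_head(h).squeeze(-1)

def mixed_loss(rr_pred, heavy_logit, rr_true, alpha=0.5):
    """Mixture of losses: MSE on the RR regression plus BCE on the
    heavy-breathing label, defined in the abstract as RR > 25 bpm."""
    heavy_true = (rr_true > 25.0).float()
    mse = F.mse_loss(rr_pred, rr_true)
    bce = F.binary_cross_entropy_with_logits(heavy_logit, heavy_true)
    return alpha * mse + (1 - alpha) * bce  # alpha is an assumed weighting
```

Note that the classification target needs no separate annotation: it is derived from the annotated RR via the 25 breaths-per-minute threshold given in the abstract, so the heavy-breathing head acts as an auxiliary task sharing the encoder with the regression head.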
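
The abstract reports agreement as a concordance correlation coefficient (CCC) of 0.76. For reference, a small sketch of the standard CCC formula, again assuming PyTorch tensors:

```python
def ccc(pred, true):
    """Concordance correlation coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Unlike Pearson correlation, it also penalizes bias and scale mismatch."""
    mp, mt = pred.mean(), true.mean()
    vp = pred.var(unbiased=False)
    vt = true.var(unbiased=False)
    cov = ((pred - mp) * (true - mt)).mean()
    return 2 * cov / (vp + vt + (mp - mt) ** 2)
```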