Scaling Wearable Foundation Models
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Wearable sensors have become ubiquitous thanks to a variety of health
tracking features. The resulting continuous and longitudinal measurements from
everyday life generate large volumes of data; however, making sense of these
observations for scientific and actionable insights is non-trivial. Inspired by
the empirical success of generative modeling, where large neural networks learn
powerful representations from vast amounts of text, image, video, or audio
data, we investigate the scaling properties of sensor foundation models across
compute, data, and model size. Using a dataset of up to 40 million hours of
in-situ heart rate, heart rate variability, electrodermal activity,
accelerometer, skin temperature, and altimeter per-minute data from over
165,000 people, we create LSM, a multimodal foundation model built on the
largest wearable-signals dataset with the most extensive range of sensor
modalities to date. Our results establish the scaling laws of LSM for tasks
such as imputation, interpolation and extrapolation, both across time and
sensor modalities. Moreover, we highlight how LSM enables sample-efficient
downstream learning for tasks like exercise and activity recognition.
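
To make the generative tasks named in the abstract concrete, the sketch below poses imputation, interpolation, and extrapolation as masked reconstruction over a per-minute, multi-sensor window. This is not code from the paper; the window length, modality ordering, and mask ratios are illustrative assumptions.

```python
# Minimal sketch (illustrative only): the three reconstruction tasks as masks
# over a (modality x minute) window of wearable sensor data.
import numpy as np

MODALITIES = ["heart_rate", "hrv", "eda", "accelerometer", "skin_temp", "altimeter"]  # assumed ordering
WINDOW_MINUTES = 60  # assumed window length

rng = np.random.default_rng(0)
window = rng.normal(size=(len(MODALITIES), WINDOW_MINUTES))  # stand-in for real per-minute data

def imputation_mask(shape, ratio=0.3, rng=rng):
    """Hide randomly scattered (modality, minute) cells."""
    return rng.random(shape) < ratio

def interpolation_mask(shape, gap_minutes=10, rng=rng):
    """Hide a contiguous gap inside the window, across all modalities."""
    mask = np.zeros(shape, dtype=bool)
    start = rng.integers(0, shape[1] - gap_minutes)
    mask[:, start:start + gap_minutes] = True
    return mask

def extrapolation_mask(shape, horizon_minutes=15):
    """Hide the final minutes of the window (forecasting)."""
    mask = np.zeros(shape, dtype=bool)
    mask[:, -horizon_minutes:] = True
    return mask

# A model would be trained to reconstruct the hidden cells from the visible ones;
# masking whole modality rows instead of time columns gives cross-modality variants.
for make_mask in (imputation_mask, interpolation_mask, extrapolation_mask):
    mask = make_mask(window.shape)
    visible = np.where(mask, np.nan, window)  # NaN marks the positions the model must fill in
    print(make_mask.__name__, "masked cells:", int(mask.sum()))
```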
DOI: 10.48550/arxiv.2410.13638
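
Scaling-law results like those referenced in the abstract are commonly summarized by fitting a power law of loss against compute, data, or model size. The sketch below fits such a curve to synthetic placeholder points; the functional form L(x) = a * x^(-b) + c and every number here are assumptions for illustration, not results from the paper.

```python
# Minimal sketch (illustrative only): fitting a power-law scaling curve
# to loss measured at increasing training compute.
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b, c):
    # Loss decays as a power of the scaled quantity, down to an irreducible floor c.
    return a * np.power(x, -b) + c

# Synthetic (compute, loss) pairs in arbitrary units, for illustration only.
compute = 2.0 ** np.arange(9)  # 1, 2, 4, ..., 256 (relative compute)
loss = power_law(compute, a=1.8, b=0.25, c=0.4)
loss = loss + np.random.default_rng(0).normal(0.0, 0.01, size=compute.size)

params, _ = curve_fit(power_law, compute, loss, p0=(1.0, 0.3, 0.1))
a, b, c = params
print(f"fitted exponent b = {b:.3f}, irreducible loss c = {c:.3f}")
```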