TS-MoCo: Time-Series Momentum Contrast for Self-Supervised Physiological Representation Learning
Format: Article
Language: English

Abstract: Limited availability of labeled physiological data often prohibits the use of
powerful supervised deep learning models in the biomedical machine intelligence
domain. We address this problem and propose a novel encoding framework that
relies on self-supervised learning with momentum contrast to learn
representations from multivariate time-series from various physiological domains
without needing labels. Our model uses a transformer architecture that can be
easily adapted to classification problems by optimizing a linear output
classification layer. We experimentally evaluate our framework using two
publicly available physiological datasets from different domains, i.e., human
activity recognition from embedded inertial sensors and emotion recognition
from electroencephalography. We show that our self-supervised learning approach
can indeed learn discriminative features which can be exploited in downstream
classification tasks. Our work enables the development of domain-agnostic
intelligent systems that can effectively analyze multivariate time-series data
from physiological domains.

DOI: 10.48550/arxiv.2306.06522
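
The abstract describes a momentum-contrast scheme for pretraining a transformer encoder on unlabeled multivariate time-series, followed by a linear output layer for downstream classification. The sketch below illustrates that general recipe in PyTorch; the augmentation, temporal pooling, EMA momentum, temperature, and in-batch InfoNCE loss are illustrative assumptions, not the exact TS-MoCo implementation.

```python
# Minimal MoCo-style pretraining sketch for multivariate time-series.
# All hyperparameters and design details below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TSEncoder(nn.Module):
    """Transformer encoder mapping (batch, time, channels) to an embedding."""
    def __init__(self, in_channels: int, d_model: int = 64, n_heads: int = 4,
                 n_layers: int = 2, emb_dim: int = 32):
        super().__init__()
        self.proj = nn.Linear(in_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.proj(x))       # (batch, time, d_model)
        return self.head(h.mean(dim=1))      # temporal average pooling

def augment(x: torch.Tensor) -> torch.Tensor:
    """Toy augmentation (channel scaling + jitter); assumed, not from the paper."""
    scale = 1 + 0.1 * torch.randn(x.size(0), 1, x.size(2))
    return x * scale + 0.05 * torch.randn_like(x)

@torch.no_grad()
def momentum_update(online: nn.Module, target: nn.Module, m: float = 0.99) -> None:
    """EMA update of the momentum (key) encoder from the online (query) encoder."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(m).add_(p_o, alpha=1 - m)

def info_nce(q: torch.Tensor, k: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """In-batch InfoNCE: matching (q_i, k_i) pairs are positives, the rest negatives."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / tau
    labels = torch.arange(q.size(0))
    return F.cross_entropy(logits, labels)

# Pretraining step on unlabeled windows (e.g. 6 inertial channels, 128 time steps).
encoder_q = TSEncoder(in_channels=6)
encoder_k = TSEncoder(in_channels=6)
encoder_k.load_state_dict(encoder_q.state_dict())
for p in encoder_k.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(encoder_q.parameters(), lr=1e-3)

x = torch.randn(32, 128, 6)                  # stand-in unlabeled batch
q = encoder_q(augment(x))                    # query view
with torch.no_grad():
    k = encoder_k(augment(x))                # key view from momentum encoder
loss = info_nce(q, k)
loss.backward()
opt.step(); opt.zero_grad()
momentum_update(encoder_q, encoder_k)
```

For downstream use in the spirit of the abstract, the pretrained `encoder_q` would be kept (frozen or fine-tuned) and only a linear classifier such as `nn.Linear(32, n_classes)` optimized on its embeddings, i.e. standard linear probing on the learned representations.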