Model enhancement and personalization using weakly supervised learning for multi-modal mobile sensing

Bibliographic Details
Main authors: Teng, Diyan; Kulkarni, Rashmi; McGloin, Justin
Format: Article
Language: English
Description
Abstract: Always-on sensing of a mobile device user's contextual information is critical to many intelligent use cases today, such as healthcare, driving assistance, and voice UI. State-of-the-art approaches for predicting user context have demonstrated the value of leveraging multiple sensing modalities for better accuracy. However, context inference algorithms that run on the application processor tend to consume a substantial amount of power, making them unsuitable for an always-on implementation. We argue that not every sensing modality is suitable to be activated all the time, and that it remains challenging to build an inference engine using only power-friendly sensing modalities. Moreover, given the diversity of the user population, it is difficult to learn a context inference model that generalizes well with limited training data, especially when using only always-on, low-power sensors. In this work, we propose an approach that leverages the opportunistically-on counterpart sensors in the device to improve the always-on prediction model, leading to a personalized solution. We model this problem within a weakly supervised learning framework and provide both theoretical and experimental results to validate our design. The proposed framework achieves satisfactory results in the IMU-based activity recognition application we considered.
DOI:10.48550/arxiv.1910.13401
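
The abstract describes the approach only at a high level. As a rough illustration of the general weak-supervision idea (not the authors' actual method), the sketch below adapts a low-power, always-on IMU classifier using confident predictions from a more accurate model that relies on a power-hungry, opportunistically available modality. All names, data shapes, thresholds, and the simple logistic-regression learners are illustrative assumptions.

# Hypothetical sketch: personalizing an always-on, low-power activity model with
# weak labels produced by an opportunistically-on, higher-power model.
# Everything here (features, thresholds, learner) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=200, w=None):
    """Fit a binary logistic-regression classifier with plain gradient descent."""
    n, d = X.shape
    w = np.zeros(d) if w is None else w.copy()
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / n
    return w

# Synthetic stand-ins: low-power IMU features (always on) and a richer feature
# set that a power-hungry modality would provide when it happens to be active.
n = 2000
activity = rng.integers(0, 2, size=n)                   # true context (e.g. walk vs. still)
imu = activity[:, None] * 0.8 + rng.normal(0, 1.0, (n, 3))
rich = activity[:, None] * 2.0 + rng.normal(0, 0.5, (n, 6))

# 1) Generic always-on model, trained offline on a small labeled population sample.
idx = rng.choice(n, size=100, replace=False)
w_imu = train_logreg(imu[idx], activity[idx].astype(float))

# 2) Opportunistic "teacher" model on the power-hungry modality, assumed to be
#    more accurate whenever that modality is available.
w_rich = train_logreg(rich[idx], activity[idx].astype(float))

# 3) Personalization: when the rich modality is opportunistically on, treat its
#    confident predictions as weak (noisy) labels and adapt the IMU model to them.
on_mask = rng.random(n) < 0.2                            # modality available ~20% of the time
p_teacher = sigmoid(rich[on_mask] @ w_rich)
confident = (p_teacher > 0.9) | (p_teacher < 0.1)
weak_labels = (p_teacher[confident] > 0.5).astype(float)
w_personal = train_logreg(imu[on_mask][confident], weak_labels, w=w_imu, epochs=100)

for name, w in [("generic IMU model", w_imu), ("personalized IMU model", w_personal)]:
    acc = np.mean((sigmoid(imu @ w) > 0.5) == activity)
    print(f"{name}: accuracy = {acc:.3f}")

In this toy setup the confidence threshold controls how noisy the weak labels are; the paper's weakly supervised formulation addresses that label noise in a principled way, which this sketch does not attempt to reproduce.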