Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data
Main author(s): | |
---|---|
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: | Mental health conditions remain underdiagnosed even in countries with common
access to advanced medical care. The ability to accurately and efficiently
predict mood from easily collectible data has several important implications
for the early detection, intervention, and treatment of mental health
disorders. One promising data source to help monitor human behavior is daily
smartphone usage. However, care must be taken to summarize behaviors without
identifying the user through personal (e.g., personally identifiable
information) or protected (e.g., race, gender) attributes. In this paper, we
study behavioral markers of daily mood using a recent dataset of mobile
behaviors from adolescent populations at high risk of suicidal behaviors. Using
computational models, we find that language and multimodal representations of
mobile typed text (spanning typed characters, words, keystroke timings, and app
usage) are predictive of daily mood. However, we find that models trained to
predict mood often also capture private user identities in their intermediate
representations. To tackle this problem, we evaluate approaches that obfuscate
user identity while remaining predictive. By combining multimodal
representations with privacy-preserving learning, we are able to push forward
the performance-privacy frontier. |
---|---|
DOI: | 10.48550/arxiv.2106.13213 |
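The abstract describes obfuscating user identity from learned representations while keeping them predictive of daily mood, but the record does not name the specific technique used in the paper. As a minimal illustrative sketch only (not the authors' method), the PyTorch snippet below assumes an adversarial setup with a gradient-reversal layer: a shared encoder feeds a mood classifier, while an identity classifier's gradient is reversed so the encoder learns to discard user-identifying signal. All module names, dimensions, and the synthetic data are hypothetical.

```python
# Illustrative sketch: adversarial identity obfuscation via gradient reversal.
# This is an assumption for exposition, not the method from arXiv:2106.13213.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class PrivacyPreservingMoodModel(nn.Module):
    def __init__(self, input_dim=64, hidden_dim=32, num_moods=3, num_users=50, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Shared encoder over hypothetical fused multimodal features
        # (e.g., typed-text, keystroke-timing, and app-usage summaries).
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mood_head = nn.Linear(hidden_dim, num_moods)       # main task: daily mood
        self.identity_head = nn.Linear(hidden_dim, num_users)   # adversary: user identity

    def forward(self, x):
        z = self.encoder(x)
        mood_logits = self.mood_head(z)
        # Reversing the gradient from the identity adversary pushes the encoder
        # to make z uninformative about which user produced the data.
        identity_logits = self.identity_head(GradReverse.apply(z, self.lambd))
        return mood_logits, identity_logits


if __name__ == "__main__":
    # Purely synthetic stand-in data for demonstration.
    torch.manual_seed(0)
    x = torch.randn(128, 64)
    mood_labels = torch.randint(0, 3, (128,))
    user_labels = torch.randint(0, 50, (128,))

    model = PrivacyPreservingMoodModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    for step in range(5):
        mood_logits, identity_logits = model(x)
        # Both losses are minimized: the identity head learns to predict users,
        # while the reversed gradient trains the encoder to hide that information.
        loss = criterion(mood_logits, mood_labels) + criterion(identity_logits, user_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In a sketch like this, the trade-off between mood accuracy and identity leakage can be tuned through the gradient-reversal weight (`lambd` above), which is one simple way to explore the performance-privacy frontier mentioned in the abstract.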