When the machine learns from users, is it helping or snooping?


Detailed Description

Bibliographic Details
Published in: Computers in Human Behavior, 2023-01, Vol. 138, p. 107427, Article 107427
Main Authors: Lee, Sangwook; Moon, Won-Ki; Lee, Jae-Gil; Sundar, S. Shyam
Format: Article
Language: English
Online Access: Full Text
Description
Summary: Media systems that personalize their offerings keep track of users’ tastes by constantly learning from their activities. Some systems use this characteristic of machine learning to encourage users with statements like “the more you use the system, the better it can serve you in the future.” However, it is not clear whether users indeed feel encouraged and consider the system helpful and beneficial, or begin to worry about jeopardizing their privacy in the process. We conducted a between-subjects experiment (N = 269) to find out. Guided by the HAII-TIME model (Sundar, 2020), we examined the effects of both explicit and implicit interface cues conveying that the machine is learning. Data indicate that users consider the system a helper and tend to trust it more when it is transparent about its learning, regardless of the quality of its performance and the degree of explicitness in conveying that it is learning from their activities. The study found no evidence of privacy concerns arising from the machine disclosing that it is learning from its users. We discuss theoretical and practical implications of deploying machine learning cues to enhance the user experience of AI-embedded systems.

Highlights:
• When a system is transparent about its learning (machine learning cue), users consider the system a helper.
• When users perceive an AI system to be a helper, their frustration is low and they express more trust in the system.
• The machine learning cue is effective regardless of the system’s performance and the explicitness of the cue.
• There is no evidence to suggest that the machine learning cue evokes privacy concerns.
ISSN: 0747-5632
eISSN: 1873-7692
DOI: 10.1016/j.chb.2022.107427