Meet Malexa, Alexa’s malicious twin: Malware-induced misperception through intelligent voice assistants
Published in: International Journal of Human-Computer Studies, 2021-05, Vol. 149, p. 102604, Article 102604
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary:
- A novel attack called "malware-induced misperception" is introduced in intelligent voice assistant environments.
- The attack leads users towards making false or implausible interpretations of a set of true facts.
- The attack works regardless of the user's gender, political ideology, or frequency of interaction with voice assistants.
- Countermeasures for preventing "malware-induced misperception" attacks in intelligent voice assistant environments are proposed.
- The implications of the attack on a larger scale, considering social media and polarized online content, are elaborated.
This paper reports the findings of a study where users (N=220) interacted with Malexa, Alexa’s malicious twin. Malexa is an intelligent voice assistant with a simple and seemingly harmless third-party skill that delivers news briefings to users. The twist, however, is that Malexa covertly rewords these briefings to intentionally introduce misperception about the reported events. This covert rewording is referred to as a Malware-Induced Misperception (MIM) attack. It differs from squatting or invocation hijacking attacks in that it is focused on manipulating the “content” delivered through a third-party skill instead of the skill’s “invocation logic.” In the study, Malexa reworded regulatory briefings to make a government response sound more accidental or lenient than the original news delivered by Alexa. The results show that users who interacted with Malexa perceived that the government was less friendly to working people and more in favor of big businesses. The results also show that Malexa is capable of inducing misperceptions regardless of the user’s political ideology, gender identity, age, or frequency of interaction with intelligent voice assistants. We propose a system-level solution for countering Malexa and discuss the implications of using Malexa as a covert “influencer” in people’s living environments.
ISSN: 1071-5819, 1095-9300
DOI: 10.1016/j.ijhcs.2021.102604