Exploring accidental triggers of smart speakers
Saved in:
Published in: Computer Speech & Language 2022-05, Vol. 73, p. 101328, Article 101328
Main Authors: , , , , ,
Format: Article
Language: English
Online Access: Full text
Abstract: Voice assistants like Amazon’s Alexa, Google’s Assistant, Tencent’s Xiaowei, or Apple’s Siri have become the primary (voice) interface in smart speakers that can be found in millions of households. For privacy reasons, these speakers analyze every sound in their environment for their respective wake word, such as “Alexa,” “Jiǔsì’èr líng,” or “Hey Siri,” before uploading the audio stream to the cloud for further processing. Previous work reported examples of inaccurate wake word detection, which can be tricked using similar-sounding words or phrases like “cocaine noodles” instead of “OK Google.”
In this paper, we perform a comprehensive analysis of such accidental triggers, i.e., sounds that should not have triggered the voice assistant but did. More specifically, we automate the process of finding accidental triggers and measure their prevalence across 11 smart speakers from 8 different manufacturers using everyday media such as TV shows, news, and other kinds of audio datasets. To systematically detect accidental triggers, we describe a method to artificially craft such triggers using a pronouncing dictionary and a weighted, phone-based Levenshtein distance. In total, we found hundreds of accidental triggers. Moreover, we explore potential gender and language biases and analyze their reproducibility. Finally, we discuss the resulting privacy implications of accidental triggers and explore countermeasures to reduce and limit their impact on users’ privacy. To foster additional research on these sounds that mislead machine learning models, we publish a dataset of more than 350 verified triggers as a research artifact.
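The synthesis method described in the abstract scores candidate words by a weighted, phone-based Levenshtein distance between pronouncing-dictionary entries. A minimal sketch of such a metric, using illustrative weights (the paper's actual weighting scheme and dictionary lookup are not reproduced here; the cost function below is a hypothetical assumption):

```python
# Sketch of a weighted, phone-based Levenshtein distance over ARPAbet phone
# sequences (e.g., from the CMU Pronouncing Dictionary, stress markers stripped).
# The substitution weights are illustrative, not the paper's actual scheme.

VOWELS = {"AA", "AE", "AH", "AO", "AW", "AY", "EH", "ER", "EY",
          "IH", "IY", "OW", "OY", "UH", "UW"}

def sub_cost(p1, p2):
    """Substitution cost between two phones (hypothetical weighting)."""
    if p1 == p2:
        return 0.0
    # Phones of the same broad class (vowel/vowel or consonant/consonant)
    # are treated as acoustically "closer" and cost less to substitute.
    return 0.5 if (p1 in VOWELS) == (p2 in VOWELS) else 1.0

def weighted_phone_levenshtein(a, b, ins_del_cost=1.0):
    """Dynamic-programming edit distance between phone sequences a and b."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * ins_del_cost
    for j in range(1, n + 1):
        d[0][j] = j * ins_del_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + ins_del_cost,                      # deletion
                d[i][j - 1] + ins_del_cost,                      # insertion
                d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]),  # substitution
            )
    return d[m][n]

# Example: the wake word "Alexa" vs. the candidate word "election".
alexa = ["AH", "L", "EH", "K", "S", "AH"]
election = ["IH", "L", "EH", "K", "SH", "AH", "N"]
print(weighted_phone_levenshtein(alexa, election))  # → 2.0
```

Candidates whose distance to a wake word falls below some threshold would then be played back to the speakers to verify whether they actually trigger.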
Highlights:
- Measurement setup to study the prevalence of accidental triggers in smart speakers.
- Analysis of a diverse set of audio sources, exploration of potential gender and language biases, and reproducibility.
- A method to synthesize accidental triggers via a pronouncing dictionary and a weighted, phone-based distance metric.
- Analysis of how commercial companies deal with accidental triggers in practice.
- Discussion of potential countermeasures that can help to reduce the impact of accidental triggers on users’ privacy.
ISSN: 0885-2308, 1095-8363
DOI: 10.1016/j.csl.2021.101328