Improving Voice Trigger Detection with Metric Learning
Voice trigger detection is an important task that enables activating a voice assistant when a target user speaks a keyword phrase. A detector is typically trained on speech data independent of speaker information and used for the voice trigger detection task. However, such a speaker-independent voice trigger detector typically suffers from performance degradation on speech from underrepresented groups, such as accented speakers. In this work, we propose a novel voice trigger detector that can use a small number of utterances from a target speaker to improve detection accuracy. Our proposed model employs an encoder-decoder architecture. While the encoder performs speaker-independent voice trigger detection, similar to the conventional detector, the decoder predicts a personalized embedding for each utterance. A personalized voice trigger score is then obtained as a similarity score between the embeddings of enrollment utterances and a test utterance. The personalized embedding allows adapting to the target speaker's speech when computing the voice trigger score, hence improving voice trigger detection accuracy. Experimental results show that the proposed approach achieves a 38% relative reduction in false rejection rate (FRR) compared to a baseline speaker-independent voice trigger model.
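The scoring step described in the abstract can be illustrated with a minimal sketch, shown below. This is not the authors' implementation: the averaging of enrollment embeddings, the use of cosine similarity, and the rule for combining the similarity with the speaker-independent score are assumptions, and names such as `enrollment_embeddings`, `test_embedding`, and `si_score` are hypothetical stand-ins for real model outputs.

```python
# Minimal sketch of the personalized scoring idea from the abstract.
# Assumptions (not specified in the abstract): enrollment embeddings are
# averaged into a single profile, similarity is cosine similarity, and the
# final score is a weighted sum with the speaker-independent trigger score.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def personalized_trigger_score(
    enrollment_embeddings: np.ndarray,  # shape (num_enrollment_utts, dim), hypothetical decoder outputs
    test_embedding: np.ndarray,         # shape (dim,), decoder output for the test utterance
    si_score: float,                    # speaker-independent trigger score from the encoder
    alpha: float = 0.5,                 # assumed mixing weight between the two scores
) -> float:
    """Combine a speaker-independent score with an embedding-similarity score."""
    profile = enrollment_embeddings.mean(axis=0)       # average enrollment utterances into one profile
    sim = cosine_similarity(profile, test_embedding)   # personalized similarity score
    return alpha * si_score + (1.0 - alpha) * sim      # assumed combination rule


# Toy usage with random embeddings standing in for real model outputs.
rng = np.random.default_rng(0)
enroll = rng.normal(size=(3, 128))
test = enroll.mean(axis=0) + 0.1 * rng.normal(size=128)
print(personalized_trigger_score(enroll, test, si_score=0.7))
```

Averaging the enrollment embeddings into a single profile is only one simple choice for this sketch; comparing the test embedding against each enrollment embedding and taking the maximum similarity would be an equally plausible variant.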
Saved in:
Main authors: | Nayak, Prateeth; Higuchi, Takuya; Gupta, Anmol; Ranjan, Shivesh; Shum, Stephen; Sigtia, Siddharth; Marchi, Erik; Lakshminarasimhan, Varun; Cho, Minsik; Adya, Saurabh; Dhir, Chandra; Tewfik, Ahmed |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Computer Science - Sound |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Nayak, Prateeth; Higuchi, Takuya; Gupta, Anmol; Ranjan, Shivesh; Shum, Stephen; Sigtia, Siddharth; Marchi, Erik; Lakshminarasimhan, Varun; Cho, Minsik; Adya, Saurabh; Dhir, Chandra; Tewfik, Ahmed |
description | Voice trigger detection is an important task that enables activating a voice assistant when a target user speaks a keyword phrase. A detector is typically trained on speech data independent of speaker information and used for the voice trigger detection task. However, such a speaker-independent voice trigger detector typically suffers from performance degradation on speech from underrepresented groups, such as accented speakers. In this work, we propose a novel voice trigger detector that can use a small number of utterances from a target speaker to improve detection accuracy. Our proposed model employs an encoder-decoder architecture. While the encoder performs speaker-independent voice trigger detection, similar to the conventional detector, the decoder predicts a personalized embedding for each utterance. A personalized voice trigger score is then obtained as a similarity score between the embeddings of enrollment utterances and a test utterance. The personalized embedding allows adapting to the target speaker's speech when computing the voice trigger score, hence improving voice trigger detection accuracy. Experimental results show that the proposed approach achieves a 38% relative reduction in false rejection rate (FRR) compared to a baseline speaker-independent voice trigger model. |
doi_str_mv | 10.48550/arxiv.2204.02455 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2204.02455 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2204_02455 |
source | arXiv.org |
subjects | Computer Science - Learning; Computer Science - Sound |
title | Improving Voice Trigger Detection with Metric Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T07%3A24%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Improving%20Voice%20Trigger%20Detection%20with%20Metric%20Learning&rft.au=Nayak,%20Prateeth&rft.date=2022-04-05&rft_id=info:doi/10.48550/arxiv.2204.02455&rft_dat=%3Carxiv_GOX%3E2204_02455%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |