Improving Voice Trigger Detection with Metric Learning


Bibliographic Details

Published in: arXiv.org, 2022-09
Main authors: Nayak, Prateeth; Higuchi, Takuya; Gupta, Anmol; Shivesh Ranjan; Shum, Stephen; Sigtia, Siddharth; Marchi, Erik; Lakshminarasimhan, Varun; Cho, Minsik; Adya, Saurabh; Dhir, Chandra; Tewfik, Ahmed
Format: Article
Language: English
Online access: Full text
Abstract: Voice trigger detection is an important task that enables activating a voice assistant when a target user speaks a keyword phrase. A detector is typically trained on speech data independent of speaker information and used for the voice trigger detection task. However, such a speaker-independent voice trigger detector typically suffers from performance degradation on speech from underrepresented groups, such as accented speakers. In this work, we propose a novel voice trigger detector that can use a small number of utterances from a target speaker to improve detection accuracy. Our proposed model employs an encoder-decoder architecture. While the encoder performs speaker-independent voice trigger detection, similar to the conventional detector, the decoder predicts a personalized embedding for each utterance. A personalized voice trigger score is then obtained as a similarity score between the embeddings of enrollment utterances and a test utterance. The personalized embedding allows adapting to the target speaker's speech when computing the voice trigger score, hence improving voice trigger detection accuracy. Experimental results show that the proposed approach achieves a 38% relative reduction in false rejection rate (FRR) compared to a baseline speaker-independent voice trigger model.
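The scoring step the abstract describes can be sketched in a few lines: compare the embedding of a test utterance against the embeddings of the target speaker's enrollment utterances. This is a minimal illustration only, assuming embeddings are plain vectors, that enrollment embeddings are averaged into one profile vector, and that cosine similarity is the metric; the paper may combine or compare them differently, and all names here are illustrative.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def personalized_trigger_score(enrollment_embeddings, test_embedding):
    """Score a test utterance against a target speaker's enrollment set.

    Per the abstract, the personalized score is a similarity between the
    embeddings of enrollment utterances and the test utterance. Here we
    average the enrollment embeddings into a single profile vector and
    take its cosine similarity with the test embedding (an assumption;
    the paper's exact aggregation may differ).
    """
    n = len(enrollment_embeddings)
    dim = len(test_embedding)
    profile = [sum(e[i] for e in enrollment_embeddings) / n for i in range(dim)]
    return cosine_similarity(profile, test_embedding)

# Toy example: two 2-D enrollment embeddings close to the test embedding.
enroll = [[1.0, 0.0], [0.8, 0.2]]
score = personalized_trigger_score(enroll, [1.0, 0.1])
```

In a deployed detector, this personalized score would be combined with (or gate) the encoder's speaker-independent trigger score, so that a phrase both sounds like the keyword and sounds like the enrolled speaker.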
Identifier: EISSN 2331-8422
Source: Free E-Journals
Subjects: Coders
Customization
Embedding
Encoders-Decoders
Model accuracy
Performance degradation
Rejection rate
Sensors
Speech
Target detection
Voice activity detectors
Voice recognition