Improving Multi-Scale Aggregation Using Feature Pyramid Module for Robust Speaker Verification of Variable-Duration Utterances
Currently, the most widely used approach for speaker verification is deep speaker embedding learning. In this approach, we obtain a speaker embedding vector by pooling single-scale features extracted from the last layer of a speaker feature extractor. Multi-scale aggregation (MSA), which utilizes multi-scale features from different layers of the feature extractor, has recently been introduced and shows superior performance for variable-duration utterances. To increase robustness to utterances of arbitrary duration, this paper improves MSA by using a feature pyramid module. The module enhances the speaker-discriminative information of features from multiple layers via a top-down pathway and lateral connections. We extract speaker embeddings using the enhanced features, which contain rich speaker information at different time scales. Experiments on the VoxCeleb dataset show that the proposed module improves on previous MSA methods with a smaller number of parameters. It also achieves better performance than state-of-the-art approaches for both short and long utterances.
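The abstract describes an FPN-style design: features from several backbone layers are projected through 1x1 lateral convolutions, fused along a top-down pathway by upsampling deeper maps, and then pooled into a single fixed-size speaker embedding. The PyTorch sketch below illustrates that idea; the channel widths, the mean-plus-standard-deviation statistics pooling, and all layer names are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeaturePyramidMSA(nn.Module):
    """FPN-style multi-scale aggregation for speaker embeddings (sketch)."""

    def __init__(self, in_channels=(128, 256, 512), pyramid_dim=256, embed_dim=512):
        super().__init__()
        # 1x1 lateral convolutions project each backbone stage to a common width.
        self.laterals = nn.ModuleList(
            nn.Conv1d(c, pyramid_dim, kernel_size=1) for c in in_channels
        )
        # 3x3 convolutions smooth each level after top-down fusion.
        self.smooth = nn.ModuleList(
            nn.Conv1d(pyramid_dim, pyramid_dim, kernel_size=3, padding=1)
            for _ in in_channels
        )
        # Mean + std statistics pooling per level, concatenated across levels,
        # then projected to the final speaker embedding.
        self.fc = nn.Linear(2 * pyramid_dim * len(in_channels), embed_dim)

    def forward(self, feats):
        # feats: list of (batch, channels, time) maps from shallow to deep
        # layers of the feature extractor; deeper maps are shorter in time.
        levels = [lat(f) for lat, f in zip(self.laterals, feats)]
        # Top-down pathway: upsample each deeper map and add it to the next
        # shallower lateral connection, enhancing the shallow features.
        for i in range(len(levels) - 2, -1, -1):
            levels[i] = levels[i] + F.interpolate(levels[i + 1], size=levels[i].shape[-1])
        stats = []
        for conv, x in zip(self.smooth, levels):
            x = conv(x)
            # Pooling over time makes the output independent of utterance length.
            stats.append(torch.cat([x.mean(dim=-1), x.std(dim=-1)], dim=-1))
        return self.fc(torch.cat(stats, dim=-1))  # (batch, embed_dim)


# Hypothetical usage: three stages of a backbone over one batch of utterances.
feats = [torch.randn(8, 128, 300), torch.randn(8, 256, 150), torch.randn(8, 512, 75)]
embedding = FeaturePyramidMSA()(feats)
print(embedding.shape)  # torch.Size([8, 512])
```

Because the statistics pooling collapses the time axis at every pyramid level, the embedding has the same size whatever the input duration, which is what makes this kind of aggregation attractive for variable-length utterances.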
Published in: | arXiv.org, 2020-08
Main authors: | Jung, Youngmoon; Kye, Seong Min; Choi, Yeunju; Jung, Myunghun; Kim, Hoirin
Format: | Article
Language: | English
Subjects: | Agglomeration; Artificial neural networks; Computer Science - Computation and Language; Computer Science - Learning; Computer Science - Sound; Feature extraction; Modules; Statistics - Machine Learning; Verification
Online access: | Full text
container_title | arXiv.org
creator | Jung, Youngmoon; Kye, Seong Min; Choi, Yeunju; Jung, Myunghun; Kim, Hoirin
description | Currently, the most widely used approach for speaker verification is deep speaker embedding learning. In this approach, we obtain a speaker embedding vector by pooling single-scale features extracted from the last layer of a speaker feature extractor. Multi-scale aggregation (MSA), which utilizes multi-scale features from different layers of the feature extractor, has recently been introduced and shows superior performance for variable-duration utterances. To increase robustness to utterances of arbitrary duration, this paper improves MSA by using a feature pyramid module. The module enhances the speaker-discriminative information of features from multiple layers via a top-down pathway and lateral connections. We extract speaker embeddings using the enhanced features, which contain rich speaker information at different time scales. Experiments on the VoxCeleb dataset show that the proposed module improves on previous MSA methods with a smaller number of parameters. It also achieves better performance than state-of-the-art approaches for both short and long utterances.
doi_str_mv | 10.48550/arxiv.2004.03194 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2004_03194 |
source | arXiv.org; Free E-Journals
subjects | Agglomeration; Artificial neural networks; Computer Science - Computation and Language; Computer Science - Learning; Computer Science - Sound; Feature extraction; Modules; Statistics - Machine Learning; Verification
title | Improving Multi-Scale Aggregation Using Feature Pyramid Module for Robust Speaker Verification of Variable-Duration Utterances |