Speaker Re-identification with Speaker Dependent Speech Enhancement

While the use of deep neural networks has significantly boosted speaker recognition performance, it is still challenging to separate speakers in poor acoustic environments. Here, speech enhancement methods have traditionally allowed improved performance. Recent works have shown that adapting speech enhancement can lead to further gains. This paper introduces a novel approach that cascades speech enhancement and speaker recognition. In the first step, a speaker embedding vector is generated, which is used in the second step to enhance the speech quality and re-identify the speakers. The models are trained in an integrated framework with joint optimisation. The proposed approach is evaluated on the Voxceleb1 dataset, which aims to assess speaker recognition in real-world situations. In addition, three types of noise at different signal-to-noise ratios were added for this work. The results show that the proposed approach using speaker-dependent speech enhancement can yield better speaker recognition and speech enhancement performance than two baselines in various noise conditions.
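The abstract describes a two-step cascade: first extract a speaker embedding, then use it to condition the enhancement front-end before re-identifying the speaker. A minimal sketch of that control flow is below, assuming toy stand-ins (mean-pooled embeddings, a sigmoid gate, cosine scoring) for the authors' actual neural networks; all function names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_embedding(features: np.ndarray) -> np.ndarray:
    """Step 1: toy speaker embedding = time-averaged features, L2-normalised."""
    emb = features.mean(axis=0)
    return emb / (np.linalg.norm(emb) + 1e-8)

def enhance(features: np.ndarray, embedding: np.ndarray) -> np.ndarray:
    """Step 2a: speaker-dependent enhancement via a per-frame sigmoid gate
    conditioned on the embedding (a stand-in for the enhancement network)."""
    gate = 1.0 / (1.0 + np.exp(-(features @ embedding)))  # shape (T,)
    return features * gate[:, None]

def identify(embedding: np.ndarray, enrolled: dict) -> str:
    """Step 2b: re-identify as the enrolled speaker with highest cosine score."""
    return max(enrolled, key=lambda spk: float(embedding @ enrolled[spk]))

# Toy data: 40-dim features over 50 frames, two enrolled speakers.
noisy = rng.normal(size=(50, 40))
enrolled = {
    "spk_a": extract_embedding(noisy),                       # same utterance
    "spk_b": extract_embedding(rng.normal(size=(50, 40))),   # different speaker
}

emb = extract_embedding(noisy)       # step 1
clean_est = enhance(noisy, emb)      # step 2a: embedding-conditioned enhancement
decision = identify(emb, enrolled)   # step 2b: re-identification
```

In the paper both steps are trained jointly; here they are shown as separate functions purely to make the data flow of the cascade explicit.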

Bibliographic Details
Main authors: Shi, Yanpei; Huang, Qiang; Hain, Thomas
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Shi, Yanpei
Huang, Qiang
Hain, Thomas
description While the use of deep neural networks has significantly boosted speaker recognition performance, it is still challenging to separate speakers in poor acoustic environments. Here, speech enhancement methods have traditionally allowed improved performance. Recent works have shown that adapting speech enhancement can lead to further gains. This paper introduces a novel approach that cascades speech enhancement and speaker recognition. In the first step, a speaker embedding vector is generated, which is used in the second step to enhance the speech quality and re-identify the speakers. The models are trained in an integrated framework with joint optimisation. The proposed approach is evaluated on the Voxceleb1 dataset, which aims to assess speaker recognition in real-world situations. In addition, three types of noise at different signal-to-noise ratios were added for this work. The results show that the proposed approach using speaker-dependent speech enhancement can yield better speaker recognition and speech enhancement performance than two baselines in various noise conditions.
doi_str_mv 10.48550/arxiv.2005.07818
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2005.07818
language eng
recordid cdi_arxiv_primary_2005_07818
source arXiv.org
subjects Computer Science - Computation and Language
Computer Science - Sound
title Speaker Re-identification with Speaker Dependent Speech Enhancement