Short-time speaker verification with different speaking style utterances
In recent years, great progress has been made in the technical aspects of automatic speaker verification (ASV). However, deploying ASV technology more widely remains challenging, because most systems are still very sensitive to new, unknown, and spoofing conditions. Most previous studies focused on extracting target-speaker information from natural speech. This paper aims to design a new ASV corpus with multiple speaking styles and to investigate the robustness of ASV to these different styles. We first release this corpus on the Zenodo website for public research; each speaker provides several text-dependent and text-independent singing, humming, and normal reading utterances. We then investigate the speaker discrimination of each speaking style in the feature space. Furthermore, the intra- and inter-speaker variabilities within each speaking style and across speaking styles are investigated in both text-dependent and text-independent ASV tasks. A conventional Gaussian Mixture Model (GMM) and the state-of-the-art x-vector approach are used to build the ASV systems. Experimental results show that the voiceprint information in humming and singing speech is more distinguishable than that in normal reading speech for conventional ASV systems. Furthermore, we find that combining the three speaking styles can significantly improve the x-vector-based ASV system, even though only limited gains are obtained by the conventional GMM-based systems.
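To make the "conventional GMM" baseline mentioned in the abstract concrete, here is a minimal, illustrative sketch of GMM-based verification scoring. This is not the paper's implementation: the use of scikit-learn, the feature dimensionality (39, as for standard MFCCs with deltas), the component counts, and the helper names are all assumptions made for the example, and real GMM-UBM systems typically MAP-adapt the speaker model from the background model rather than training it independently.

```python
# Sketch of GMM-based speaker verification scoring (illustrative only;
# not the authors' system). Assumes frame-level features such as MFCCs.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm(features: np.ndarray, n_components: int = 64) -> GaussianMixture:
    """Fit a diagonal-covariance GMM on frame-level features, shape (n_frames, n_dims)."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag",
                          max_iter=200,
                          random_state=0)
    gmm.fit(features)
    return gmm

def verification_score(test_features: np.ndarray,
                       speaker_gmm: GaussianMixture,
                       ubm: GaussianMixture) -> float:
    """Average log-likelihood ratio of the claimed speaker model against a
    universal background model (UBM); higher means 'more likely the claimed speaker'."""
    return float(speaker_gmm.score(test_features) - ubm.score(test_features))

# Usage with random stand-ins for 39-dim MFCC frames, just to show the shapes:
rng = np.random.default_rng(0)
ubm = train_gmm(rng.standard_normal((2000, 39)))        # pooled many-speaker data in practice
target = train_gmm(rng.standard_normal((500, 39)))      # one speaker's enrollment utterances
score = verification_score(rng.standard_normal((300, 39)), target, ubm)
print(f"LLR score: {score:.3f} (accept if above a tuned threshold)")
```

In a study like this one, such a score would be computed separately for matched-style trials (e.g., humming enrollment vs. humming test) and cross-style trials to expose the intra- and inter-speaker variability the abstract describes.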
Published in: PLoS ONE, 2020-11, Vol. 15 (11), p. e0241809
Main authors: Mao, Hongwei; Shi, Yan; Liu, Yue; Wei, Linqiang; Li, Yijie; Long, Yanhua
Format: Article
Language: English
Online access: Full text
DOI: 10.1371/journal.pone.0241809
Contributor: McLoughlin, Ian
Publisher: Public Library of Science (United States)
Publication date: 2020-11-11
PMID: 33175898
ORCID: https://orcid.org/0000-0003-0924-408X
Rights: © 2020 Mao et al. Open access under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/)
ISSN: 1932-6203 (EISSN: 1932-6203)
Sources: MEDLINE; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; Public Library of Science (PLoS); PubMed Central; Free Full-Text Journals in Chemistry
Subjects: Access control; Audio equipment; Biology and Life Sciences; Computer and Information Sciences; Engineering and Technology; Human-computer interaction; Humans; Identification and classification; Laboratories; Methods; Normal Distribution; Physical Sciences; Probabilistic models; Public speakers; Reading; Singing; Social Sciences; Speaking; Speech; Speech Acoustics; Speech acts (Linguistics); Speech Perception; Speech recognition; Spoofing; Students; Verification; Verification (Logic); Websites