Speaker authentication in digital communication networks

Example embodiments provide a speaker authentication technology that compensates for mismatches between enrollment process conditions and test process conditions using correction parameters or correction models, which allow for correcting one of the test voice characterizing parameter set and the enrollment voice characterizing parameter set according to a mismatch between the test process conditions and the enrollment process conditions, thereby obtaining values for the test voice characterizing parameter set and the enrollment voice characterizing parameter set that are based on the same or at least similar process conditions. Alternatively, each of the enrollment and test voice characterizing parameter sets may be normalized to predetermined standard process conditions by using the correction parameters or correction models. This abstract is provided to comply with rules requiring an abstract, and it is submitted with the intention that it will not be used to interpret or limit the scope or meaning of the claims.
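
The abstract describes two complementary strategies: correct one of the two voice characterizing parameter sets according to the observed condition mismatch, or normalize both sets to predetermined standard process conditions. As a rough illustration of the second strategy only, the Python sketch below matches the per-dimension mean and variance of the enrollment and test parameter sets to fixed standard statistics before comparing them; the choice of mean/variance matching as the correction model, the cosine-similarity comparison, the acceptance threshold, and all names in the code are assumptions made for illustration and are not taken from the patent.

```python
import numpy as np


def normalize_to_standard(features, standard_mean, standard_std, eps=1e-8):
    """Map a feature matrix (frames x dims) onto predetermined standard
    process conditions by matching its per-dimension mean and variance.
    Mean/variance matching is only one possible correction model; the
    patent abstract leaves the concrete model open."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + eps
    return (features - mean) / std * standard_std + standard_mean


def authenticate(enroll_features, test_features, standard_mean, standard_std,
                 threshold=0.7):
    """Normalize both voice characterizing parameter sets to the same
    standard conditions, average them into one vector each, and accept
    the speaker if the cosine similarity exceeds a (purely illustrative)
    threshold."""
    enroll_norm = normalize_to_standard(enroll_features, standard_mean, standard_std)
    test_norm = normalize_to_standard(test_features, standard_mean, standard_std)

    enroll_vec = enroll_norm.mean(axis=0)
    test_vec = test_norm.mean(axis=0)

    score = float(np.dot(enroll_vec, test_vec)
                  / (np.linalg.norm(enroll_vec) * np.linalg.norm(test_vec)))
    return score >= threshold, score


if __name__ == "__main__":
    # Placeholder 20-dimensional cepstral-like features drawn at random to
    # mimic different recording conditions for enrollment and test.
    rng = np.random.default_rng(0)
    standard_mean, standard_std = np.zeros(20), np.ones(20)
    enroll = rng.normal(0.5, 2.0, size=(200, 20))   # e.g. quiet enrollment session
    test = rng.normal(-0.3, 0.8, size=(150, 20))    # e.g. noisy phone-line test call
    accepted, score = authenticate(enroll, test, standard_mean, standard_std)
    print(f"accepted={accepted}, score={score:.3f}")
```

Under the abstract's first strategy, one would instead estimate a correction from the mismatch between the two recordings' process conditions and apply it to only one of the two parameter sets, leaving the other unchanged.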

Bibliographic details

Main authors: PILZ CHRISTIAN S; KUPPUSWAMY RAJA
Format: Patent
Language: English
Record ID: cdi_epo_espacenet_US2007233483A1
Publication: US2007233483A1 (published 2007-10-04)
Source: esp@cenet
Subjects: ACOUSTICS; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T20%3A17%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=PILZ%20CHRISTIAN%20S&rft.date=2007-10-04&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2007233483A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true