Speaker verification method based on deep neural network, terminal and storage medium

The invention discloses a speaker verification method based on a deep neural network, together with a terminal and a storage medium. The method comprises the following steps: acquiring voice data of a plurality of speakers from a preset data set; converting the voice data into two-dimensional data arrays through preprocessing, and dividing the arrays into a training set and a verification set according to a preset proportion; constructing a deep neural network from a residual neural network and a long short-term memory network, and training and validating it on the training and verification sets to obtain a trained deep neural network; and using the trained network to predict on a plurality of pieces of input audio from the speaker to be tested, outputting a verification result for that speaker. The method makes full use of the frequency-domain and time-domain feature information of the audio.
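The abstract outlines a four-step pipeline: 2-D preprocessing of the audio, a proportional training/verification split, a residual-plus-LSTM network, and prediction on a test speaker's utterances. Below is a minimal sketch of that pipeline in PyTorch. It assumes the "two-dimensional data group" is a log-mel spectrogram (the patent does not specify the transform) and fills in every unstated detail with illustrative choices: the `ResBlock`/`ResNetLSTM` module names, all hyperparameters (64 mel bins, 128-dim embeddings, a 3:1 split), the cosine-similarity decision rule, and the 0.7 threshold are assumptions, not the patent's actual design. The training loop (e.g., a speaker-classification loss) is omitted for brevity.

```python
# Sketch of the abstract's pipeline; hyperparameters and module names are
# illustrative assumptions, not taken from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio

# --- Step 1: preprocess waveforms into 2-D time-frequency arrays -----------
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_fft=400,
                                           hop_length=160, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

def preprocess(waveform):
    """waveform: (1, num_samples) -> (1, n_mels, num_frames) log-mel image."""
    return to_db(mel(waveform))

# Placeholder corpus: 8 random 1-second "utterances" standing in for the
# preset data set; a real system would load labeled speaker recordings.
utterances = [torch.randn(1, 16000) for _ in range(8)]
features = torch.stack([preprocess(u) for u in utterances])  # (8, 1, 64, 101)

# --- Step 2: split into training and verification sets by a preset ratio ---
split = int(0.75 * len(features))                 # assumed 3:1 proportion
train_set, valid_set = features[:split], features[split:]

# --- Step 3: ResNet-style blocks followed by an LSTM -----------------------
class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(ch)

    def forward(self, x):                         # identity shortcut
        y = F.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return F.relu(x + y)

class ResNetLSTM(nn.Module):
    """Conv blocks capture frequency-domain structure; the LSTM models the
    time-domain frame sequence, matching the abstract's stated motivation."""
    def __init__(self, n_mels=64, emb_dim=128):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, 3, padding=1)
        self.blocks = nn.Sequential(ResBlock(32), ResBlock(32))
        self.lstm = nn.LSTM(input_size=32 * n_mels, hidden_size=emb_dim,
                            batch_first=True)
        self.fc = nn.Linear(emb_dim, emb_dim)

    def forward(self, x):                          # x: (B, 1, n_mels, T)
        h = self.blocks(F.relu(self.stem(x)))      # (B, 32, n_mels, T)
        B, C, Fq, T = h.shape
        h = h.permute(0, 3, 1, 2).reshape(B, T, C * Fq)  # one vector per frame
        _, (hn, _) = self.lstm(h)                  # final hidden state
        return F.normalize(self.fc(hn[-1]), dim=-1)      # unit-norm embedding

# --- Step 4: verify a test speaker against an enrolled embedding -----------
# (Training with a suitable loss would precede this; omitted here.)
model = ResNetLSTM().eval()
with torch.no_grad():
    enrolled = model(train_set[:1])                # enrolled speaker embedding
    test = model(valid_set[:1])                    # utterance to be verified
    score = F.cosine_similarity(enrolled, test).item()
print("same speaker" if score > 0.7 else "different speaker", score)
```

In a real system the random placeholder waveforms would be replaced by labeled recordings, and the enrolled embedding would typically be averaged over several of the speaker's utterances before the comparison, which makes the cosine score more stable.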

Bibliographic Details

Main Authors: YANG BO, LIANG XINGWEI, ZHUANG XINNAN
Format: Patent
Publication Number: CN115223569A
Publication Date: 2022-10-21
Language: Chinese; English
Subjects: ACOUSTICS; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION
Source: esp@cenet
Online Access: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20221021&DB=EPODOC&CC=CN&NR=115223569A