End-to-end speech enhancement method based on a generative adversarial network

The invention discloses an end-to-end speech enhancement method based on a generative adversarial network (GAN). A noisy speech signal is fed directly into a pre-trained deep neural network, which processes the signal and outputs an enhanced speech signal. The deep neural network is obtained through the following training procedure: S1, preliminarily train a GAN comprising two deep neural networks, a generator G and a discriminator D; S2, perform knowledge distillation on simulated noisy speech using a traditional statistical speech enhancement algorithm, then train the GAN again; S3, fine-tune the trained generator G on real noisy speech; and S4, output the generator G trained in the above steps as the final deep neural network for speech enhancement.
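
The patent abstract does not disclose network architectures, losses, or hyperparameters. The PyTorch sketch below is therefore only an illustration of step S1, the preliminary adversarial training of generator G and discriminator D on paired simulated data; the layer shapes, the least-squares adversarial objective, and the L1 regression weight are all assumptions, not the patented method.

```python
# Sketch of step S1: preliminary adversarial training on paired
# (noisy, clean) simulated speech. Architectures, the LSGAN-style
# objective, and the L1 weight are assumptions; the patent does not
# specify any of them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Placeholder end-to-end enhancer operating on raw waveforms."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=31, padding=15), nn.PReLU(),
            nn.Conv1d(32, 32, kernel_size=31, padding=15), nn.PReLU(),
            nn.Conv1d(32, 1, kernel_size=31, padding=15), nn.Tanh(),
        )

    def forward(self, noisy):            # noisy: (batch, 1, samples)
        return self.net(noisy)

class Discriminator(nn.Module):
    """Placeholder critic scoring (speech, conditioning noisy) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=31, stride=4), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, kernel_size=31, stride=4), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, speech, noisy):
        return self.net(torch.cat([speech, noisy], dim=1))

def pretrain_gan(G, D, loader, epochs=10, device="cpu"):
    """S1: alternate discriminator and generator updates."""
    adv = nn.MSELoss()                   # least-squares GAN losses
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    for _ in range(epochs):
        for noisy, clean in loader:      # paired simulated batches
            noisy, clean = noisy.to(device), clean.to(device)
            # D: push real pairs toward 1 and enhanced pairs toward 0.
            fake = G(noisy).detach()
            d_real, d_fake = D(clean, noisy), D(fake, noisy)
            loss_d = (adv(d_real, torch.ones_like(d_real))
                      + adv(d_fake, torch.zeros_like(d_fake)))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            # G: fool D while staying close to the clean reference.
            enhanced = G(noisy)
            d_out = D(enhanced, noisy)
            loss_g = (adv(d_out, torch.ones_like(d_out))
                      + 100.0 * F.l1_loss(enhanced, clean))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G, D
```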

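Steps S2 through S4 can be sketched the same way. The patent names the distillation teacher only as "a traditional statistical speech enhancement algorithm", so plain spectral subtraction stands in below as one plausible choice; the adversarial-only objective for step S3 fine-tuning on real (unpaired) noisy speech is likewise a guess, since the patent does not state that loss. Step S4 simply keeps the fine-tuned generator as the deployed enhancer.

```python
# Sketch of steps S2-S4, reusing Generator/Discriminator from above.
# The spectral-subtraction teacher and the S3 loss are assumptions.
import torch
import torch.nn.functional as F

def spectral_subtraction(noisy, n_fft=512, noise_frames=5):
    """Toy teacher: estimate the noise magnitude from the first few STFT
    frames, subtract it, and resynthesize with the noisy phase."""
    x = noisy.squeeze(1)                              # (batch, samples)
    win = torch.hann_window(n_fft, device=x.device)
    spec = torch.stft(x, n_fft, window=win, return_complex=True)
    mag, phase = spec.abs(), spec.angle()
    noise_mag = mag[..., :noise_frames].mean(dim=-1, keepdim=True)
    est = torch.polar((mag - noise_mag).clamp(min=0.0), phase)
    out = torch.istft(est, n_fft, window=win, length=x.shape[-1])
    return out.unsqueeze(1)                           # (batch, 1, samples)

def distill_and_finetune(G, D, sim_loader, real_loader, device="cpu"):
    adv = torch.nn.MSELoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    # S2: distill the teacher's outputs into G on simulated noisy speech;
    # per the patent, the GAN is then trained again (re-run pretrain_gan).
    for noisy, _ in sim_loader:
        noisy = noisy.to(device)
        with torch.no_grad():
            teacher_out = spectral_subtraction(noisy)
        loss = F.l1_loss(G(noisy), teacher_out)
        opt_g.zero_grad(); loss.backward(); opt_g.step()
    # S3: fine-tune G on real noisy speech, for which no clean reference
    # exists; only the (frozen) discriminator provides a training signal.
    for noisy in real_loader:
        noisy = noisy.to(device)
        d_out = D(G(noisy), noisy)
        loss = adv(d_out, torch.ones_like(d_out))
        opt_g.zero_grad(); loss.backward(); opt_g.step()
    return G   # S4: the trained generator is the deployed enhancer
```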
Bibliographic details
Authors: WU JIANFENG, QIN HUIBIN, QIN HONGSHUAI
Format: Patent
Language: Chinese; English
Publication number: CN110390950A
Publication date: 2019-10-29
Source: esp@cenet
Online access: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20191029&DB=EPODOC&CC=CN&NR=110390950A
Subjects: ACOUSTICS; CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION