Motion sound false judgment method and device based on time-frequency graph and convolutional neural network
The invention discloses a motion sound false judgment method and device based on a time-frequency graph and a convolutional neural network. The method comprises the following steps: S1, splicing a plurality of motion sound segments with the same sound category label to form a motion audio; S2, randomly intercepting a plurality of motion sound segments and reverse sound segments from the motion audio and the reverse audio respectively in an oversampling mode and an undersampling mode to serve as forward sample data and reverse sample data of model training; S3, inputting the forward and reverse sample data into an improved convolutional neural network, and forming a motion sound false judgment model through iterative updating training; and S4, intercepting a to-be-recognized sound segment from the audio collected in the real environment, inputting the to-be-recognized sound segment into the motion sound false judgment model, and outputting a motion false judgment result of the to-be-recognized sound segment.
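Steps S1 and S2 of the abstract (splicing same-label segments, then randomly intercepting clips in over- and under-sampling modes) can be sketched as follows. The sample rate, clip length, function names, and toy signals are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

SR = 16000   # assumed sample rate (Hz); the patent does not specify one
SEG = SR     # assumed 1-second training clips

def splice(segments):
    """S1: concatenate same-label motion sound segments into one motion audio."""
    return np.concatenate(segments)

def random_clips(audio, n_clips, rng):
    """S2: randomly intercept fixed-length clips from an audio stream.
    Drawing more clips than the stream strictly contains acts as
    oversampling; drawing fewer acts as undersampling."""
    starts = rng.integers(0, len(audio) - SEG, size=n_clips)
    return np.stack([audio[s:s + SEG] for s in starts])

rng = np.random.default_rng(0)
motion = splice([rng.standard_normal(SR) for _ in range(3)])  # toy "motion audio"
reverse = rng.standard_normal(5 * SR)                         # toy "reverse audio"
pos = random_clips(motion, 8, rng)   # forward (positive) sample data
neg = random_clips(reverse, 4, rng)  # reverse (negative) sample data
```

Drawing more clips from the scarcer class and fewer from the abundant one is a common way to balance a training set, which appears to be the role of the over/under-sampling modes described in S2.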
Saved in:
Main authors: | ZHU SHAOGONG ; FENG XINGPAN ; WU YOUYIN |
---|---|
Format: | Patent |
Language: | chi ; eng |
Online access: | Order full text |
creator | ZHU SHAOGONG ; FENG XINGPAN ; WU YOUYIN |
description | The invention discloses a motion sound false judgment method and device based on a time-frequency graph and a convolutional neural network. The method comprises the following steps: S1, splicing a plurality of motion sound segments with the same sound category label to form a motion audio; S2, randomly intercepting a plurality of motion sound segments and reverse sound segments from the motion audio and the reverse audio respectively in an oversampling mode and an undersampling mode to serve as forward sample data and reverse sample data of model training; S3, inputting the forward and reverse sample data into an improved convolutional neural network, and forming a motion sound false judgment model through iterative updating training; and S4, intercepting a to-be-recognized sound segment from the audio collected in the real environment, inputting the to-be-recognized sound segment into the motion sound false judgment model, and outputting a motion false judgment result of the to-be-recognized sound segment. |
format | Patent |
creationdate | 2021-12-31 |
fulltext link | https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20211231&DB=EPODOC&CC=CN&NR=113870896A |
fulltext | fulltext_linktorsrc |
language | chi ; eng |
recordid | cdi_epo_espacenet_CN113870896A |
source | esp@cenet |
subjects | ACOUSTICS ; CALCULATING ; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS ; COMPUTING ; COUNTING ; MUSICAL INSTRUMENTS ; PHYSICS ; SPEECH ANALYSIS OR SYNTHESIS ; SPEECH OR AUDIO CODING OR DECODING ; SPEECH OR VOICE PROCESSING ; SPEECH RECOGNITION |
title | Motion sound false judgment method and device based on time-frequency graph and convolutional neural network |
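Step S3 of the abstract feeds a time-frequency graph of each clip to the convolutional network. A minimal log-magnitude spectrogram, assuming a 16 kHz one-second clip and illustrative window/hop sizes (the patent does not specify them), could be computed like this:

```python
import numpy as np

def time_frequency_graph(clip, n_fft=256, hop=128):
    """Convert a 1-D sound clip into a time-frequency graph (log-magnitude
    spectrogram) of the kind a convolutional network consumes. Window and
    hop sizes are illustrative assumptions."""
    window = np.hanning(n_fft)
    frames = [clip[i:i + n_fft] * window
              for i in range(0, len(clip) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log1p(spec).T  # shape: (freq_bins, time_frames)

rng = np.random.default_rng(1)
tfg = time_frequency_graph(rng.standard_normal(16000))
# `tfg` would then be passed, as a one-channel image, to the CNN classifier
# that outputs the true/false motion sound judgment
```

Treating the spectrogram as an image is what lets an ordinary image-style CNN do the classification; the "improved" network architecture itself is not described in the abstract, so it is not sketched here.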