Speech evaluation method and system

The present invention belongs to the technical field of speech recognition and discloses a speech evaluation method and system that aims to improve evaluation accuracy. The method comprises the following steps: a client collects a user's speech data, splits the collected speech word by word at uniform time intervals, records the split speech, and plays it back so that the user can confirm that the splitting is correct; after the user confirms the splitting, the client packages the split speech data and sends it to a server, which recognizes and evaluates it. Because different evaluation users speak at different speeds, the client splits the collected speech word by word, records it, and plays it back for the users…
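As an illustration of the client-side step described in the abstract — cutting the captured speech into chunks at uniform time intervals so the user can confirm the split before upload — here is a minimal sketch. The function names, the 16 kHz sample rate, and the 0.5 s interval are assumptions chosen for illustration, not details taken from the patent:

```python
def split_speech(samples, sample_rate, interval_s):
    """Split a mono sample buffer into chunks of interval_s seconds each."""
    chunk_len = int(sample_rate * interval_s)  # samples per chunk
    return [samples[i:i + chunk_len] for i in range(0, len(samples), chunk_len)]

def package_chunks(chunks):
    """Package user-confirmed chunks for upload, tagging each with its index
    so the server can recognize and evaluate them in order."""
    return [{"index": i, "samples": chunk} for i, chunk in enumerate(chunks)]

# Example: 3 seconds of captured (silent) audio at 16 kHz, split every 0.5 s.
sample_rate = 16000
audio = [0.0] * (sample_rate * 3)
chunks = split_speech(audio, sample_rate, interval_s=0.5)
payload = package_chunks(chunks)
print(len(chunks))  # 6 chunks, one per 0.5 s interval
```

In a real client, the record-playback-confirm step would sit between `split_speech` and `package_chunks`: only after the user approves the split would the payload be sent to the evaluation server.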

Bibliographic details

Main authors: DANG WEIRAN, LI MIAOLEI, CUI YUJIE, JIANG ZHIPING, YU JIANXIN, ZHAO YANG
Format: Patent
Language: Chinese; English
Online access: order full text
description The present invention belongs to the technical field of speech recognition and discloses a speech evaluation method and system. The invention aims to improve evaluation accuracy. The method comprises the following steps: a client collects a user's speech data, splits the collected speech word by word at uniform time intervals, records the split speech, and plays it back for the user so that the user can confirm the correctness of the splitting; after the user confirms that the splitting is correct, the split speech data are packaged and sent to a server, which recognizes and evaluates them. Because different evaluation users speak at different speeds, the client splits the collected speech word by word, records it, and plays it back for the users…
fullrecord esp@cenet record CN107068145A, published 2017-08-18; full text: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20170818&DB=EPODOC&CC=CN&NR=107068145A
recordid cdi_epo_espacenet_CN107068145A
source esp@cenet
subjects ACOUSTICS; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION