SOUND SOURCE SPECIFICATION SYSTEM AND SOUND SOURCE SPECIFICATION METHOD
PROBLEM TO BE SOLVED: To specify sound source data capable of giving an appropriate expression to a synthesized voice when the synthesized voice that reads out sentence data is output by voice synthesis.

SOLUTION: In the sound source specification processing, the sentence indicated by the designated sentence data WT is analyzed, and text expression distributions tpd (i, k), which indicate the distribution degrees of the various types of expressions appearing in the sentence, are derived for each sentence (S350). Each piece of sound source data SD is obtained and analyzed, and sound source expression distributions vpd (j, k), which indicate the distribution degrees of the various types of expressions appearing in the voice sound indicated by the voice sound parameter PV included in the sound source data SD, are derived for each piece of sound source data SD (S360). The text expression distributions tpd (i, k) are then collated with each of the sound source expression distributions vpd (j, k) to derive a correlation value cor (i, j) between them (S370), and the sound source data SD with the highest correlation value cor (i, j) is presented (S380).
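The abstract describes a three-stage matching procedure: derive a text expression distribution tpd (i, k) from the sentence data, derive a sound source expression distribution vpd (j, k) for each piece of sound source data SD, and present the SD whose distribution correlates best with the text. The sketch below illustrates that idea only; the expression categories, the keyword-based text analysis, and all function and variable names are illustrative assumptions, not the method actually claimed in the patent.

```python
# Illustrative sketch of the matching idea from the abstract: build a text
# expression distribution, compare it against pre-derived sound source
# expression distributions, and pick the best-correlated sound source.
# Categories, keywords, and function names are assumptions for demonstration.

from math import sqrt

# Hypothetical expression categories (index k in tpd/vpd).
CATEGORIES = ["joy", "sadness", "anger", "calm"]

def text_expression_distribution(sentence: str) -> list[float]:
    """Approximate tpd (i, k): degree of each expression category in the text.
    Here estimated by counting illustrative keywords; the patent leaves the
    actual sentence analysis unspecified."""
    keywords = {
        "joy": ["happy", "glad", "wonderful"],
        "sadness": ["sad", "lonely", "tears"],
        "anger": ["angry", "furious", "outraged"],
        "calm": ["quiet", "gentle", "slowly"],
    }
    counts = [sum(sentence.lower().count(w) for w in keywords[c]) for c in CATEGORIES]
    total = sum(counts) or 1
    return [c / total for c in counts]

def correlation(a: list[float], b: list[float]) -> float:
    """Pearson correlation between two expression distributions,
    standing in for the correlation value cor (i, j)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

def select_sound_source(sentence: str, sources: dict[str, list[float]]) -> str:
    """Collate tpd with each vpd (j, k) and return the sound source whose
    distribution correlates best with the text (steps S370/S380)."""
    tpd = text_expression_distribution(sentence)
    return max(sources, key=lambda name: correlation(tpd, sources[name]))

if __name__ == "__main__":
    # Hypothetical, pre-analyzed sound source distributions vpd (j, k).
    sources = {
        "narrator_bright": [0.6, 0.1, 0.1, 0.2],
        "narrator_soft":   [0.1, 0.4, 0.0, 0.5],
    }
    print(select_sound_source("The gentle rain fell slowly on the quiet town.", sources))
```

Pearson correlation is used here only as a stand-in; any similarity measure over the two distributions would fit the same collate-and-rank structure.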
Saved in:
| Author: | ASEMI NORIAKI |
|---|---|
| Format: | Patent |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
| Field | Value |
|---|---|
creator | ASEMI NORIAKI |
description | PROBLEM TO BE SOLVED: To specify sound source data capable of giving an appropriate expression to a synthesized voice when the synthesized voice that reads out sentence data is output by voice synthesis. SOLUTION: In the sound source specification processing, the sentence indicated by the designated sentence data WT is analyzed, and text expression distributions tpd (i, k), which indicate the distribution degrees of the various types of expressions appearing in the sentence, are derived for each sentence (S350). Each piece of sound source data SD is obtained and analyzed, and sound source expression distributions vpd (j, k), which indicate the distribution degrees of the various types of expressions appearing in the voice sound indicated by the voice sound parameter PV included in the sound source data SD, are derived for each piece of sound source data SD (S360). The text expression distributions tpd (i, k) are then collated with each of the sound source expression distributions vpd (j, k) to derive a correlation value cor (i, j) between them (S370), and the sound source data SD with the highest correlation value cor (i, j) is presented (S380). |
format | Patent |
fulltext | fulltext_linktorsrc |
language | eng |
recordid | cdi_epo_espacenet_JP2014167556A |
source | esp@cenet |
subjects | ACOUSTICS; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION |
title | SOUND SOURCE SPECIFICATION SYSTEM AND SOUND SOURCE SPECIFICATION METHOD |