Dialogue supporting apparatus

The present invention provides a dialogue supporting apparatus that can easily select a desired sample sentence from among candidate sample sentences corresponding to inputted speech. The dialogue supporting apparatus includes: a speech recognition unit which performs continuous speech recognition of the inputted speech; a database unit having a sample sentence database which holds the correspondence of sample sentences of a source language and a target language; a sample sentence selection unit which selects one or more sample sentences from within the sample sentence database, according to a speech recognition result or operation of a GUI unit; a sample sentence comparison unit which (i) compares the one or more sample sentences selected by the sample sentence selection unit and the speech recognition result, (ii) calculates word scores from an appearance location of the words, and (iii) derives a display parameter for each word of each sample sentence, based on the word scores; and the GUI unit which performs the display of a sample sentence based on the display parameter.
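The abstract describes a pipeline in which each word of a candidate sample sentence is scored against the speech recognition result, with the score derived from the word's appearance location, and then mapped to a display parameter. The record does not disclose the actual scoring formula, so the following is only a minimal illustrative sketch: all function names are hypothetical, and the positional-decay score and the three emphasis levels are assumptions, not the patented computation.

```python
# Toy sketch of the sample-sentence comparison step outlined in the
# abstract.  The scoring formula below (positional decay) and the
# display-parameter mapping are hypothetical stand-ins; the record does
# not specify the patent's actual method.

def word_scores(recognized: list[str], sample: list[str]) -> list[float]:
    """Score each word of a sample sentence against the recognition result.

    A word scores 1.0 when it appears at the same position in the
    recognized speech, decaying with positional distance, and 0.0 when
    it does not appear at all.
    """
    scores = []
    for i, word in enumerate(sample):
        if word not in recognized:
            scores.append(0.0)
            continue
        j = recognized.index(word)            # appearance location
        scores.append(1.0 / (1 + abs(i - j)))  # decay with distance
    return scores

def display_params(scores: list[float]) -> list[str]:
    """Map each word score to a display parameter (here: emphasis level)."""
    return ["bold" if s >= 0.5 else "normal" if s > 0 else "dim"
            for s in scores]

recognized = "i would like a room".split()
sample = "i would like to book a room".split()
scores = word_scores(recognized, sample)
params = display_params(scores)
```

In this sketch, words the recognizer heard near their expected position are emphasized, unmatched words are dimmed, and the GUI would render the sample sentence accordingly so the user can quickly judge which candidate matches the utterance.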

Detailed Description

Saved in:
Bibliographic Details
Main Authors: NAMBU TARO, MIZUTANI KENJI, OKIMOTO YOSHIYUKI
Format: Patent
Language: English
Published: 2005-12-22
Online Access: Order full text
recordid cdi_epo_espacenet_US2005283365A1
source esp@cenet
subjects ACOUSTICS; CALCULATING; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION