CONFERENCE SUPPORT SYSTEM, CONFERENCE SUPPORT METHOD, AND COMPUTER PROGRAM PRODUCT


Bibliographic Details
Main Authors: ASHIKAWA Taira, FUME Kosei, ASHIKAWA Masayuki, FUJIMURA Hiroshi
Format: Patent
Language: English
Online Access: order full text
Description: According to an embodiment, a conference support system includes a recognizer, a classifier, a first caption controller, a second caption controller, and a display controller. The recognizer is configured to recognize text data corresponding to speech from a speech section and to distinguish between the speech section and a non-speech section in speech data. The classifier is configured to classify the text data into first utterance data representing a principal utterance and second utterance data representing another utterance. The first caption controller is configured to generate first caption data for displaying the first utterance data without waiting for identification of the first utterance data to finish. The second caption controller is configured to generate second caption data for displaying the second utterance data after identification of the second utterance data finishes. The display controller is configured to control a display of the first caption data and the second caption data.
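The described architecture splits captioning into two paths: the principal speaker's utterances are displayed immediately, even from partial recognition results, while other utterances are held back until recognition is final. A minimal sketch of that routing logic in Python, assuming hypothetical names (`caption_pipeline`, `CaptionDisplay`, pre-classified `(speaker, text, is_final)` events) that do not come from the patent itself:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CaptionDisplay:
    """Collects caption updates in the order they would be rendered."""
    lines: List[str] = field(default_factory=list)

    def show(self, speaker: str, text: str) -> None:
        self.lines.append(f"{speaker}: {text}")

def caption_pipeline(events: List[Tuple[str, str, bool]],
                     principal_speaker: str,
                     display: CaptionDisplay) -> None:
    """Route recognition events to the two caption paths.

    Principal utterances are displayed at once, including partial
    (non-final) results; other utterances are displayed only after
    recognition finishes.
    """
    for speaker, text, is_final in events:
        if speaker == principal_speaker:
            display.show(speaker, text)   # first caption: no waiting
        elif is_final:
            display.show(speaker, text)   # second caption: final results only

display = CaptionDisplay()
events = [
    ("presenter", "Good mor", False),     # partial result, shown immediately
    ("presenter", "Good morning", True),
    ("audience", "What ab", False),       # partial result, suppressed
    ("audience", "What about cost?", True),
]
caption_pipeline(events, "presenter", display)
```

After running, the display holds both presenter updates (partial and final) but only the finalized audience utterance, mirroring the first/second caption distinction in the abstract.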
Publication: US2018082688A1, published 2018-03-22. Full text: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20180322&DB=EPODOC&CC=US&NR=2018082688A1
Source: esp@cenet
Subjects: ACOUSTICS; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION