System and method for developing interactive speech applications

Dialogue modules are provided, each dialogue module including computer-readable instructions for accomplishing a predefined interactive dialogue task in an interactive speech application. In response to user input, a subset of the plurality of dialogue modules is selected to accomplish their respective interactive dialogue tasks in the interactive speech application; the selected modules are interconnected in an order defining the call flow of the application, and the application is generated.
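The abstract describes an architecture in which reusable dialogue modules, each handling one predefined dialogue task, are selected and interconnected into a call flow from which the speech application is generated. The short Python sketch below illustrates that idea only; it is not taken from the patent, and every name in it (DialogueModule, GreetingModule, CollectDigitsModule, generate_application) is a hypothetical stand-in.

    from abc import ABC, abstractmethod

    class DialogueModule(ABC):
        # One reusable module per predefined interactive dialogue task.
        @abstractmethod
        def run(self, caller_input: str) -> str:
            ...

    class GreetingModule(DialogueModule):
        def run(self, caller_input: str) -> str:
            return "Welcome. How can I help you today?"

    class CollectDigitsModule(DialogueModule):
        # Pulls a digit string (e.g. an account number) out of the caller's utterance.
        def run(self, caller_input: str) -> str:
            digits = "".join(ch for ch in caller_input if ch.isdigit())
            return "I heard " + digits + "."

    def generate_application(call_flow):
        # The selected subset of modules, in call-flow order, becomes the application.
        def application(caller_input: str):
            return [module.run(caller_input) for module in call_flow]
        return application

    # Select a subset of modules, fix their order (the call flow), and generate the app.
    app = generate_application([GreetingModule(), CollectDigitsModule()])
    print(app("my account number is 4 5 1 2"))

Running the sketch plays the two modules in call-flow order, which is the behaviour the abstract attributes to the generated application.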

Detailed description

Bibliographic details
Main authors: MARK A. HOLTHOUSE, STEPHEN D. SEABURY, MATTHEW T. MARX, MICHAEL S. PHILLIPS, BRETT D. PHANEUF, JOSE L. ELIZONDO-CECENAS, JERRY K. CARTER
Format: Patent
Language: eng
Subjects:
Online access: Order full text
creator MARK A. HOLTHOUSE
STEPHEN D. SEABURY
MATTHEW T. MARX
MICHAEL S. PHILLIPS
BRETT D. PHANEUF
JOSE L. ELIZONDO-CECENAS
JERRY K. CARTER
description Dialogue modules are provided, each dialogue module including computer-readable instructions for accomplishing a predefined interactive dialogue task in an interactive speech application. In response to user input, a subset of the plurality of dialogue modules is selected to accomplish their respective interactive dialogue tasks in the interactive speech application; the selected modules are interconnected in an order defining the call flow of the application, and the application is generated. A graphical user interface represents the stored plurality of dialogue modules as icons in a graphical display: icons for the subset of dialogue modules are selected in the graphical display in response to user input, the icons are graphically interconnected into a graphical representation of the call flow of the interactive speech application, and the interactive speech application is generated from that graphical representation. Using the graphical display, the method further includes associating configuration parameters with specific dialogue modules; each configuration parameter causes a change in the operation of its dialogue module when the interactive speech application executes. When an icon for a dialogue module that has an associated configuration parameter is selected, a window is displayed for setting the value of that parameter in response to user input.
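As a rough companion to the description field above, the following sketch assumes that each configuration parameter is a keyword value attached to a module and that setting it (as the settings window in the graphical display would) changes how the module behaves when the application executes. The class and parameter names (CollectDigitsModule, max_digits, confirm) are illustrative assumptions, not details from the patent.

    class CollectDigitsModule:
        # Hypothetical module whose run-time behaviour is altered by configuration parameters.
        default_config = {"max_digits": 10, "confirm": True}

        def __init__(self, **config):
            # Values passed here play the role of the settings window in the graphical display.
            self.config = {**self.default_config, **config}

        def run(self, caller_input: str) -> str:
            digits = "".join(ch for ch in caller_input if ch.isdigit())
            digits = digits[: self.config["max_digits"]]
            if self.config["confirm"]:
                return "You said " + digits + ". Is that correct?"
            return "Got " + digits + "."

    # Changing a parameter value changes how the module behaves when the application executes.
    pin_module = CollectDigitsModule(max_digits=4, confirm=False)
    print(pin_module.run("my PIN is 9 8 7 6 5"))   # prints: Got 9876.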
format Patent
creationdate 1998-11-27
edition 6
linktorsrc https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=19981127&DB=EPODOC&CC=AU&NR=7374798A
fulltext fulltext_linktorsrc
language eng
recordid cdi_epo_espacenet_AU7374798A
source esp@cenet
subjects ACOUSTICS
ELECTRIC COMMUNICATION TECHNIQUE
ELECTRICITY
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
TELEPHONIC COMMUNICATION
title System and method for developing interactive speech applications
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T21%3A13%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=MARK%20A.%20HOLTHOUSE&rft.date=1998-11-27&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EAU7374798A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true