Natural language processing models for conversational computing

In non-limiting examples of the present disclosure, systems, methods and devices for training conversational language models are presented. An embedding library may be generated and maintained. Exemplary target inputs and associated intent types may be received. The target inputs may be encoded into contextual embeddings. The embeddings may be added to the embedding library. When a conversational entity receives a new natural language input, that new input may be encoded into a contextual embedding. The new embedding may be added to the embedding library. A similarity score model may be applied to the new embedding and one or more embeddings for the exemplary target inputs. Similarity scores may be calculated based on the application of the similarity score model. A response may be generated by the conversational entity for an intent type for which a similarity score exceeds a threshold value.
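The abstract outlines an intent-matching loop: encode exemplary target inputs into contextual embeddings, keep them in an embedding library, encode each new input the same way, score it against the stored examples, and answer for the intent whose score clears a threshold. The sketch below illustrates that loop only and is not the patented implementation: the hash-based encode stub stands in for a real contextual encoder, and cosine similarity, the 0.5 threshold, and the EmbeddingLibrary/respond names are illustrative assumptions.

```python
# Minimal sketch of the workflow described in the abstract, under the
# assumptions stated above; not the patented implementation.
import numpy as np

def encode(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a contextual encoder (e.g. a transformer sentence encoder).
    Here: a deterministic hash-based bag-of-words vector, for illustration only."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class EmbeddingLibrary:
    """Maintains contextual embeddings of exemplary target inputs keyed by intent type."""
    def __init__(self):
        self.entries = []  # list of (intent_type, embedding)

    def add_example(self, intent_type: str, target_input: str) -> None:
        # Encode an exemplary target input and add it to the library.
        self.entries.append((intent_type, encode(target_input)))

    def score(self, new_input: str):
        """Apply a similarity score model (cosine similarity here, as an assumption)
        between the new input's embedding and every stored exemplary embedding."""
        new_emb = encode(new_input)
        return [(intent, float(np.dot(new_emb, emb))) for intent, emb in self.entries]

def respond(library: EmbeddingLibrary, new_input: str, threshold: float = 0.5) -> str:
    """Generate a response for the intent type whose similarity score exceeds the threshold."""
    scored = library.score(new_input)
    intent, best = max(scored, key=lambda pair: pair[1], default=(None, 0.0))
    if best > threshold:
        return f"[response generated for intent '{intent}']"
    return "[fallback: no intent exceeded the similarity threshold]"

# Usage: seed the library with exemplary target inputs, then handle a new input.
lib = EmbeddingLibrary()
lib.add_example("book_flight", "I want to book a flight to Paris")
lib.add_example("check_weather", "What is the weather like tomorrow")
print(respond(lib, "Please book me a flight to Paris"))
```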

Bibliographic details
Main authors: Suwandy, Tien Widya; Taniguchi, David Shigeru; Yang, Hung-chih; Marcjan, Cezary Antoni
Format: Patent
Language: English
Subjects: ACOUSTICS; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION
Online access: Order full text