CONTEXT AWARE INTERACTIVE ROBOT
A plurality of images captured using a camera included in a robotic system are analyzed. A spatial map is generated using a sensor included in the robotic system. A semantic location map is generated using at least the analyzed plurality of captured images and the generated spatial map. A natural language input referencing a desired product item is received from a user. A speech recognition result is recognized from the natural language input and sent to a reasoning engine. In response to sending the recognized speech recognition result, one or more commands for the robotic system are received from the reasoning engine. The received one or more commands are performed and feedback to the user based on at least one of the one or more commands is provided.
Saved in:
Main Authors: | Shinn, Hong Shik; Cui, Run; Yu, Hye Jun; Chung, Won Taek |
---|---|
Format: | Patent |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
creator | Shinn, Hong Shik; Cui, Run; Yu, Hye Jun; Chung, Won Taek |
description | A plurality of images captured using a camera included in a robotic system are analyzed. A spatial map is generated using a sensor included in the robotic system. A semantic location map is generated using at least the analyzed plurality of captured images and the generated spatial map. A natural language input referencing a desired product item is received from a user. A speech recognition result is recognized from the natural language input and sent to a reasoning engine. In response to sending the recognized speech recognition result, one or more commands for the robotic system are received from the reasoning engine. The received one or more commands are performed and feedback to the user based on at least one of the one or more commands is provided. |
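The description above outlines a pipeline: images and sensor data are fused into a semantic location map, a user's spoken request is recognized and sent to a reasoning engine, and the engine's commands are executed with feedback to the user. The following is a minimal sketch of that loop; the patent record does not specify an API, so every class, method, and data structure here is a hypothetical illustration.

```python
# Hypothetical sketch of the interaction loop described in the abstract.
# All names (ReasoningEngine, Robot, commands_for, ...) are illustrative
# assumptions, not the patent's actual implementation.

class ReasoningEngine:
    """Maps a recognized utterance plus the semantic map to robot commands."""

    def commands_for(self, utterance, semantic_map):
        # Look up the requested item in the semantic location map.
        for item, location in semantic_map.items():
            if item in utterance:
                return [("navigate", location),
                        ("announce", f"{item} is in {location}")]
        return [("announce", "item not found")]


class Robot:
    def __init__(self, reasoning_engine):
        self.engine = reasoning_engine
        self.semantic_map = {}
        self.feedback = []

    def build_semantic_map(self, image_labels, spatial_map):
        """Fuse image-derived item labels with the sensor-built spatial map.

        image_labels: {item name: pose where the camera saw it}
        spatial_map:  {pose: region name from the spatial-mapping sensor}
        """
        self.semantic_map = {
            item: spatial_map[pose]
            for item, pose in image_labels.items()
            if pose in spatial_map
        }

    def handle_request(self, speech_result):
        """Send the recognized speech to the engine and run its commands."""
        for command, arg in self.engine.commands_for(speech_result,
                                                    self.semantic_map):
            if command == "announce":   # feedback to the user
                self.feedback.append(arg)
            # "navigate" and similar commands would drive actuators
            # on real hardware; they are no-ops in this sketch.


robot = Robot(ReasoningEngine())
robot.build_semantic_map(
    image_labels={"milk": (3, 4), "bread": (7, 1)},
    spatial_map={(3, 4): "aisle 2", (7, 1): "aisle 5"},
)
robot.handle_request("where can I find milk")
print(robot.feedback[0])  # → milk is in aisle 2
```

The sketch keeps the reasoning engine separate from the robot, mirroring the abstract's request/response structure in which the recognized speech is sent out and commands are received back.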
format | Patent |
fulltext | fulltext_linktorsrc |
language | eng |
recordid | cdi_epo_espacenet_US2019206400A1 |
source | esp@cenet |
subjects | ACOUSTICS; CALCULATING; CHAMBERS PROVIDED WITH MANIPULATION DEVICES; COMPUTING; CONTROLLING; COUNTING; HAND TOOLS; HANDLING RECORD CARRIERS; MANIPULATORS; MUSICAL INSTRUMENTS; PERFORMING OPERATIONS; PHYSICS; PORTABLE POWER-DRIVEN TOOLS; PRESENTATION OF DATA; RECOGNITION OF DATA; RECORD CARRIERS; REGULATING; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION; SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES; TRANSPORTING |
title | CONTEXT AWARE INTERACTIVE ROBOT |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T02%3A12%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Shinn,%20Hong%20Shik&rft.date=2019-07-04&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2019206400A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |