ROBOT AND METHOD FOR CONTROLLING THEREOF

A problem to be solved by the present invention is to provide a robot capable of detecting a user who has an interaction intention by itself. The robot according to an embodiment of the present invention comprises: a user sensing unit including at least one sensor sensing the user; a face detecting unit which acquires an image including the face of the user sensed by the user sensing unit; a control unit which detects the interaction intention of the user from the acquired image; and at least one of a speaker and a display for outputting at least one of a voice and a screen for inducing interaction of the user if the interaction intention is detected.
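
The abstract describes a sense-and-respond pipeline: a user sensing unit detects a nearby user, a face detecting unit acquires an image of that user's face, a control unit infers interaction intention from the image, and a speaker and/or display then prompts the user to interact. The sketch below only illustrates that flow; every class and method name in it (UserSensingUnit, FaceDetectingUnit, ControlUnit, control_step) is a hypothetical assumption for illustration and does not appear in the patent.

```python
# Minimal illustrative sketch of the flow described in the abstract above.
# All class, method, and message names are hypothetical stand-ins; the patent
# does not disclose source code or concrete algorithms.

from dataclasses import dataclass
from typing import Optional


@dataclass
class FaceImage:
    """Placeholder for an image that includes the sensed user's face."""
    pixels: bytes = b""


class UserSensingUnit:
    """Stands in for the claimed 'user sensing unit' (at least one sensor)."""

    def user_detected(self) -> bool:
        # A real robot would poll a proximity, lidar, or vision sensor here.
        return False


class FaceDetectingUnit:
    """Stands in for the claimed 'face detecting unit'."""

    def acquire_face_image(self) -> Optional[FaceImage]:
        # A real robot would capture a camera frame and crop the detected face.
        return None


class ControlUnit:
    """Stands in for the claimed 'control unit' inferring interaction intention."""

    def has_interaction_intention(self, image: FaceImage) -> bool:
        # The abstract does not specify how intention is inferred; gaze
        # direction or head pose would be plausible cues.
        return False


def control_step(sensing: UserSensingUnit,
                 face: FaceDetectingUnit,
                 control: ControlUnit) -> None:
    """One pass of the sense -> detect face -> detect intention -> respond loop."""
    if not sensing.user_detected():
        return
    image = face.acquire_face_image()
    if image is None:
        return
    if control.has_interaction_intention(image):
        # The claim recites "at least one of a speaker and a display"; both
        # outputs are represented here by simple print statements.
        print("speaker: playing a greeting to invite interaction")
        print("display: showing a greeting screen to invite interaction")


if __name__ == "__main__":
    control_step(UserSensingUnit(), FaceDetectingUnit(), ControlUnit())
```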


Bibliographic Details
Inventors: YONGJIN PARK, JINHO SOHN, TAEGIL CHO, JUNGKWAN SON, MINOOK KIM, TACKSUNG CHOI, SEWAN GU
Format: Patent
Language: English; Korean
Subjects: CHAMBERS PROVIDED WITH MANIPULATION DEVICES; HAND TOOLS; MANIPULATORS; PERFORMING OPERATIONS; PORTABLE POWER-DRIVEN TOOLS; TRANSPORTING
Publication number: KR20200046186A
Publication date: 2020-05-07
Source: esp@cenet
Full text: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20200507&DB=EPODOC&CC=KR&NR=20200046186A