Robot Introspection via Wrench-based Action Grammars

Robotic failure is all too common in unstructured robot tasks. Despite well-designed controllers, robots often fail due to unexpected events. How do robots measure unexpected events? Many do not. Most robots are driven by the sense-plan-act paradigm; more recently, however, robots have begun working with a sense-plan-act-verify paradigm. In this work we present a principled methodology to bootstrap robot introspection for contact tasks. In effect, we are trying to answer the question: what did the robot do? To this end, we hypothesize that all noisy wrench data inherently contains patterns that can be effectively represented by a vocabulary. The vocabulary is generated by meaningfully segmenting the data and then encoding it. When the wrench information represents a sequence of sub-tasks, we can think of the vocabulary as forming sets of words or sentences, such that each sub-task is uniquely represented by a word set. Such sets can be classified using statistical or machine learning techniques. We use SVMs and Mondrian Forests to classify contact tasks both in simulation and on real robots, for one- and dual-arm scenarios, showing the general robustness of the approach. The contribution of our work is the presentation of a simple but generalizable semantic scheme that enables a robot to understand its high-level state. This verification mechanism can also provide feedback for high-level planners or reasoning systems that use semantic descriptors. The code, data, and other supporting documentation can be found at: http://www.juanrojas.net/2017icra_wrench_introspection.
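
As a rough illustration of the pipeline the abstract describes (segment noisy wrench data, encode each segment into a word of a vocabulary, classify the resulting word sets), the Python sketch below builds toy force/torque signals, encodes fixed windows by their sign pattern, and classifies the resulting "sentences" with a scikit-learn SVM. The segmentation, encoding scheme, task labels, and function names are illustrative assumptions and do not reproduce the paper's actual method or released code; the paper also evaluates Mondrian Forests, which this sketch omits.

```python
# Hypothetical sketch: wrench segments -> symbolic words -> word-set classification.
# Encoding scheme and task labels are invented for illustration only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def encode_wrench(segment):
    """Map one wrench segment (N x 6 array) to a coarse symbolic word.

    The word is simply the sign pattern of the mean force/torque,
    e.g. '++-0-+'; the paper's actual encoding is more elaborate.
    """
    mean = segment.mean(axis=0)
    return ''.join('+' if v > 0.5 else '-' if v < -0.5 else '0' for v in mean)

def sentence_from_trial(wrench, window=50):
    """Segment a full wrench recording into fixed windows and join their words."""
    words = [encode_wrench(wrench[i:i + window])
             for i in range(0, len(wrench) - window + 1, window)]
    return ' '.join(words)

# Toy training data: two simulated sub-tasks (here called 'approach' and 'insert').
rng = np.random.default_rng(0)
trials = [rng.normal(loc=m, scale=0.3, size=(500, 6))
          for m in (1.0, -1.0) for _ in range(10)]
labels = ['approach'] * 10 + ['insert'] * 10

sentences = [sentence_from_trial(w) for w in trials]
clf = make_pipeline(CountVectorizer(token_pattern=r'\S+'), SVC(kernel='linear'))
clf.fit(sentences, labels)
print(clf.predict([sentence_from_trial(rng.normal(1.0, 0.3, size=(500, 6)))]))
```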

Bibliographic Details
Published in: arXiv.org, 2016-09
Main authors: Rojas, Juan; Huang, Zhengjie; Luo, Shuangqi; Du, Yunlong; Kuang, Wenwei; Zhu, Dingqiao; Harada, Kensuke
Format: Article
Language: English
Subjects: Grammars; Level (quantity); Machine learning; Robots; Semantics; Sentences
Online access: Full text
Publisher: Cornell University Library, arXiv.org (Ithaca)
EISSN: 2331-8422
Source: Free E-Journals