Visual Task Progress Estimation with Appearance Invariant Embeddings for Robot Control and Planning

One of the challenges of full autonomy is to have a robot capable of manipulating its current environment to achieve another environment configuration. This paper is a step towards this challenge, focusing on the visual understanding of the task. Our approach trains a deep neural network to represent images as measurable features that are useful to estimate the progress (or phase) of a task. The training uses numerous variations of images of identical tasks taken under the same phase index. The goal is to make the network sensitive to differences in task progress but insensitive to the appearance of the images. To this end, our method builds upon Time-Contrastive Networks (TCNs) to train a network using only discrete snapshots taken at different stages of a task. A robot can then solve long-horizon tasks by using the trained network to identify the progress of the current task and by iteratively calling a motion planner until the task is solved. We quantify the granularity achieved by the network in two simulated environments: in the first, detecting the number of objects in a scene; in the second, measuring the volume of particles in a cup. Our experiments leverage this granularity to make a mobile robot move a desired number of objects into a storage area and to control the amount of pouring in a cup.
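The appearance-invariant embedding described in the abstract can be illustrated with a short sketch. The following Python/PyTorch code is a minimal, hypothetical example (not the authors' implementation): it assumes phase-indexed snapshots of the same task under varying appearance and trains an image encoder with a TCN-style triplet loss, where the anchor and positive share a phase index but differ in appearance, and the negative comes from a different phase. The network architecture, embedding size, and margin are illustrative assumptions.

    # Minimal sketch of the idea in the abstract, not the authors' code: learn an
    # embedding that is sensitive to task phase but insensitive to image appearance,
    # using a TCN-style triplet loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PhaseEmbeddingNet(nn.Module):
        """Maps an RGB image to a unit-norm embedding of task progress."""
        def __init__(self, embedding_dim=32):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, embedding_dim),
            )

        def forward(self, x):
            return F.normalize(self.backbone(x), dim=-1)

    def triplet_phase_loss(net, anchor, positive, negative, margin=0.2):
        # anchor/positive: images of the same phase index but different appearance;
        # negative: a snapshot from a different phase of the task.
        a, p, n = net(anchor), net(positive), net(negative)
        d_pos = (a - p).pow(2).sum(dim=-1)  # pull same-phase images together
        d_neg = (a - n).pow(2).sum(dim=-1)  # push different-phase images apart
        return F.relu(d_pos - d_neg + margin).mean()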

Detailed Description

Bibliographic Details
Published in: arXiv.org 2020-11
Main authors: Maeda, Guilherme, Väätäinen, Joni, Yoshida, Hironori
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page
container_issue
container_start_page
container_title arXiv.org
container_volume
creator Maeda, Guilherme
Väätäinen, Joni
Yoshida, Hironori
description One of the challenges of full autonomy is to have a robot capable of manipulating its current environment to achieve another environment configuration. This paper is a step towards this challenge, focusing on the visual understanding of the task. Our approach trains a deep neural network to represent images as measurable features that are useful to estimate the progress (or phase) of a task. The training uses numerous variations of images of identical tasks taken under the same phase index. The goal is to make the network sensitive to differences in task progress but insensitive to the appearance of the images. To this end, our method builds upon Time-Contrastive Networks (TCNs) to train a network using only discrete snapshots taken at different stages of a task. A robot can then solve long-horizon tasks by using the trained network to identify the progress of the current task and by iteratively calling a motion planner until the task is solved. We quantify the granularity achieved by the network in two simulated environments: in the first, detecting the number of objects in a scene; in the second, measuring the volume of particles in a cup. Our experiments leverage this granularity to make a mobile robot move a desired number of objects into a storage area and to control the amount of pouring in a cup.
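The execution loop described in the abstract (estimate task progress, call a motion planner, repeat until the task is solved) can be sketched as follows. This is a hypothetical illustration under assumed interfaces: camera, planner, and robot are made-up objects, and the stopping tolerance and step limit are arbitrary; the paper's system may structure this differently.

    # Minimal sketch of the iterative planning loop: the trained network compares
    # the embedding of the current scene with the embedding of a goal image, and a
    # motion planner is called repeatedly until the estimated progress matches.
    import torch

    def run_until_done(net, camera, planner, robot, goal_image, tol=0.1, max_steps=50):
        with torch.no_grad():
            goal_emb = net(goal_image.unsqueeze(0))          # embedding of the desired configuration
        for _ in range(max_steps):
            image = camera.capture()                         # current view of the scene
            with torch.no_grad():
                current_emb = net(image.unsqueeze(0))
            gap = torch.norm(goal_emb - current_emb).item()  # distance in progress space
            if gap < tol:                                    # task phase close enough to the goal
                return True
            plan = planner.plan(image, gap)                  # plan the next manipulation step
            robot.execute(plan)                              # execute, then re-observe
        return False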
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2020-11
issn 2331-8422
language eng
recordid cdi_proquest_journals_2378463572
source Free E-Journals
subjects Artificial neural networks
Computer simulation
Luminosity
Machine learning
Object recognition
Robot control
Robot dynamics
Robots
Vision
Visual tasks
title Visual Task Progress Estimation with Appearance Invariant Embeddings for Robot Control and Planning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T16%3A08%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Visual%20Task%20Progress%20Estimation%20with%20Appearance%20Invariant%20Embeddings%20for%20Robot%20Control%20and%20Planning&rft.jtitle=arXiv.org&rft.au=Maeda,%20Guilherme&rft.date=2020-11-22&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2378463572%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2378463572&rft_id=info:pmid/&rfr_iscdi=true