Extracting Contact and Motion from Manipulation Videos
When we physically interact with our environment using our hands, we touch objects and force them to move: contact and motion are defining properties of manipulation. In this paper, we present an active, bottom-up method for the detection of actor-object contacts and the extraction of moved objects...
Saved in:
Main authors: | Zampogiannis, Konstantinos; Ganguly, Kanishka; Fermuller, Cornelia; Aloimonos, Yiannis |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics |
Online access: | Order full text |
creator | Zampogiannis, Konstantinos; Ganguly, Kanishka; Fermuller, Cornelia; Aloimonos, Yiannis |
description | When we physically interact with our environment using our hands, we touch
objects and force them to move: contact and motion are defining properties of
manipulation. In this paper, we present an active, bottom-up method for the
detection of actor-object contacts and the extraction of moved objects and
their motions in RGBD videos of manipulation actions. At the core of our
approach lies non-rigid registration: we continuously warp a point cloud model
of the observed scene to the current video frame, generating a set of dense 3D
point trajectories. Under loose assumptions, we employ simple point cloud
segmentation techniques to extract the actor and subsequently detect
actor-environment contacts based on the estimated trajectories. For each such
interaction, using the detected contact as an attention mechanism, we obtain an
initial motion segment for the manipulated object by clustering trajectories in
the contact area vicinity and then we jointly refine the object segment and
estimate its 6DOF pose in all observed frames. Because of its generality and
the fundamental, yet highly informative, nature of its outputs, our approach is
applicable to a wide range of perception and planning tasks. We qualitatively
evaluate our method on a number of input sequences and present a comprehensive
robot imitation learning example, in which we demonstrate the crucial role of
our outputs in developing action representations/plans from observation. |
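The description outlines a concrete pipeline: dense 3D point trajectories from non-rigid registration, actor segmentation, proximity-based contact detection, contact-seeded clustering of trajectories into an object segment, and per-frame 6DOF pose estimation. As a rough illustration of the trajectory-based steps, here is a minimal NumPy sketch. The array layout, thresholds, and function names are assumptions for exposition, not the authors' implementation; the registration front end and the joint segment/pose refinement are not reproduced, and the pose step uses the standard Kabsch/SVD least-squares rigid transform.

```python
import numpy as np

def detect_contacts(traj, actor_mask, t, eps=0.01):
    """Indices (into the full point set) of environment points that come
    within `eps` meters of any actor point at frame `t`."""
    pts = traj[:, t]                        # (N, 3) positions at frame t
    actor = pts[actor_mask]                 # (A, 3) actor points
    env_idx = np.flatnonzero(~actor_mask)   # indices of environment points
    d = np.linalg.norm(pts[env_idx, None] - actor[None], axis=-1)  # (E, A)
    return env_idx[d.min(axis=1) < eps]

def seed_object_segment(traj, actor_mask, contact_idx, t, radius=0.05):
    """Initial motion segment: environment points within `radius` of the
    detected contact area at frame `t` (a crude stand-in for trajectory
    clustering in the contact area vicinity)."""
    pts = traj[:, t]
    env_idx = np.flatnonzero(~actor_mask)
    d = np.linalg.norm(pts[env_idx, None] - pts[contact_idx][None], axis=-1)
    return env_idx[d.min(axis=1) < radius]

def rigid_pose(src, dst):
    """Least-squares rigid transform (Kabsch/SVD): returns R, t such that
    dst ≈ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # repair an improper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

# Hypothetical usage, assuming registration already produced:
#   traj       : (N, T, 3) dense 3D point trajectories
#   actor_mask : (N,) boolean mask marking actor (hand/arm) trajectories
# contacts = detect_contacts(traj, actor_mask, t=42)
# segment  = seed_object_segment(traj, actor_mask, contacts, t=42)
# R, tvec  = rigid_pose(traj[segment, 42], traj[segment, 60])  # 6DOF, frame 42 to 60
```

In this sketch the detected contact plays exactly the attention role described above: only trajectories near the contact area are considered as candidates for the moved object, which keeps the segmentation cheap and focused.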
doi_str_mv | 10.48550/arxiv.1807.04870 |
format | Article |
creationdate | 2018-07-12 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1807.04870 |
language | eng |
recordid | cdi_arxiv_primary_1807_04870 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics |
title | Extracting Contact and Motion from Manipulation Videos |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-16T23%3A30%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Extracting%20Contact%20and%20Motion%20from%20Manipulation%20Videos&rft.au=Zampogiannis,%20Konstantinos&rft.date=2018-07-12&rft_id=info:doi/10.48550/arxiv.1807.04870&rft_dat=%3Carxiv_GOX%3E1807_04870%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |