Self-supervised Wide Baseline Visual Servoing via 3D Equivariance

One of the challenging input settings for visual servoing is when the initial and goal camera views are far apart. Such settings are difficult because the wide baseline can cause drastic changes in object appearance and cause occlusions. This paper presents a novel self-supervised visual servoing method for wide baseline images which does not require 3D ground truth supervision.

Bibliographic Details
Main Authors: Huh, Jinwook; Hong, Jungseok; Garg, Suveer; Park, Hyun Soo; Isler, Volkan
Format: Article
Language: English
description One of the challenging input settings for visual servoing is when the initial and goal camera views are far apart. Such settings are difficult because the wide baseline can cause drastic changes in object appearance and occlusions. This paper presents a novel self-supervised visual servoing method for wide baseline images which does not require 3D ground truth supervision. Existing approaches that regress absolute camera pose with respect to an object require 3D ground truth data of the object in the form of 3D bounding boxes or meshes. We learn a coherent visual representation by leveraging a geometric property called 3D equivariance: the representation is transformed in a predictable way as a function of the 3D transformation. To ensure that the feature space is faithful to the underlying geodesic space, a geodesic-preserving constraint is applied in conjunction with the equivariance. We design a Siamese network that can effectively enforce these two geometric properties without requiring 3D supervision. With the learned model, the relative transformation can be inferred simply by following the gradient in the learned space and used as feedback for closed-loop visual servoing. Our method is evaluated on objects from the YCB dataset on a visual servoing (object alignment) task, where it meaningfully outperforms state-of-the-art approaches that use 3D supervision: it yields more than a 35% reduction in average distance error and a success rate above 90% with a 3 cm error tolerance.
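The closed-loop idea in the abstract (follow the gradient of the feature-space distance to the goal) can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the learned Siamese embedding is replaced by a toy fixed linear map, and all names (`phi`, `servo_step`, `W`) are illustrative assumptions.

```python
import numpy as np

def phi(pose, W):
    """Stand-in for a learned, geodesic-preserving embedding.
    Here a fixed linear map, so feature distance mirrors pose distance."""
    return W @ pose

def servo_step(pose, goal_feat, W, lr=0.5):
    """One closed-loop update: step along the negative gradient of
    0.5 * ||phi(pose) - goal_feat||^2 with respect to the pose."""
    grad = W.T @ (phi(pose, W) - goal_feat)
    return pose - lr * grad

rng = np.random.default_rng(0)
W = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # toy "learned" embedding
goal_pose = np.array([0.3, -0.2, 0.5])
goal_feat = phi(goal_pose, W)       # feature of the goal view

pose = np.array([1.0, 1.0, -1.0])   # wide-baseline starting pose
for _ in range(200):
    pose = servo_step(pose, goal_feat, W)

print(np.linalg.norm(pose - goal_pose))  # shrinks toward zero
```

Because the toy embedding is well-conditioned, the gradient iteration contracts the pose error at every step; in the paper this role is played by the learned equivariant feature space rather than a known linear map.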
doi_str_mv 10.48550/arxiv.2209.05432
format Article
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Robotics