MoCap-less Quantitative Evaluation of Ego-Pose Estimation Without Ground Truth Measurements

The emergence of data-driven approaches for control and planning in robotics has highlighted the need for developing experimental robotic platforms for data collection. However, their implementation is often complex and expensive, in particular for flying and terrestrial robots where the precise estimation of the position requires motion capture devices (MoCap) or Lidar. In order to simplify the use of a robotic platform dedicated to research on a wide range of indoor and outdoor environments, we present a data validation tool for ego-pose estimation that does not require any equipment other than the on-board camera. The method and tool allow a rapid, visual and quantitative evaluation of the quality of ego-pose sensors and are sensitive to different sources of flaws in the acquisition chain, ranging from desynchronization of the sensor flows to misestimation of the geometric parameters of the robotic platform. Using computer vision, the information from the sensors is used to calculate the motion of a semantic scene point through its projection to the 2D image space of the on-board camera. The deviations of these keypoints from references created with a semi-automatic tool allow rapid and simple quality assessment of the data collected on the platform. To demonstrate the performance of our method, we evaluate it on two challenging standard UAV datasets as well as one dataset taken from a terrestrial robot.
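
The core check described in the abstract can be stated concretely: project a known 3D scene point into the on-board camera's 2D image space using the estimated ego-pose and camera intrinsics, then measure the pixel deviation from a reference keypoint annotated with the semi-automatic tool. The sketch below illustrates that reprojection check in Python/NumPy; the pinhole model, the world-to-camera pose convention, and all function names and numbers are illustrative assumptions, not the authors' released code.

import numpy as np

def project_point(K, R_wc, t_wc, p_world):
    # Pinhole projection (assumed model): map a 3D world point into
    # pixel coordinates using the ego-pose estimate (R_wc, t_wc) and
    # the 3x3 intrinsics matrix K.
    p_cam = R_wc @ p_world + t_wc      # world frame -> camera frame
    u, v, w = K @ p_cam                # homogeneous image coordinates
    return np.array([u / w, v / w])    # perspective division

def reprojection_errors(K, poses, p_world, ref_keypoints):
    # Per-frame Euclidean pixel deviation between the projected scene
    # point and the reference keypoints (the paper's quality signal).
    return np.array([
        np.linalg.norm(project_point(K, R, t, p_world) - kp)
        for (R, t), kp in zip(poses, ref_keypoints)
    ])

# Hypothetical single-frame example: a point 5 m straight ahead of the
# camera should land exactly on the principal point, giving zero error.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
poses = [(np.eye(3), np.array([0.0, 0.0, 5.0]))]
refs  = [np.array([320.0, 240.0])]
print(reprojection_errors(K, poses, np.zeros(3), refs))  # [0.]

On real logs one would expect systematic ego-pose drift to appear as a steadily growing error curve, while desynchronized sensor streams or wrong geometric parameters would produce characteristic offsets and spikes, the kinds of acquisition-chain flaws the tool is meant to surface.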

Bibliographic Details
Published in: arXiv.org 2022-02
Main Authors: Possamaï, Quentin; Janny, Steeven; Bono, Guillaume; Nadri, Madiha; Bako, Laurent; Wolf, Christian
Format: Article
Language: English
Subjects: Cameras; Computer vision; Data collection; Datasets; Indoor environments; Motion capture; Pose estimation; Quality assessment; Quantitative analysis; Robotics; Robots; Sensors
Online Access: Full text
EISSN: 2331-8422
Source: Free E-Journals