Cooperative vision-aided inertial navigation using overlapping views
In this paper, we study the problem of Cooperative Localization (CL) for two robots, each equipped with an Inertial Measurement Unit (IMU) and a camera. We present an algorithm that enables the robots to exploit common features, observed over a sliding-window time horizon, in order to improve the localization accuracy of both vehicles. In contrast to existing CL methods, which require robot-to-robot distance and/or bearing measurements to resolve the robots' relative position and orientation (pose), our approach recovers the relative pose through indirect information from the commonly observed features. Moreover, we analyze the system observability properties to determine how many degrees of freedom (d.o.f.) of the relative transformation can be computed under different measurement scenarios. Lastly, we present simulation results to evaluate the performance of the proposed method.
Saved in:
Main authors: | Melnyk, I. V. ; Hesch, J. A. ; Roumeliotis, S. I. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Cameras ; Covariance matrix ; Robot kinematics ; Robot vision systems ; Vectors ; Vehicles |
Online access: | Order full text |
container_end_page | 943 |
---|---|
container_issue | |
container_start_page | 936 |
container_title | |
container_volume | |
creator | Melnyk, I. V. ; Hesch, J. A. ; Roumeliotis, S. I. |
description | In this paper, we study the problem of Cooperative Localization (CL) for two robots, each equipped with an Inertial Measurement Unit (IMU) and a camera. We present an algorithm that enables the robots to exploit common features, observed over a sliding-window time horizon, in order to improve the localization accuracy of both vehicles. In contrast to existing CL methods, which require robot-to-robot distance and/or bearing measurements to resolve the robots' relative position and orientation (pose), our approach recovers the relative pose through indirect information from the commonly observed features. Moreover, we analyze the system observability properties to determine how many degrees of freedom (d.o.f.) of the relative transformation can be computed under different measurement scenarios. Lastly, we present simulation results to evaluate the performance of the proposed method. |
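The abstract's central idea, recovering the robots' relative pose indirectly from commonly observed features rather than from direct robot-to-robot measurements, can be illustrated with a minimal sketch. Assuming each robot has already triangulated the same N features in its own frame, the relative rotation and translation follow from a closed-form least-squares alignment (Kabsch/Umeyama). This is an illustration only; the paper itself fuses such constraints inside a sliding-window estimator together with IMU data, and all names below are hypothetical:

```python
import numpy as np

def relative_pose_from_common_features(p_a, p_b):
    """Recover R, t such that p_b ~= R @ p_a + t, given the same N features
    expressed in each robot's frame (N x 3 arrays).
    Closed-form Kabsch/Umeyama alignment -- illustrative only; the paper's
    method is a sliding-window filter over IMU and camera measurements."""
    ca, cb = p_a.mean(axis=0), p_b.mean(axis=0)        # per-frame centroids
    H = (p_a - ca).T @ (p_b - cb)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution:
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Hypothetical data: six features seen by both robots.
rng = np.random.default_rng(0)
pts_a = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
pts_b = pts_a @ R_true.T + t_true                      # p_b = R p_a + t
R_est, t_est = relative_pose_from_common_features(pts_a, pts_b)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

With noise-free, non-degenerate features all six d.o.f. of the relative transformation are recovered here; the paper's observability analysis addresses which d.o.f. remain recoverable under weaker measurement scenarios.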
doi_str_mv | 10.1109/ICRA.2012.6225219 |
format | Conference Proceeding |
publisher | IEEE |
isbn | 9781467314039 ; 146731403X |
eisbn | 9781467314046 ; 1467315788 ; 1467314056 ; 9781467314053 ; 9781467315784 ; 1467314048 |
tpages | 8 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1050-4729 |
ispartof | 2012 IEEE International Conference on Robotics and Automation, 2012, p.936-943 |
issn | 1050-4729 ; 2577-087X |
language | eng |
recordid | cdi_ieee_primary_6225219 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Cameras ; Covariance matrix ; Robot kinematics ; Robot vision systems ; Vectors ; Vehicles |
title | Cooperative vision-aided inertial navigation using overlapping views |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T13%3A47%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Cooperative%20vision-aided%20inertial%20navigation%20using%20overlapping%20views&rft.btitle=2012%20IEEE%20International%20Conference%20on%20Robotics%20and%20Automation&rft.au=Melnyk,%20I.%20V.&rft.date=2012-01-01&rft.spage=936&rft.epage=943&rft.pages=936-943&rft.issn=1050-4729&rft.eissn=2577-087X&rft.isbn=9781467314039&rft.isbn_list=146731403X&rft_id=info:doi/10.1109/ICRA.2012.6225219&rft_dat=%3Cieee_6IE%3E6225219%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=9781467314046&rft.eisbn_list=1467315788&rft.eisbn_list=1467314056&rft.eisbn_list=9781467314053&rft.eisbn_list=9781467315784&rft.eisbn_list=1467314048&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=6225219&rfr_iscdi=true |