IQ-VIO: adaptive visual inertial odometry via interference quantization under dynamic environments

Vision-based localization is susceptible to interference from dynamic objects in the environment, resulting in decreased localization accuracy and even tracking loss. Hence, sensor fusion with IMUs or motor encoders has been widely adopted to improve positioning accuracy and robustness in dynamic environments. Commonly used loosely coupled fusion localization methods cannot completely eliminate the error introduced by dynamic objects. In this paper, we propose a novel adaptive visual inertial odometry via interference quantization, namely IQ-VIO. To quantify the confidence of pose estimation through analysis of vision frames, we first introduce the feature coverage and the dynamic scene interference index based on image information entropy. Then, based on the interference index, we establish the IQ-VIO multi-sensor fusion model, which adaptively adjusts the measurement error covariance matrix of an extended Kalman filter to suppress and eliminate the impact of dynamic objects on localization. We verify the IQ-VIO algorithm on the KAIST Urban dataset and in real scenes. Results show that our method achieves favorable performance against other algorithms. In particular, under challenging scenes such as low texture, the relative pose error (RPE) of our algorithm decreases by at least twenty percent. Our approach effectively eliminates the impact of dynamic objects in the scene and achieves higher positioning accuracy and robustness than conventional methods.
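
As a concrete illustration of the quantification step described in the abstract, the following minimal Python sketch computes the two image-level cues it names: intensity-histogram entropy and grid-based feature coverage. The grid size, the entropy-difference combination, and the names image_entropy, feature_coverage, and interference_index are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def image_entropy(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]                      # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def feature_coverage(kpts_xy: np.ndarray, img_shape: tuple, grid: int = 8) -> float:
    """Fraction of grid cells containing at least one tracked feature."""
    h, w = img_shape
    cols = np.clip((kpts_xy[:, 0] * grid // w).astype(int), 0, grid - 1)
    rows = np.clip((kpts_xy[:, 1] * grid // h).astype(int), 0, grid - 1)
    occupied = np.zeros((grid, grid), dtype=bool)
    occupied[rows, cols] = True
    return float(occupied.mean())

def interference_index(prev_gray, curr_gray, kpts_xy, max_bits: float = 8.0) -> float:
    """Hypothetical interference index in [0, 1]: entropy change between
    consecutive frames, penalized by poor feature coverage."""
    d_entropy = abs(image_entropy(curr_gray) - image_entropy(prev_gray)) / max_bits
    coverage = feature_coverage(kpts_xy, curr_gray.shape)
    return float(np.clip(d_entropy + 0.5 * (1.0 - coverage), 0.0, 1.0))
```

A sharp entropy jump between consecutive frames or poor feature coverage both push the index toward 1, flagging the frame as a low-confidence source for pose estimation.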

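Such an index could then drive the adaptive fusion step the abstract describes: inflating the measurement noise covariance of the extended Kalman filter so that visual measurements from suspect frames are down-weighted. The linear inflation law and the gain alpha below are assumptions for illustration; the paper's actual adaptation rule may differ.

```python
import numpy as np

def ekf_visual_update(x, P, z, h_fn, H, R_base, iq, alpha=4.0):
    """EKF measurement update with interference-adaptive noise.

    x, P   : state mean (1-D vector) and covariance
    z      : visual measurement (e.g., a relative pose)
    h_fn   : measurement function; H is its Jacobian at x
    R_base : nominal measurement noise covariance
    iq     : interference index in [0, 1] from the vision front end
    alpha  : illustrative gain controlling how hard bad frames are down-weighted
    """
    R = R_base * (1.0 + alpha * iq)       # inflate R when vision is unreliable
    y = z - h_fn(x)                       # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain shrinks as R grows
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```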

Bibliographic details
Published in: Intelligent service robotics, 2023-11, Vol. 16 (5), p. 565-581
Main authors: Zhang, Huikun; Ye, Feng; Lai, Yizong; Li, Kuo; Xu, Jinze
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s11370-023-00478-2
ISSN: 1861-2776
EISSN: 1861-2784
Publisher: Springer Berlin Heidelberg (Berlin/Heidelberg)
Subjects:
Accuracy
Algorithms
Artificial Intelligence
Control
Covariance matrix
Deep learning
Dynamical Systems
Engineering
Entropy
Entropy (Information theory)
Error analysis
Extended Kalman filter
Geometry
Interference
Localization
Measurement
Mechatronics
Methods
Multisensor fusion
Original Research Paper
Pose estimation
Robotics
Robotics and Automation
Robustness
Semantics
Sensors
User Interfaces and Human Computer Interaction
Vibration
Vision