Effective Feature-Based Downward-Facing Monocular Visual Odometry
To achieve accurate pose estimation for robots in industrial applications and services, this brief proposes an effective feature-based downward-facing monocular visual odometry technology that uses an affordable sensor system and a systematic optimization approach. To extract more effective features...
Saved in:
Published in: | IEEE Transactions on Control Systems Technology, 2024-01, Vol. 32 (1), p. 266-273 |
---|---|
Main authors: | Lee, Hoyong; Lee, Hakjun; Kwak, Inveom; Sung, Chiwon; Han, Soohee |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Cameras; Downward-facing camera; Feature extraction; Industrial applications; Light emitting diodes; masking; monocular visual odometry; nonconvex optimization; Optimization; Optimization methods; Pose estimation; robot; Robot vision systems; Visual odometry |
Online access: | Order full text |
container_end_page | 273 |
---|---|
container_issue | 1 |
container_start_page | 266 |
container_title | IEEE transactions on control systems technology |
container_volume | 32 |
creator | Lee, Hoyong; Lee, Hakjun; Kwak, Inveom; Sung, Chiwon; Han, Soohee |
description | To achieve accurate pose estimation for robots in industrial applications and services, this brief proposes an effective feature-based downward-facing monocular visual odometry technology that uses an affordable sensor system and a systematic optimization approach. To extract more effective features simply and efficiently from images of the ground, even for small mobile systems, the proposed visual odometry system is designed in a lightweight and cost-effective manner: it uses an easily available LED, a single-channel time-of-flight (ToF) sensor, and a monocular camera. From the extracted features, potentially irrelevant ones are removed in advance using a masking algorithm and the measured velocity, which improves feature efficiency and reduces the computational burden. Finally, the optimal pose estimate is obtained explicitly by solving a nonconvex optimization problem, making the best use of the features. Experimental results show that the proposed method improves feature-tracking ability and pose-estimation accuracy. |
doi_str_mv | 10.1109/TCST.2023.3294843 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1063-6536 |
ispartof | IEEE transactions on control systems technology, 2024-01, Vol.32 (1), p.266-273 |
issn | 1063-6536; 1558-0865 |
language | eng |
recordid | cdi_ieee_primary_10195209 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Cameras; Downward-facing camera; Feature extraction; Industrial applications; Light emitting diodes; masking; monocular visual odometry; nonconvex optimization; Optimization; Optimization methods; Pose estimation; robot; Robot vision systems; Visual odometry |
title | Effective Feature-Based Downward-Facing Monocular Visual Odometry |
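
The description in this record outlines two computational steps: discarding feature matches that are inconsistent with the measured velocity, and then computing the pose explicitly from the surviving matches. The Python sketch below illustrates only the first idea, under simplifying assumptions that are not taken from the paper (planar ground, camera height known from the ToF sensor, known focal lengths, and an arbitrary pixel tolerance); the function name and signature are hypothetical, not the authors' implementation.

```python
# Minimal sketch (not the paper's algorithm): mask feature matches whose
# pixel displacement disagrees with the displacement predicted from the
# measured planar velocity, the ToF-measured camera height, and the focal
# lengths. Sign and axis conventions depend on how the camera is mounted.
import numpy as np

def mask_features_by_velocity(prev_pts, curr_pts, velocity_xy, height, fx, fy, dt, tol=3.0):
    """Return a boolean mask keeping matches consistent with the measured velocity.

    prev_pts, curr_pts : (N, 2) pixel coordinates of matched features
    velocity_xy        : (2,) measured planar velocity in m/s (assumed available)
    height             : camera-to-ground distance in metres (e.g., from the ToF sensor)
    fx, fy             : focal lengths in pixels
    dt                 : time between frames in seconds
    tol                : allowed deviation from the predicted flow, in pixels (hypothetical)
    """
    prev_pts = np.asarray(prev_pts, dtype=float)
    curr_pts = np.asarray(curr_pts, dtype=float)

    # Over flat ground, a translation of v*dt metres projects to roughly
    # (f / height) * v * dt pixels of image motion for a downward-facing camera.
    predicted_flow = np.array([fx, fy], dtype=float) * np.asarray(velocity_xy, dtype=float) * dt / height

    observed_flow = curr_pts - prev_pts
    deviation = np.linalg.norm(observed_flow - predicted_flow, axis=1)
    return deviation < tol
```

The point of such a filter is that, for a downward-facing camera over flat ground, the measured velocity and height pin down the expected optical flow up to noise, so inconsistent matches can be rejected cheaply before any optimization is run.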
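
The brief obtains the optimal pose estimate explicitly from a nonconvex optimization problem; that formulation is not reproduced here. As a stand-in illustration only, the sketch below recovers a planar rotation and translation from the masked correspondences using the classical closed-form least-squares (Kabsch/Umeyama) alignment, which conveys the general idea of turning matched ground features into an incremental pose; the function name and conventions are assumptions.

```python
# Illustrative stand-in (not the brief's formulation): closed-form 2-D rigid
# alignment of two matched point sets, giving R (2x2) and t (2,) such that
# curr ≈ R @ prev + t in the least-squares sense.
import numpy as np

def estimate_planar_pose(prev_pts, curr_pts):
    prev_pts = np.asarray(prev_pts, dtype=float)
    curr_pts = np.asarray(curr_pts, dtype=float)

    # Centre both point sets.
    mu_prev = prev_pts.mean(axis=0)
    mu_curr = curr_pts.mean(axis=0)
    P = prev_pts - mu_prev
    C = curr_pts - mu_curr

    # SVD of the cross-covariance yields the optimal rotation; the sign
    # correction keeps det(R) = +1 (a proper rotation, no reflection).
    U, _, Vt = np.linalg.svd(P.T @ C)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_curr - R @ mu_prev
    return R, t
```

Under the same assumptions, the two sketches chain naturally: compute `keep = mask_features_by_velocity(prev_pts, curr_pts, v_xy, h, fx, fy, dt)` first, then `R, t = estimate_planar_pose(prev_pts[keep], curr_pts[keep])` on the surviving matches.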