Vision-based localization algorithm based on landmark matching, triangulation, reconstruction, and comparison

Many generic position-estimation algorithms are vulnerable to ambiguity introduced by nonunique landmarks. Also, the available high-dimensional image data is not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating the list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented, using an interpreted language, on a robot equipped with a panoramic vision system. Empirical data shows remarkable improvement in accuracy when compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
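
The abstract outlines four stages: extracting and (roughly) matching natural landmarks, triangulating candidate positions from the matches, and then reconstructing and comparing expected views to rank those candidates. The Python sketch below illustrates that flow on a toy 2D map with bearing-only observations; the map contents, the substring-based matching, the assumption of a known robot heading, and the helper names (intersect_rays, candidate_positions, reconstruction_error) are all illustrative assumptions rather than the paper's implementation.

```python
"""Toy sketch of an LTRC-style pipeline: (1) rough landmark matching,
(2) triangulation of candidate positions, (3) reconstruction and
(4) comparison to rank the candidates. All names and data here are
illustrative assumptions, not the authors' implementation."""
import itertools
import math

# Assumed landmark map: label -> (x, y) world position.
WORLD_MAP = {
    "door": (0.0, 0.0),
    "pillar": (6.0, 0.0),
    "corner": (6.0, 4.0),
    "shelf": (0.0, 4.0),
}


def intersect_rays(p1, a1, p2, a2):
    """Intersect two rays cast from p1 and p2 with absolute headings a1, a2
    (radians); returns the intersection point, or None if near-parallel."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])


def candidate_positions(observations):
    """Matching + triangulation: each pair of observed bearings, combined
    with each pair of compatible map landmarks, yields one candidate
    position. Ambiguous matches simply produce extra candidates."""
    candidates = []
    for (lab1, b1), (lab2, b2) in itertools.combinations(observations, 2):
        for name1, pos1 in WORLD_MAP.items():
            for name2, pos2 in WORLD_MAP.items():
                if name1 == name2 or lab1 not in name1 or lab2 not in name2:
                    continue  # crude label match stands in for feature matching
                # The robot lies along the reverse of each observed bearing.
                p = intersect_rays(pos1, b1 + math.pi, pos2, b2 + math.pi)
                if p is not None:
                    candidates.append(p)
    return candidates


def reconstruction_error(position, observations):
    """Reconstruction + comparison: predict each landmark's bearing from a
    candidate position and accumulate the angular mismatch (lower is better)."""
    error = 0.0
    for label, bearing in observations:
        predicted = (
            math.atan2(py - position[1], px - position[0])
            for name, (px, py) in WORLD_MAP.items() if label in name
        )
        error += min(
            (abs(math.remainder(p - bearing, math.tau)) for p in predicted),
            default=math.pi,
        )
    return error


if __name__ == "__main__":
    # Synthetic bearings (robot heading assumed known) to three landmarks
    # seen in a panoramic image, taken from a robot standing at (3, 2).
    obs = [("door", math.atan2(-2.0, -3.0)),
           ("pillar", math.atan2(-2.0, 3.0)),
           ("corner", math.atan2(2.0, 3.0))]
    ranked = sorted(candidate_positions(obs),
                    key=lambda p: reconstruction_error(p, obs))
    print("best estimate:", ranked[0] if ranked else None)
```

Ranking by reconstruction error is what lets the method tolerate ambiguous label matches: wrong pairings still produce triangulated candidates, but they score poorly in the comparison step.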

Bibliographic Details
Published in: IEEE transactions on robotics, 2005-04, Vol.21 (2), p.217-226
Main authors: Yuen, D.C.K., MacDonald, B.A.
Format: Article
Language: eng
container_end_page 226
container_issue 2
container_start_page 217
container_title IEEE transactions on robotics
container_volume 21
creator Yuen, D.C.K.
MacDonald, B.A.
description Many generic position-estimation algorithms are vulnerable to ambiguity introduced by nonunique landmarks. Also, the available high-dimensional image data is not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating the list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented, using an interpreted language, on a robot equipped with a panoramic vision system. Empirical data shows remarkable improvement in accuracy when compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
doi_str_mv 10.1109/TRO.2004.835452
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1552-3098
ispartof IEEE transactions on robotics, 2005-04, Vol.21 (2), p.217-226
issn 1552-3098
1941-0468
language eng
recordid cdi_crossref_primary_10_1109_TRO_2004_835452
source IEEE Electronic Library (IEL)
subjects Algorithms
landmark matching, triangulation, reconstruction, and comparison (LTRC)
Applied sciences
Comparative analysis
Computer science; control theory; systems
Control theory. Systems
Data mining
Exact sciences and technology
Image reconstruction
Image sensors
Insects
Landmark matching
Machine vision
Miscellaneous
Mobile robots
natural landmark
Navigation
panoramic image
random sample consensus (RANSAC)
reconstruction
Robot localization
Robot sensing systems
Robot vision systems
Robotics
Robots
triangulation
Vision systems
vision-based localization
title Vision-based localization algorithm based on landmark matching, triangulation, reconstruction, and comparison