Autonomous Mobile Robot Localization and Navigation Using a Hierarchical Map Representation Primarily Guided by Vision
While impressive progress has recently been made with autonomous vehicles, both indoors and on streets, autonomous localization and navigation in less constrained and more dynamic environments, such as outdoor pedestrian and bicycle‐friendly sites, remains a challenging problem. We describe a new approach that utilizes several visual perception modules—place recognition, landmark recognition, and road lane detection—supplemented by proximity cues from a planar laser range finder for obstacle avoidance. At the core of our system is a new hybrid topological/grid‐occupancy map that integrates the outputs from all perceptual modules, despite different latencies and time scales. Our approach allows for real‐time performance through a combination of fast but shallow processing modules that update the map's state while slower but more discriminating modules are still computing. We validated our system using a ground vehicle that autonomously traversed three outdoor routes several times, each 400 m or longer, on a university campus. The routes featured different road types, environmental hazards, moving pedestrians, and service vehicles. In total, the robot logged over 10 km of successful recorded experiments, driving within a median of 1.37 m laterally of the center of the road, and localizing within 0.97 m (median) longitudinally of its true location along the route.
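The core architectural idea in the abstract is a shared map that accepts asynchronous updates from perception modules with very different latencies: fast, shallow cues (laser proximity) update state immediately, while slow, discriminative results (place and landmark recognition) fold in whenever they finish. The following is a minimal sketch of that pattern, not the authors' implementation; all names (`HybridMap`, `update_fast`, `update_slow`) are hypothetical.

```python
import threading

# Hypothetical sketch: a map combining a topological belief over route
# nodes with a local occupancy grid, updated asynchronously by modules
# running at different rates. Not the paper's actual code.
class HybridMap:
    def __init__(self, num_nodes, grid_size=100):
        self.belief = [1.0 / num_nodes] * num_nodes          # topological localization
        self.grid = [[0.0] * grid_size for _ in range(grid_size)]  # occupancy grid
        self.lock = threading.Lock()

    def update_fast(self, hits):
        """Fast, shallow update: stamp laser proximity cues into the grid.
        `hits` is a list of (row, col, occupancy) tuples."""
        with self.lock:
            for row, col, occ in hits:
                self.grid[row][col] = occ

    def update_slow(self, likelihoods):
        """Slow, discriminative update: fuse place/landmark recognition
        likelihoods (one per topological node) whenever they arrive."""
        with self.lock:
            self.belief = [b * l for b, l in zip(self.belief, likelihoods)]
            total = sum(self.belief) or 1.0
            self.belief = [b / total for b in self.belief]

# Usage: the fast loop keeps the grid current while a slower
# recognition result is still being computed elsewhere.
m = HybridMap(num_nodes=4)
m.update_fast([(10, 12, 0.9), (10, 13, 0.8)])
m.update_slow([0.1, 0.6, 0.2, 0.1])
```

Keeping the expensive recognition step out of the fast loop is what the abstract credits for real-time performance; the lock here merely stands in for whatever synchronization the real system uses.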
Published in: | Journal of Field Robotics, 2014-05, Vol. 31 (3), p. 408-440 |
---|---|
Authors: | Siagian, Christian; Chang, Chin Kai; Itti, Laurent |
Format: | Article |
Language: | English |
Online access: | Full text |
DOI: | 10.1002/rob.21505 |
Publisher: | Hoboken: Blackwell Publishing Ltd |
ISSN: | 1556-4959 |
EISSN: | 1556-4967 |
Source: | Wiley Online Library Journals Frontfile Complete |
Subjects: | Autonomous; Modules; Navigation; Position (location); Recognition; Roads; Robots; Vehicles |