Online, self-supervised vision-based terrain classification in unstructured environments
Outdoor, unstructured and cross-country environments introduce several challenging problems such as highly complex scene geometry, ground cover variation, uncontrolled lighting, weather conditions and shadows for vision-based terrain classification of Unmanned Ground Vehicles (UGVs). Color stereo vision is mostly used for UGVs, but the present stereo vision technologies and processing algorithms are limited by cameras' field of view and maximum range...
Saved in:
Main authors: | Moghadam, P. ; Wijesoma, W.S. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Cameras ; Color ; Geometry ; History ; Inference algorithms ; Land vehicles ; Layout ; Machine vision ; online ; Robots ; self-supervised learning ; Stereo vision |
Online Access: | Order full text |
container_end_page | 3105 |
---|---|
container_issue | |
container_start_page | 3100 |
container_title | 2009 IEEE International Conference on Systems, Man and Cybernetics |
container_volume | |
creator | Moghadam, P. Wijesoma, W.S. |
description | Outdoor, unstructured and cross-country environments introduce several challenging problems such as highly complex scene geometry, ground cover variation, uncontrolled lighting, weather conditions and shadows for vision-based terrain classification of Unmanned Ground Vehicles (UGVs). Color stereo vision is mostly used for UGVs, but the present stereo vision technologies and processing algorithms are limited by cameras' field of view and maximum range, which causes the vehicles to get caught in cul-de-sacs that could possibly be avoided if the vehicle had access to information, or could make inferences, about the terrain well beyond the range of the vision system. The philosophy underlying the strategy proposed in this paper is to use the near-field stereo information associated with the terrain appearance to train a classifier to classify the far-field terrain well beyond the stereo range for each incoming image. To date, strategies based on this concept have been limited to constructing a single model and classifying once per frame. Although this single-model-per-frame approach can adapt to changing environments, it lacks memory or history of past information. The approach described in this study is to use an online, self-supervised learning algorithm that exploits multiple frames to develop adaptive models that can classify the different terrains the robot traverses. Preliminary but promising results of the proposed paradigm are presented using real data sets from the DARPA-LAGR project, which is the current gold standard for vision-based terrain classification using machine-learning techniques. This is followed by a proposal for future work on the development of robust terrain classifiers based on the proposed methodology. |
doi_str_mv | 10.1109/ICSMC.2009.5345942 |
format | Conference Proceeding |
fullrecord | IEEE record cdi_ieee_primary_5345942 (ieee_id 5345942). Title: Online, self-supervised vision-based terrain classification in unstructured environments. Creators: Moghadam, P. ; Wijesoma, W.S. Published in: 2009 IEEE International Conference on Systems, Man and Cybernetics (ICSMC), IEEE, October 2009, pp. 3100-3105 (6 pages). Identifiers: ISSN 1062-922X ; EISSN 2577-1655 ; ISBN 9781424427932, 1424427932 ; EISBN 9781424427949, 1424427940 ; DOI 10.1109/ICSMC.2009.5345942 ; LCCN 2008906680. Subjects: Cameras ; Color ; Geometry ; History ; Inference algorithms ; Land vehicles ; Layout ; Machine vision ; online ; Robots ; self-supervised learning ; Stereo vision. Source: IEEE Electronic Library (IEL) Conference Proceedings. |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1062-922X |
ispartof | 2009 IEEE International Conference on Systems, Man and Cybernetics, 2009, p.3100-3105 |
issn | 1062-922X 2577-1655 |
language | eng |
recordid | cdi_ieee_primary_5345942 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Cameras ; Color ; Geometry ; History ; Inference algorithms ; Land vehicles ; Layout ; Machine vision ; online ; Robots ; self-supervised learning ; Stereo vision |
title | Online, self-supervised vision-based terrain classification in unstructured environments |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T00%3A22%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Online,%20self-supervised%20vision-based%20terrain%20classification%20in%20unstructured%20environments&rft.btitle=2009%20IEEE%20International%20Conference%20on%20Systems,%20Man%20and%20Cybernetics&rft.au=Moghadam,%20P.&rft.date=2009-10&rft.spage=3100&rft.epage=3105&rft.pages=3100-3105&rft.issn=1062-922X&rft.eissn=2577-1655&rft.isbn=9781424427932&rft.isbn_list=1424427932&rft_id=info:doi/10.1109/ICSMC.2009.5345942&rft_dat=%3Cieee_6IE%3E5345942%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=9781424427949&rft.eisbn_list=1424427940&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=5345942&rfr_iscdi=true |
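The near-to-far paradigm described in the abstract — label near-field pixels using stereo geometry, maintain adaptive appearance models that persist across frames, then classify far-field pixels beyond stereo range — can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the running Gaussian color models, the class names, and the Mahalanobis-distance decision rule are all assumptions made for the sake of the example.

```python
import numpy as np

class OnlineTerrainModel:
    """Running Gaussian color model for one terrain class.

    Updated incrementally with near-field pixels labeled by stereo
    geometry, so the model accumulates history across frames instead
    of being rebuilt from scratch for every incoming image.
    """
    def __init__(self, dim=3):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))  # scatter matrix

    def update(self, samples):
        # Welford-style online update of mean and scatter.
        for x in samples:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += np.outer(delta, x - self.mean)

    def mahalanobis(self, pixels):
        # Squared Mahalanobis distance of each pixel to this model.
        cov = self.m2 / max(self.n - 1, 1) + 1e-6 * np.eye(len(self.mean))
        d = pixels - self.mean
        inv = np.linalg.inv(cov)
        return np.einsum('ij,jk,ik->i', d, inv, d)

def classify_far_field(models, pixels):
    """Assign each far-field pixel to the nearest terrain model."""
    labels = list(models.keys())
    dists = np.stack([m.mahalanobis(pixels) for m in models.values()])
    return [labels[i] for i in np.argmin(dists, axis=0)]

# Per frame: stereo labels near-field pixels, models keep learning,
# and the far field (beyond stereo range) is classified by appearance.
rng = np.random.default_rng(0)
ground = OnlineTerrainModel()
obstacle = OnlineTerrainModel()
ground.update(rng.normal([0.2, 0.5, 0.2], 0.05, (200, 3)))    # grassy green
obstacle.update(rng.normal([0.6, 0.3, 0.1], 0.05, (200, 3)))  # brown brush
models = {'ground': ground, 'obstacle': obstacle}
far_pixels = np.array([[0.21, 0.49, 0.2], [0.59, 0.31, 0.1]])
print(classify_far_field(models, far_pixels))
```

Because each model's statistics persist and keep updating as new near-field labels arrive, this sketch captures the memory across frames that the abstract contrasts with single-model-per-frame approaches.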