TRAVEL: Traversable Ground and Above-Ground Object Segmentation Using Graph Representation of 3D LiDAR Scans

Perception of traversable regions and objects of interest from a 3D point cloud is one of the critical tasks in autonomous navigation. A ground vehicle needs to look for traversable terrains that are explorable by wheels. Then, to make safe navigation decisions, the segmentation of objects positioned on those terrains has to follow. However, over-segmentation and under-segmentation can negatively influence such navigation decisions. To that end, we propose TRAVEL, which performs traversable ground detection and object clustering simultaneously using the graph representation of a 3D point cloud. To segment the traversable ground, a point cloud is encoded into a graph structure, the tri-grid field, which treats each tri-grid as a node. Then, the traversable regions are searched and redefined by examining the local convexity and concavity of edges that connect nodes. On the other hand, our above-ground object segmentation employs a graph structure by representing a group of horizontally neighboring 3D points in a spherical-projection space as a node and vertical/horizontal relationships between nodes as edges. Fully leveraging the node-edge structure, the above-ground segmentation ensures real-time operation and mitigates over-segmentation. Through experiments using simulations, urban scenes, and our own datasets, we have demonstrated that our proposed traversable ground segmentation algorithm outperforms other state-of-the-art methods in terms of the conventional metrics, and that our newly proposed evaluation metrics are meaningful for assessing the above-ground segmentation. We will make the code and our own dataset available to the public at https://github.com/url-kaist/TRAVEL.
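The abstract describes the method only at a high level. As a rough illustration of the node-edge idea behind the ground segmentation, the Python sketch below region-grows a "traversable" label over a terrain grid. Everything in it is an assumption made for illustration, not the authors' implementation: it uses a plain square grid rather than the paper's tri-grid field, a per-cell least-squares plane fit, and simple tilt/step thresholds (`max_tilt_deg`, `max_step`) as a stand-in for the local convexity and concavity tests; names such as `traversable_cells` and `fit_plane` are hypothetical and do not come from the TRAVEL repository.

```python
# Minimal, illustrative sketch of graph-based traversable-ground labeling.
# NOT the TRAVEL implementation: square grid instead of a tri-grid field,
# plane fit per cell, and a breadth-first region growing that accepts a
# neighboring cell only if hypothetical tilt/step criteria are met.
import numpy as np
from collections import deque

def fit_plane(points):
    """Least-squares plane fit; returns (unit normal, centroid) or None if degenerate."""
    if len(points) < 3:
        return None
    centroid = points.mean(axis=0)
    # The smallest right singular vector of the centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:            # keep normals pointing upward
        normal = -normal
    return normal, centroid

def traversable_cells(cloud, cell_size=1.0, max_tilt_deg=25.0, max_step=0.15):
    """Label grid cells as traversable by region-growing from the cell at the origin."""
    # Bin points into square ground-plane cells (nodes of the grid graph).
    ij = np.floor(cloud[:, :2] / cell_size).astype(int)
    cells = {}
    for idx, key in enumerate(map(tuple, ij.tolist())):
        cells.setdefault(key, []).append(idx)

    planes = {k: fit_plane(cloud[v]) for k, v in cells.items()}
    planes = {k: p for k, p in planes.items() if p is not None}

    seed = (0, 0)                # assume the cell under the sensor/vehicle is ground
    if seed not in planes:
        return set()
    up = np.array([0.0, 0.0, 1.0])
    cos_max = np.cos(np.radians(max_tilt_deg))

    labeled, queue = {seed}, deque([seed])
    while queue:
        i, j = queue.popleft()
        n_cur, c_cur = planes[(i, j)]
        # Edges connect 4-neighboring cells; accept a neighbor if it is flat enough
        # and its centroid lies close to the current cell's plane (no abrupt step).
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (i + di, j + dj)
            if nb in labeled or nb not in planes:
                continue
            n_nb, c_nb = planes[nb]
            flat_enough = n_nb @ up > cos_max
            small_step = abs((c_nb - c_cur) @ n_cur) < max_step
            if flat_enough and small_step:
                labeled.add(nb)
                queue.append(nb)
    return labeled
```

Calling `traversable_cells(scan_xyz)` on an N×3 LiDAR scan would return the set of grid indices reachable from the origin under these thresholds; in the paper, the traversable regions are additionally redefined after the search, and the remaining above-ground points are then clustered using the spherical-projection node-edge graph.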


Bibliographic Details
Main Authors: Oh, Minho; Jung, Euigon; Lim, Hyungtae; Song, Wonho; Hu, Sumin; Lee, Eungchang Mason; Park, Junghee; Kim, Jaekyung; Lee, Jangwoo; Myung, Hyun
Format: Article
Language: English
Published: 2022-06-07 (arXiv)
Subjects: Computer Science - Robotics
DOI: 10.48550/arxiv.2206.03190
Online Access: https://arxiv.org/abs/2206.03190