WildScenes: A benchmark for 2D and 3D semantic segmentation in large-scale natural environments

Recent progress in semantic scene understanding has primarily been enabled by the availability of semantically annotated bi-modal (camera and LiDAR) datasets in urban environments. However, such annotated datasets are also needed for natural, unstructured environments to enable semantic perception for applications, including conservation, search and rescue, environment monitoring, and agricultural automation. Therefore, we introduce WildScenes, a bi-modal benchmark dataset consisting of multiple large-scale, sequential traversals in natural environments, including semantic annotations in high-resolution 2D images and dense 3D LiDAR point clouds, and accurate 6-DoF pose information. The data is (1) trajectory-centric with accurate localization and globally aligned point clouds, (2) calibrated and synchronized to support bi-modal training and inference, and (3) containing different natural environments over 6 months to support research on domain adaptation. Our 3D semantic labels are obtained via an efficient, automated process that transfers the human-annotated 2D labels from multiple views into 3D point cloud sequences, thus circumventing the need for expensive and time-consuming human annotation in 3D. We introduce benchmarks on 2D and 3D semantic segmentation and evaluate a variety of recent deep-learning techniques to demonstrate the challenges in semantic segmentation in natural environments. We propose train-val-test splits for standard benchmarks as well as domain adaptation benchmarks and utilize an automated split generation technique to ensure the balance of class label distributions. The WildScenes benchmark webpage is https://csiro-robotics.github.io/WildScenes, and the data is publicly available at https://data.csiro.au/collection/csiro:61541.
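The abstract describes transferring human-annotated 2D semantic labels into 3D point clouds using calibration and 6-DoF pose information. The snippet below is a minimal illustrative sketch of that general idea only, not the authors' actual pipeline: the function name, the NumPy-based projection, and the toy shapes in the usage example are assumptions made for illustration.

```python
# Illustrative sketch: project world-frame LiDAR points into a labeled camera image
# and read back per-point class ids. Assumes a pinhole camera with intrinsics K and
# a rigid transform T_cam_world (world -> camera), both hypothetical here.
import numpy as np

def transfer_labels_to_points(points_world, label_image, K, T_cam_world, unlabeled=255):
    """Return an (N,) array of class ids for (N, 3) world-frame points.

    Points that fall behind the camera or outside the (H, W) label image
    receive the `unlabeled` id.
    """
    h, w = label_image.shape
    n = points_world.shape[0]

    # Homogeneous world points -> camera frame.
    pts_h = np.hstack([points_world, np.ones((n, 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]

    labels = np.full(n, unlabeled, dtype=label_image.dtype)
    in_front = pts_cam[:, 2] > 0.1  # keep only points in front of the camera

    # Pinhole projection of the valid points.
    uv = (K @ pts_cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[inside]
    labels[idx] = label_image[v[inside], u[inside]]
    return labels

if __name__ == "__main__":
    # Toy usage with random data, just to show the expected shapes.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-5, 5, size=(1000, 3))
    seg = rng.integers(0, 15, size=(480, 640), dtype=np.uint8)
    K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
    T = np.eye(4)
    T[2, 3] = 10.0  # place the scene in front of the camera
    print(transfer_labels_to_points(pts, seg, K, T).shape)  # (1000,)
```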

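The abstract also mentions an automated split generation technique that balances class label distributions across train/val/test. Below is a hedged sketch of one generic way such balancing can be done: random search over assignments scored by distance to the global class distribution. The function names, the L1 cost, and the trial-based search are assumptions and may differ from the procedure actually used in WildScenes.

```python
# Illustrative sketch: pick a train/val/test assignment of samples whose per-split
# class distributions stay close to the global class distribution.
import numpy as np

def split_cost(assignment, histograms, num_splits):
    """Sum of L1 distances between each split's class distribution and the global one."""
    global_dist = histograms.sum(axis=0)
    global_dist = global_dist / global_dist.sum()
    cost = 0.0
    for s in range(num_splits):
        h = histograms[assignment == s].sum(axis=0)
        if h.sum() == 0:
            return np.inf  # reject partitions with an empty split
        cost += np.abs(h / h.sum() - global_dist).sum()
    return cost

def balanced_split(histograms, fractions=(0.7, 0.15, 0.15), trials=2000, seed=0):
    """Random search: return the lowest-cost assignment found over `trials` proposals."""
    rng = np.random.default_rng(seed)
    fractions = np.asarray(fractions)
    num_samples = histograms.shape[0]
    best, best_cost = None, np.inf
    for _ in range(trials):
        assignment = rng.choice(len(fractions), size=num_samples, p=fractions)
        cost = split_cost(assignment, histograms, len(fractions))
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost

if __name__ == "__main__":
    # 200 hypothetical samples, 15 hypothetical classes: labeled-point counts per class.
    rng = np.random.default_rng(1)
    hists = rng.integers(0, 500, size=(200, 15)).astype(float)
    assignment, cost = balanced_split(hists)
    print("split sizes:", np.bincount(assignment), "cost:", round(cost, 4))
```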

Bibliographic details
Published in: The International journal of robotics research, 2024-09
Main authors: Vidanapathirana, Kavisha; Knights, Joshua; Hausler, Stephen; Cox, Mark; Ramezani, Milad; Jooste, Jason; Griffiths, Ethan; Mohamed, Shaheer; Sridharan, Sridha; Fookes, Clinton; Moghadam, Peyman
Format: Article
Language: English
Online access: Full text
DOI: 10.1177/02783649241278369
ISSN: 0278-3649
EISSN: 1741-3176
Source: SAGE Publications