USTC FLICAR: A sensors fusion dataset of LiDAR-inertial-camera for heavy-duty autonomous aerial work robots

In this paper, we present the USTC FLICAR Dataset, which is dedicated to the development of simultaneous localization and mapping and precise 3D reconstruction of the workspace for heavy-duty autonomous aerial work robots. In recent years, numerous public datasets have played significant roles in the advancement of autonomous cars and unmanned aerial vehicles (UAVs). …

Full description

Saved in:
Bibliographic Details
Published in: The International journal of robotics research 2023-09, Vol.42 (11), p.1015-1047
Main Authors: Wang, Ziming, Liu, Yujiang, Duan, Yifan, Li, Xingchen, Zhang, Xinran, Ji, Jianmin, Dong, Erbao, Zhang, Yanyong
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_end_page 1047
container_issue 11
container_start_page 1015
container_title The International journal of robotics research
container_volume 42
creator Wang, Ziming
Liu, Yujiang
Duan, Yifan
Li, Xingchen
Zhang, Xinran
Ji, Jianmin
Dong, Erbao
Zhang, Yanyong
description In this paper, we present the USTC FLICAR Dataset, which is dedicated to the development of simultaneous localization and mapping and precise 3D reconstruction of the workspace for heavy-duty autonomous aerial work robots. In recent years, numerous public datasets have played significant roles in the advancement of autonomous cars and unmanned aerial vehicles (UAVs). However, these two platforms differ from aerial work robots: UAVs are limited in their payload capacity, while cars are restricted to two-dimensional movements. To fill this gap, we create the “Giraffe” mapping robot based on a bucket truck, which is equipped with a variety of well-calibrated and synchronized sensors: four 3D LiDARs, two stereo cameras, two monocular cameras, Inertial Measurement Units (IMUs), and a GNSS/INS system. A laser tracker is used to record the millimeter-level ground truth positions. We also make its ground twin, the “Okapi” mapping robot, to gather data for comparison. The proposed dataset extends the typical autonomous driving sensing suite to aerial scenes, demonstrating the potential of combining autonomous driving perception systems with bucket trucks to create a versatile autonomous aerial working platform. Moreover, based on the Segment Anything Model (SAM), we produce the Semantic FLICAR dataset, which provides fine-grained semantic segmentation annotations for multimodal continuous data in both temporal and spatial dimensions. The dataset is available for download at: https://ustc-flicar.github.io/.
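As a quick orientation for readers who download the data, the following minimal sketch shows how a multimodal sequence of this kind could be replayed. It assumes the sequences are distributed as ROS bags and uses hypothetical topic names (/velodyne_points, /imu/data, /camera/left/image_raw); consult the documentation at https://ustc-flicar.github.io/ for the actual bag layout and topic names.

# Minimal sketch (not from the paper): iterate over LiDAR, IMU, and camera
# messages in one recorded sequence. Assumes ROS1 with the rosbag package;
# the bag filename and topic names are hypothetical placeholders.
import rosbag

TOPICS = ["/velodyne_points", "/imu/data", "/camera/left/image_raw"]

with rosbag.Bag("giraffe_sequence.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=TOPICS):
        # t is a rospy.Time; print a coarse trace of the interleaved streams.
        print(f"{t.to_sec():.3f}  {topic}  {type(msg).__name__}")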
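The abstract notes that the Semantic FLICAR annotations were produced with the Segment Anything Model (SAM). The paper's exact annotation pipeline is not reproduced here, but a minimal sketch of SAM's automatic mask generation, using the public segment-anything package and a hypothetical image filename, looks like this:

# Minimal sketch of SAM automatic mask generation (not the authors' full
# pipeline). Requires the segment-anything package and the released ViT-H
# checkpoint; "frame_000000.png" is a hypothetical image filename.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("frame_000000.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: "segmentation", "area", "bbox", ...
print(f"SAM proposed {len(masks)} masks for this frame")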
doi_str_mv 10.1177/02783649231195650
format Article
fulltext fulltext
identifier ISSN: 0278-3649
ispartof The International journal of robotics research, 2023-09, Vol.42 (11), p.1015-1047
issn 0278-3649
1741-3176
language eng
recordid cdi_proquest_journals_2886584328
source Access via SAGE
subjects Automobiles
Autonomous cars
Cameras
Datasets
Driving
Image reconstruction
Inertial platforms
Multisensor fusion
Robots
Semantic segmentation
Semantics
Sensors
Simultaneous localization and mapping
Unmanned aerial vehicles
title USTC FLICAR: A sensors fusion dataset of LiDAR-inertial-camera for heavy-duty autonomous aerial work robots
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-03T11%3A10%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=USTC%20FLICAR:%20A%20sensors%20fusion%20dataset%20of%20LiDAR-inertial-camera%20for%20heavy-duty%20autonomous%20aerial%20work%20robots&rft.jtitle=The%20International%20journal%20of%20robotics%20research&rft.au=Wang,%20Ziming&rft.date=2023-09&rft.volume=42&rft.issue=11&rft.spage=1015&rft.epage=1047&rft.pages=1015-1047&rft.issn=0278-3649&rft.eissn=1741-3176&rft_id=info:doi/10.1177/02783649231195650&rft_dat=%3Cproquest_cross%3E2886584328%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2886584328&rft_id=info:pmid/&rft_sage_id=10.1177_02783649231195650&rfr_iscdi=true