aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception

Autonomous driving is a popular research area within the computer vision research community. Since autonomous vehicles are highly safety-critical, ensuring robustness is essential for real-world deployment. While several public multimodal datasets are accessible, they mainly comprise two sensor modalities (camera, LiDAR) which are not well suited for adverse weather. In addition, they lack far-range annotations, making it harder to train neural networks that are the base of a highway assistant function of an autonomous vehicle. Therefore, we introduce a multimodal dataset for robust autonomous driving with long-range perception. The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view. The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain and is annotated with 3D bounding boxes with consistent identifiers across frames. Furthermore, we trained unimodal and multimodal baseline models for 3D object detection. Data are available at \url{https://github.com/aimotive/aimotive_dataset}.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Matuszka, Tamás, Barton, Iván, Butykai, Ádám, Hajas, Péter, Kiss, Dávid, Kovács, Domonkos, Kunsági-Máté, Sándor, Lengyel, Péter, Németh, Gábor, Pető, Levente, Ribli, Dezső, Szeghy, Dávid, Vajna, Szabolcs, Varga, Bálint
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Matuszka, Tamás
Barton, Iván
Butykai, Ádám
Hajas, Péter
Kiss, Dávid
Kovács, Domonkos
Kunsági-Máté, Sándor
Lengyel, Péter
Németh, Gábor
Pető, Levente
Ribli, Dezső
Szeghy, Dávid
Vajna, Szabolcs
Varga, Bálint
description Autonomous driving is a popular research area within the computer vision research community. Since autonomous vehicles are highly safety-critical, ensuring robustness is essential for real-world deployment. While several public multimodal datasets are accessible, they mainly comprise two sensor modalities (camera, LiDAR) which are not well suited for adverse weather. In addition, they lack far-range annotations, making it harder to train neural networks that are the base of a highway assistant function of an autonomous vehicle. Therefore, we introduce a multimodal dataset for robust autonomous driving with long-range perception. The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view. The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain and is annotated with 3D bounding boxes with consistent identifiers across frames. Furthermore, we trained unimodal and multimodal baseline models for 3D object detection. Data are available at \url{https://github.com/aimotive/aimotive_dataset}.
doi_str_mv 10.48550/arxiv.2211.09445
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2211.09445
language eng
recordid cdi_arxiv_primary_2211_09445
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception