WOMD-Reasoning: A Large-Scale Dataset and Benchmark for Interaction and Intention Reasoning in Driving
Saved in:
Main authors: | Li, Yiheng; Fan, Cunxin; Ge, Chongjian; Zhao, Zhihao; Li, Chenran; Xu, Chenfeng; Yao, Huaxiu; Tomizuka, Masayoshi; Zhou, Bolei; Tang, Chen; Ding, Mingyu; Zhan, Wei |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Robotics |
Online access: | Order full text |
---|---|
creator | Li, Yiheng; Fan, Cunxin; Ge, Chongjian; Zhao, Zhihao; Li, Chenran; Xu, Chenfeng; Yao, Huaxiu; Tomizuka, Masayoshi; Zhou, Bolei; Tang, Chen; Ding, Mingyu; Zhan, Wei |
description | We propose Waymo Open Motion Dataset-Reasoning (WOMD-Reasoning), a comprehensive large-scale dataset with 3 million Q&As built on WOMD, focusing on describing and reasoning about interactions and intentions in driving scenarios. Existing language datasets for driving primarily capture interactions caused by close distances. However, interactions induced by traffic rules and human intentions, which can occur over long distances, are not yet sufficiently covered. To address this, WOMD-Reasoning presents by far the largest multi-modal Q&A dataset on real-world driving scenarios, covering a wide range of driving topics, from map descriptions and motion status descriptions to narratives and analyses of agents' interactions, behaviors, and intentions. We further introduce Motion-LLaVA, a motion-language model fine-tuned on the proposed dataset with robust interaction reasoning capabilities. We benchmark its performance across various configurations, including different input modalities, reasoning techniques, and network architectures. The robust, diverse, and multi-modal nature of WOMD-Reasoning highlights its potential to advance future autonomous driving research and enable a broad range of applications. The dataset and its vision modality extension are available at https://waymo.com/open/download, and the code and prompts to build it are available at https://github.com/yhli123/WOMD-Reasoning. |
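For readers who want a feel for the data, the sketch below shows one way to iterate over the released Q&A pairs. It assumes, purely for illustration, that the data is unpacked into per-scenario JSON files under a local `womd_reasoning/train` directory, each holding a list of records with "question" and "answer" fields; the actual file layout and schema are defined by the release at the URLs above, not by this snippet.

```python
import json
from pathlib import Path

# Hypothetical layout: one JSON file per WOMD scenario, each containing a
# list of {"question": ..., "answer": ...} records. Paths and field names
# are illustrative assumptions, not the dataset's documented schema.
DATA_DIR = Path("womd_reasoning/train")

def iter_qa_pairs(data_dir: Path):
    """Yield (scenario_id, question, answer) triples from every JSON file."""
    for json_file in sorted(data_dir.glob("*.json")):
        records = json.loads(json_file.read_text())
        for record in records:
            yield json_file.stem, record["question"], record["answer"]

if __name__ == "__main__":
    for scenario_id, question, answer in iter_qa_pairs(DATA_DIR):
        print(f"[{scenario_id}] Q: {question}")
        print(f"A: {answer}")
        break  # print just one example pair
```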
doi_str_mv | 10.48550/arxiv.2407.04281 |
format | Article |
identifier | DOI: 10.48550/arxiv.2407.04281 |
language | eng |
recordid | cdi_arxiv_primary_2407_04281 |
source | arXiv.org |
subjects | Computer Science - Robotics |
title | WOMD-Reasoning: A Large-Scale Dataset and Benchmark for Interaction and Intention Reasoning in Driving |