SQA3D: Situated Question Answering in 3D Scenes

We propose a new task to benchmark scene understanding of embodied agents: Situated Question Answering in 3D Scenes (SQA3D). Given a scene context (e.g., a 3D scan), SQA3D requires the tested agent to first understand its situation (position, orientation, etc.) in the 3D scene as described by text, then reason about its surrounding environment and answer a question under that situation. Based upon 650 scenes from ScanNet, we provide a dataset centered around 6.8k unique situations, along with 20.4k descriptions and 33.4k diverse reasoning questions for these situations. These questions examine a wide spectrum of reasoning capabilities for an intelligent agent, ranging from spatial relation comprehension to commonsense understanding, navigation, and multi-hop reasoning. SQA3D poses a significant challenge to current multi-modal, and especially 3D, reasoning models. We evaluate various state-of-the-art approaches and find that the best one achieves an overall score of only 47.20%, while amateur human participants reach 90.06%. We believe SQA3D could facilitate future embodied AI research with stronger situation understanding and reasoning capabilities.
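
The record does not specify the released file layout, so as a purely illustrative sketch: a situated QA sample pairs a scene with a situation text, a question, and a ground-truth answer, and the overall score quoted above (47.20% best model vs. 90.06% human) is an answer accuracy. All field and function names below are hypothetical placeholders, not the actual SQA3D schema.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class SQASample:
        """One situated QA sample (hypothetical fields, not the real schema)."""
        scene_id: str    # e.g., a ScanNet scan identifier
        situation: str   # text locating the agent (position, orientation, ...)
        question: str
        answer: str      # ground-truth answer string

    def overall_score(samples: List[SQASample],
                      predict: Callable[[str, str, str], str]) -> float:
        """Exact-match answer accuracy over the benchmark, the kind of
        overall score percentage the abstract reports."""
        correct = sum(
            predict(s.scene_id, s.situation, s.question).strip().lower()
            == s.answer.strip().lower()
            for s in samples
        )
        return correct / len(samples)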

Bibliographic Details
Authors: Ma, Xiaojian; Yong, Silong; Zheng, Zilong; Li, Qing; Ling, Yitao; Zhu, Song-Chun; Huang, Siyuan
Format: Dataset
Language: English
Published: Zenodo, 2023-01-21
DOI: 10.5281/zenodo.7544817
Online Access: https://commons.datacite.org/doi.org/10.5281/zenodo.7544817