3D-LLM: Injecting the 3D World into Large Language Models
Large language models (LLMs) and Vision-Language Models (VLMs) have been proven to excel at multiple tasks, such as commonsense reasoning. Powerful as these models can be, they are not grounded in the 3D physical world, which involves richer concepts such as spatial relationships, affordances, physics, layout, and so on.
Saved in:
Main Authors: | Hong, Yining; Zhen, Haoyu; Chen, Peihao; Zheng, Shuhong; Du, Yilun; Chen, Zhenfang; Gan, Chuang |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Computer Science - Robotics |
Online Access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Hong, Yining; Zhen, Haoyu; Chen, Peihao; Zheng, Shuhong; Du, Yilun; Chen, Zhenfang; Gan, Chuang |
description | Large language models (LLMs) and Vision-Language Models (VLMs) have been
proven to excel at multiple tasks, such as commonsense reasoning. Powerful as
these models can be, they are not grounded in the 3D physical world, which
involves richer concepts such as spatial relationships, affordances, physics,
layout, and so on. In this work, we propose to inject the 3D world into large
language models and introduce a whole new family of 3D-LLMs. Specifically,
3D-LLMs can take 3D point clouds and their features as input and perform a
diverse set of 3D-related tasks, including captioning, dense captioning, 3D
question answering, task decomposition, 3D grounding, 3D-assisted dialogue,
navigation, and so on. Using three types of prompting mechanisms that we
design, we are able to collect over 300k 3D-language data pairs covering these
tasks. To efficiently train 3D-LLMs, we first utilize a 3D feature extractor
that obtains 3D features from rendered multi-view images. Then, we use 2D VLMs
as our backbones to train our 3D-LLMs. By introducing a 3D localization
mechanism, 3D-LLMs can better capture 3D spatial information. Experiments on
ScanQA show that our model outperforms state-of-the-art baselines by a large
margin (e.g., the BLEU-1 score surpasses the state-of-the-art score by 9%).
Furthermore, experiments on our held-in datasets for 3D captioning, task
decomposition, and 3D-assisted dialogue show that our model outperforms 2D
VLMs. Qualitative examples also show that our model could perform more tasks
beyond the scope of existing LLMs and VLMs. Project Page:
https://vis-www.cs.umass.edu/3dllm/ |
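The abstract above only outlines the pipeline: per-view 2D features are lifted onto the point cloud, a 3D localization signal is added, and the resulting visual tokens are fed to a 2D VLM backbone. The PyTorch snippet below is a minimal sketch of that data flow under stated assumptions, not the authors' implementation: the module names (MultiViewFeatureLifter, ToyVLMBackbone, Simple3DLLMInput), the mean-pooling aggregation over views, the binned xyz location embedding, and the linear stand-in for the VLM backbone are all illustrative choices; the actual 3D-LLM builds its 3D features and localization mechanism differently and plugs into pretrained 2D VLMs.

```python
import torch
import torch.nn as nn


class MultiViewFeatureLifter(nn.Module):
    """Lift per-view 2D backbone features onto a 3D point cloud by averaging
    the features of the pixel each point projects to in every view (a simple
    stand-in for the paper's 3D feature extractor)."""

    def forward(self, view_feats, point_to_pixel):
        # view_feats: (V, C, H, W) features from a frozen 2D encoder
        # point_to_pixel: (N, V, 2) integer (u, v) pixel coords per point/view
        V, C, H, W = view_feats.shape
        N = point_to_pixel.shape[0]
        per_view = torch.zeros(N, V, C)
        for v in range(V):
            u = point_to_pixel[:, v, 0].clamp(0, W - 1)
            w = point_to_pixel[:, v, 1].clamp(0, H - 1)
            per_view[:, v] = view_feats[v, :, w, u].T  # gather (C, N) -> (N, C)
        return per_view.mean(dim=1)                    # (N, C) per-point features


class ToyVLMBackbone(nn.Module):
    """Linear stand-in for a pretrained 2D VLM so the sketch runs end to end."""

    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden_dim)

    def forward(self, visual_tokens):
        return self.proj(visual_tokens)


class Simple3DLLMInput(nn.Module):
    """Bucket xyz into per-axis bins, add a learned location embedding to each
    point feature, and hand the result to the (stand-in) VLM backbone."""

    def __init__(self, feat_dim=256, hidden_dim=512, num_bins=32):
        super().__init__()
        self.lifter = MultiViewFeatureLifter()
        self.loc_emb = nn.Embedding(3 * num_bins, feat_dim)
        self.backbone = ToyVLMBackbone(feat_dim, hidden_dim)
        self.num_bins = num_bins

    def forward(self, view_feats, point_to_pixel, xyz):
        feats = self.lifter(view_feats, point_to_pixel)            # (N, C)
        # normalize coordinates to [0, 1) and discretize each axis separately
        lo, hi = xyz.min(dim=0).values, xyz.max(dim=0).values
        bins = ((xyz - lo) / (hi - lo + 1e-6) * self.num_bins).long()
        bins = bins.clamp(max=self.num_bins - 1)                   # (N, 3)
        offsets = torch.tensor([0, self.num_bins, 2 * self.num_bins])
        loc = self.loc_emb(bins + offsets).sum(dim=1)              # (N, C)
        return self.backbone(feats + loc)                          # (N, hidden)


if __name__ == "__main__":
    model = Simple3DLLMInput()
    view_feats = torch.randn(4, 256, 32, 32)              # 4 rendered views
    point_to_pixel = torch.randint(0, 32, (1000, 4, 2))   # toy projection indices
    xyz = torch.rand(1000, 3)                             # toy point coordinates
    print(model(view_feats, point_to_pixel, xyz).shape)   # torch.Size([1000, 512])
```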
doi_str_mv | 10.48550/arxiv.2307.12981 |
format | Article |
creationdate | 2023-07-24 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
linktorsrc | https://arxiv.org/abs/2307.12981 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2307.12981 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2307_12981 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Computer Science - Robotics |
title | 3D-LLM: Injecting the 3D World into Large Language Models |