mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models
Saved in:
Main authors: | Ye, Jiabo; Xu, Haiyang; Liu, Haowei; Hu, Anwen; Yan, Ming; Qian, Qi; Zhang, Ji; Huang, Fei; Zhou, Jingren |
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
Online access: | Order full text |
creator | Ye, Jiabo; Xu, Haiyang; Liu, Haowei; Hu, Anwen; Yan, Ming; Qian, Qi; Zhang, Ji; Huang, Fei; Zhou, Jingren |
description | Multi-modal Large Language Models (MLLMs) have demonstrated remarkable
capabilities in executing instructions for a variety of single-image tasks.
Despite this progress, significant challenges remain in modeling long image
sequences. In this work, we introduce the versatile multi-modal large language
model, mPLUG-Owl3, which enhances the capability for long image-sequence
understanding in scenarios that incorporate retrieved image-text knowledge,
interleaved image-text, and lengthy videos. Specifically, we propose novel
hyper attention blocks to efficiently integrate vision and language into a
common language-guided semantic space, thereby facilitating the processing of
extended multi-image scenarios. Extensive experimental results suggest that
mPLUG-Owl3 achieves state-of-the-art performance among models of similar size
on single-image, multi-image, and video benchmarks. Moreover, we propose a
challenging long visual sequence evaluation named Distractor Resistance to
assess the ability of models to maintain focus amidst distractions. Finally,
with the proposed architecture, mPLUG-Owl3 demonstrates outstanding performance
on ultra-long visual sequence inputs. We hope that mPLUG-Owl3 can contribute to
the development of more efficient and powerful multimodal large language
models. |
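
The record above is metadata-only, but the mechanism the abstract names can be illustrated. Below is a minimal PyTorch sketch of a hyper-attention-style block, assuming a text-to-vision cross-attention branch run in parallel with ordinary language self-attention and fused through a learned per-token gate; the class name, dimensions, shared-LayerNorm choice, and gating scheme are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class HyperAttentionSketch(nn.Module):
    """Hypothetical sketch of a hyper attention block: text self-attention
    runs in parallel with text-to-vision cross-attention, and a learned
    gate adaptively mixes in the visual branch. Not the official
    mPLUG-Owl3 implementation."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)  # shared norm for both branches (assumption)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, 1)  # per-token adaptive gate (assumption)

    def forward(self, text: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
        # text:   (batch, text_len, dim) language hidden states
        # vision: (batch, vis_len, dim) visual features projected into the LLM space
        h = self.norm(text)
        self_out, _ = self.self_attn(h, h, h)               # ordinary self-attention
        cross_out, _ = self.cross_attn(h, vision, vision)   # query text against visual features
        g = torch.sigmoid(self.gate(h))                     # 0..1 mixing weight per text token
        return text + self_out + g * cross_out              # residual + gated visual injection

# Usage: fuse two images (64 tokens each) into a 16-token text sequence.
block = HyperAttentionSketch(dim=512)
text = torch.randn(2, 16, 512)
vision = torch.randn(2, 128, 512)
print(block(text, vision).shape)  # torch.Size([2, 16, 512])
```

One plausible reading of why such a design helps with long sequences: the visual tokens never enter the self-attention sequence itself, so the text length stays fixed no matter how many images are interleaved, and each text token decides via the gate how much visual context to absorb.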
doi_str_mv | 10.48550/arxiv.2408.04840 |
format | Article |
fullrecord | arXiv:2408.04840; published 2024-08-08; open access under http://creativecommons.org/licenses/by/4.0; view record at https://arxiv.org/abs/2408.04840 |
identifier | DOI: 10.48550/arxiv.2408.04840 |
language | eng |
recordid | cdi_arxiv_primary_2408_04840 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
title | mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-17T03%3A37%3A34IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=mPLUG-Owl3:%20Towards%20Long%20Image-Sequence%20Understanding%20in%20Multi-Modal%20Large%20Language%20Models&rft.au=Ye,%20Jiabo&rft.date=2024-08-08&rft_id=info:doi/10.48550/arxiv.2408.04840&rft_dat=%3Carxiv_GOX%3E2408_04840%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |