Video-XL: Extra-Long Vision Language Model for Hour-Scale Video Understanding

Long video understanding poses a significant challenge for current Multi-modal Large Language Models (MLLMs). Notably, MLLMs are constrained by their limited context lengths and by the substantial cost of processing long videos. Although several existing methods attempt to reduce the number of visual tokens, their strategies run into severe bottlenecks that restrict MLLMs' ability to perceive fine-grained visual details. In this work, we propose Video-XL, a novel approach that leverages MLLMs' inherent key-value (KV) sparsification capacity to condense the visual input. Specifically, we introduce a new special token, the Visual Summarization Token (VST), for each interval of the video, which summarizes the visual information within the interval as its associated KV. The VST module is trained by instruction fine-tuning, with two optimization strategies: 1. Curriculum learning, where the VST progresses from small (easy) to large (hard) compression ratios. 2. Composite data curation, which integrates single-image, multi-image, and synthetic data to overcome the scarcity of long-video instruction data. Compression quality is further improved by dynamic compression, which customizes the compression granularity based on the information density of different video intervals. Video-XL's effectiveness is verified from three aspects. First, it achieves superior long-video understanding capability, outperforming state-of-the-art models of comparable size across multiple popular benchmarks. Second, it effectively preserves video information, with minimal compression loss even at a 16x compression ratio. Third, it delivers outstanding cost-effectiveness, enabling high-quality processing of thousands of frames on a single A100 GPU.
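
The record contains no code; as a reading aid, below is a minimal, hypothetical sketch of the VST idea described in the abstract, assuming toy dimensions and a single attention layer standing in for the full MLLM. The class name VSTCompressor and the helpers compress_interval and dynamic_intervals are illustrative inventions, not the authors' API, and the frame-difference heuristic is only a crude stand-in for whatever information-density criterion the paper actually uses.

```python
import torch
import torch.nn as nn

# Illustrative sketch of VST-style KV condensation (not the authors' code).
# Assumption: one attention layer stands in for the MLLM; `vst` is a single
# learned embedding appended to each interval of visual tokens.

class VSTCompressor(nn.Module):
    def __init__(self, dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.vst = nn.Parameter(torch.randn(1, 1, dim))   # learned Visual Summarization Token
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.k_proj = nn.Linear(dim, dim)                 # toy KV projections
        self.v_proj = nn.Linear(dim, dim)

    def compress_interval(self, tokens: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        """Condense one interval's visual tokens into a single (K, V) pair."""
        x = torch.cat([tokens, self.vst.expand(tokens.size(0), -1, -1)], dim=1)
        x, _ = self.attn(x, x, x)                         # VST attends to the whole interval
        summary = x[:, -1:, :]                            # keep only the VST position
        return self.k_proj(summary), self.v_proj(summary)

def dynamic_intervals(frames: torch.Tensor, base: int = 8) -> list[torch.Tensor]:
    """Toy stand-in for dynamic compression: shorter intervals (finer
    granularity) when frames change a lot, using the mean frame-difference
    norm as a crude information-density proxy."""
    diffs = (frames[:, 1:] - frames[:, :-1]).norm(dim=-1).mean().item()
    size = max(2, base // 2) if diffs > 1.0 else base
    return list(frames.split(size, dim=1))

# Usage: 1 video, 32 "frames", 64-dim features -> one (K, V) pair per interval.
frames = torch.randn(1, 32, 64)
model = VSTCompressor()
kv_cache = [model.compress_interval(iv) for iv in dynamic_intervals(frames)]
print(len(kv_cache), kv_cache[0][0].shape)  # interval count depends on the heuristic; K is (1, 1, 64)
```

The design point the abstract emphasizes is that the condensed cache scales with the number of intervals rather than the number of frames, which is what makes thousand-frame inputs tractable on a single GPU.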

Bibliographic Details
Main Authors: Shu, Yan; Liu, Zheng; Zhang, Peitian; Qin, Minghao; Zhou, Junjie; Liang, Zhengyang; Huang, Tiejun; Zhao, Bo
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: https://arxiv.org/abs/2409.14485
Published: 2024-09-22
DOI: 10.48550/arxiv.2409.14485
Source: arXiv.org