LongVILA: Scaling Long-Context Visual Language Models for Long Videos

Long-context capability is critical for multi-modal foundation models, especially for long video understanding. We introduce LongVILA, a full-stack solution for long-context visual-language models by co-designing the algorithm and system. For model training, we upgrade existing VLMs to support long video understanding by incorporating two additional stages, i.e., long context extension and long video supervised fine-tuning. However, training on long video is computationally and memory intensive. We introduce the long-context Multi-Modal Sequence Parallelism (MM-SP) system, which efficiently parallelizes long video training and inference, enabling 2M context length training on 256 GPUs without any gradient checkpointing. LongVILA efficiently extends the number of video frames of VILA from 8 to 2048, achieving 99.8% accuracy in a 6,000-frame (more than 1 million tokens) video needle-in-a-haystack evaluation. LongVILA-7B demonstrates strong accuracy on 9 popular video benchmarks, e.g., 65.1% on VideoMME with subtitles. In addition, MM-SP is 2.1x-5.7x faster than ring-style sequence parallelism and 1.1x-1.4x faster than Megatron with hybrid context and tensor parallelism. Moreover, it integrates seamlessly with Hugging Face Transformers.
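The abstract names, but does not detail, how MM-SP distributes a long multimodal token stream across devices. As a rough illustration of the general idea behind sequence parallelism (not the paper's MM-SP sharding strategy), the sketch below splits one long token sequence into contiguous, near-equal shards, one per GPU rank, so per-device activation memory scales with sequence length divided by the number of devices. All function and variable names are hypothetical.

```python
# Illustrative sketch only: the general idea behind sequence parallelism,
# not the MM-SP implementation described in the paper. Names are hypothetical.
from typing import List


def shard_sequence(token_ids: List[int], world_size: int) -> List[List[int]]:
    """Split one long token sequence into contiguous, near-equal shards,
    one per GPU rank, so that per-device activation memory scales with
    len(token_ids) / world_size."""
    n = len(token_ids)
    base, rem = divmod(n, world_size)
    shards, start = [], 0
    for rank in range(world_size):
        size = base + (1 if rank < rem else 0)
        shards.append(token_ids[start:start + size])
        start += size
    return shards


if __name__ == "__main__":
    # A toy stand-in sequence; a 2,048-frame video at ~196 visual tokens per
    # frame would be on the order of 400K tokens.
    tokens = list(range(1_000))
    shards = shard_sequence(tokens, world_size=8)
    print([len(s) for s in shards])  # roughly equal shard lengths
```

In practice the shards must also account for interleaved image and text tokens so that every rank receives comparable work, which is the kind of imbalance a multimodal-aware scheme would need to address.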

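The needle-in-a-haystack result quoted above measures whether a model can recover one planted detail from thousands of frames. Below is a minimal, hypothetical sketch of that style of evaluation; it uses frame captions as stand-ins for real frames and assumes an answer function with the signature answer_fn(frames, question) -> frame_index, which is not the paper's actual evaluation harness.

```python
# Illustrative sketch of a video "needle-in-a-haystack" style evaluation.
# The model interface and all names here are hypothetical assumptions.
import random
from typing import Callable, List, Tuple


def build_haystack(num_frames: int, needle: str, depth: float) -> Tuple[List[str], int]:
    """Create a synthetic frame list (captions stand in for frames) and hide
    the needle at a relative depth in [0, 1]."""
    frames = [f"distractor frame {i}" for i in range(num_frames)]
    pos = min(int(depth * num_frames), num_frames - 1)
    frames[pos] = needle
    return frames, pos


def run_eval(answer_fn: Callable[[List[str], str], int],
             num_frames: int = 6000, trials: int = 20) -> float:
    """Return retrieval accuracy: the fraction of trials where the model
    points at the frame containing the needle."""
    correct = 0
    for _ in range(trials):
        frames, pos = build_haystack(num_frames,
                                     needle="NEEDLE: a red balloon",
                                     depth=random.random())
        correct += int(answer_fn(frames, "Which frame shows the red balloon?") == pos)
    return correct / trials


if __name__ == "__main__":
    # Dummy "model" that scans captions; a real VLM would consume frames and subtitles.
    oracle = lambda frames, q: next(i for i, f in enumerate(frames) if f.startswith("NEEDLE"))
    print(run_eval(oracle))  # 1.0 for the oracle
```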
Bibliographic Details
Main Authors: Chen, Yukang; Xue, Fuzhao; Li, Dacheng; Hu, Qinghao; Zhu, Ligeng; Li, Xiuyu; Fang, Yunhao; Tang, Haotian; Yang, Shang; Liu, Zhijian; He, Ethan; Yin, Hongxu; Molchanov, Pavlo; Kautz, Jan; Fan, Linxi; Zhu, Yuke; Lu, Yao; Han, Song
Format: Article
Language: English
Published: 2024-08-19
Subjects: Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition
DOI: 10.48550/arxiv.2408.10188
Source: arXiv.org
Online Access: https://arxiv.org/abs/2408.10188