LongVILA: Scaling Long-Context Visual Language Models for Long Videos
Format: Article
Language: English
Online Access: Order full text
Abstract: Long-context capability is critical for multi-modal foundation models,
especially for long video understanding. We introduce LongVILA, a full-stack
solution for long-context visual-language models by co-designing the algorithm
and system. For model training, we upgrade existing VLMs to support long video
understanding by incorporating two additional stages, i.e., long context
extension and long video supervised fine-tuning. However, training on long
video is computationally and memory intensive. We introduce the long-context
Multi-Modal Sequence Parallelism (MM-SP) system that efficiently parallelizes
long video training and inference, enabling 2M context length training on 256
GPUs without any gradient checkpointing. LongVILA efficiently extends the
number of video frames of VILA from 8 to 2048, achieving 99.8% accuracy on the
6,000-frame (more than 1 million tokens) video needle-in-a-haystack test.
LongVILA-7B demonstrates strong accuracy on 9 popular video benchmarks, e.g.,
65.1% on VideoMME with subtitles. Besides, MM-SP is 2.1x - 5.7x faster than
ring-style sequence parallelism and 1.1x - 1.4x faster than Megatron with
hybrid context and tensor parallelism. Moreover, it seamlessly integrates with
Hugging Face Transformers.
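As a rough illustration of the sequence-parallelism idea behind the 2M-token, 256-GPU figure above (a minimal sketch, not the paper's MM-SP implementation; the `shard_sequence` helper and the even, contiguous split policy are assumptions for illustration), the snippet below shows how a long multimodal token stream could be divided so that each GPU rank stores and processes only its own slice of the context, with cross-shard attention then handled by inter-GPU communication.

```python
def shard_sequence(num_tokens: int, world_size: int, rank: int) -> range:
    """Indices of the contiguous token shard owned by `rank` under an even split."""
    shard_len = -(-num_tokens // world_size)  # ceiling division
    start = rank * shard_len
    return range(start, min(start + shard_len, num_tokens))

# Figures from the abstract: a 2M-token context spread over 256 GPUs leaves
# roughly 8K tokens per rank, small enough to attend over locally while
# cross-shard attention is exchanged between ranks (e.g., ring-style).
num_tokens, world_size = 2_000_000, 256
local_tokens = shard_sequence(num_tokens, world_size, rank=0)
print(len(local_tokens))  # 7813 tokens held by rank 0
```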
DOI: 10.48550/arxiv.2408.10188