Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Multimodal Large Language Models (MLLMs) have achieved remarkable success in vision understanding, reasoning, and interaction. However, inference computation and memory grow progressively with the generation of output tokens during decoding, directly affecting the efficiency of MLLMs. Existing methods attempt to reduce vision context redundancy to achieve efficient MLLMs. Unfortunately, the efficiency benefits of vision context reduction in the prefill stage gradually diminish during the decoding stage. To address this problem, we propose a dynamic vision-language context sparsification framework, Dynamic-LLaVA, which dynamically reduces the redundancy of the vision context in the prefill stage and decreases the memory and computation overhead of the generated language context during decoding. Dynamic-LLaVA designs a tailored sparsification inference scheme for each inference mode, i.e., prefill and decoding with or without KV cache, to achieve efficient inference of MLLMs. In practice, Dynamic-LLaVA reduces computation consumption by ~75% in the prefill stage. Meanwhile, throughout the entire generation process of MLLMs, Dynamic-LLaVA reduces computation consumption by ~50% when decoding without KV cache and saves ~50% of GPU memory overhead when decoding with KV cache, owing to the vision-language context sparsification. Extensive experiments also demonstrate that Dynamic-LLaVA achieves efficient inference for MLLMs with negligible degradation in understanding and generation ability, or even performance gains, compared to full-context inference baselines. Code is available at https://github.com/Osilly/dynamic_llava .
DOI: 10.48550/arxiv.2412.00876
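As a rough illustration of the vision-language context sparsification the abstract describes, the sketch below prunes vision tokens before the prefill stage and trims generated-language entries from the KV cache during decoding. This is a minimal sketch under assumptions, not the paper's implementation: the norm-based importance scores stand in for whatever predictor Dynamic-LLaVA actually uses, and the function names and keep ratios are illustrative only.

```python
import torch


def sparsify_vision_context(vision_tokens: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep only the highest-scoring vision tokens before prefill.

    vision_tokens: (batch, num_tokens, hidden) projected image features.
    keep_ratio: fraction of tokens retained; ~25% mirrors the reported
    ~75% prefill compute reduction (illustrative assumption).
    """
    b, n, c = vision_tokens.shape
    # Norm-based importance score: a simple proxy, not the paper's predictor.
    scores = vision_tokens.norm(dim=-1)                      # (b, n)
    k = max(1, int(n * keep_ratio))
    keep = scores.topk(k, dim=1).indices.sort(dim=1).values  # preserve token order
    return vision_tokens.gather(1, keep.unsqueeze(-1).expand(b, k, c))


def sparsify_language_kv_cache(keys: torch.Tensor, values: torch.Tensor,
                               keep_ratio: float = 0.5):
    """Drop low-scoring generated-language entries from the KV cache.

    keys, values: (batch, heads, seq_len, head_dim) cached projections.
    """
    b, h, s, d = keys.shape
    # Average per-position key norm across heads as a proxy importance score.
    scores = keys.norm(dim=-1).mean(dim=1)                   # (b, s)
    k = max(1, int(s * keep_ratio))
    keep = scores.topk(k, dim=1).indices.sort(dim=1).values  # (b, k)
    idx = keep[:, None, :, None].expand(b, h, k, d)
    return keys.gather(2, idx), values.gather(2, idx)


if __name__ == "__main__":
    vis = torch.randn(1, 576, 4096)                # e.g. LLaVA-style 576 vision tokens
    print(sparsify_vision_context(vis).shape)      # torch.Size([1, 144, 4096])
    ks = torch.randn(1, 32, 200, 128)
    vs = torch.randn(1, 32, 200, 128)
    ks2, vs2 = sparsify_language_kv_cache(ks, vs)
    print(ks2.shape, vs2.shape)                    # torch.Size([1, 32, 100, 128]) each
```

In this sketch, pruning the vision tokens shrinks the prefill sequence length (and hence attention and FFN cost), while pruning cached keys/values bounds the memory and per-step compute growth during decoding; the actual selection criteria and training procedure are described in the paper and repository linked above.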