DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
Saved in:
Main authors: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | We present DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE)
Vision-Language Models that significantly improves upon its predecessor,
DeepSeek-VL, through two major upgrades. For the vision component, we
incorporate a dynamic tiling vision encoding strategy designed for processing
high-resolution images with different aspect ratios. For the language
component, we leverage DeepSeekMoE models with the Multi-head Latent Attention
mechanism, which compresses the Key-Value cache into latent vectors, enabling
efficient inference and high throughput. Trained on an improved vision-language
dataset, DeepSeek-VL2 demonstrates superior capabilities across various tasks,
including but not limited to visual question answering, optical character
recognition, document/table/chart understanding, and visual grounding. Our
model series is composed of three variants: DeepSeek-VL2-Tiny,
DeepSeek-VL2-Small, and DeepSeek-VL2, with 1.0B, 2.8B, and 4.5B activated
parameters respectively. DeepSeek-VL2 achieves competitive or state-of-the-art
performance with similar or fewer activated parameters compared to existing
open-source dense and MoE-based models. Code and pre-trained models are
publicly accessible at https://github.com/deepseek-ai/DeepSeek-VL2. |
---|---|
DOI: | 10.48550/arxiv.2412.10302 |
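
The abstract mentions a dynamic tiling vision encoding strategy for high-resolution images with different aspect ratios. The sketch below illustrates the general idea only; the tile resolution (384), the tile cap, and the grid-selection heuristic are assumptions for illustration, not DeepSeek-VL2's actual configuration (see the paper and repository for the real implementation).

```python
# Illustrative sketch of a dynamic tiling strategy (not the exact DeepSeek-VL2
# implementation). Assumed values: a 384x384 base tile and a cap of 9 local tiles.
from PIL import Image

TILE = 384          # assumed base resolution of the vision encoder
MAX_TILES = 9       # assumed upper bound on local tiles per image

def pick_grid(width: int, height: int) -> tuple[int, int]:
    """Choose a (cols, rows) tile grid whose aspect ratio best matches the image."""
    best, best_err = (1, 1), float("inf")
    for cols in range(1, MAX_TILES + 1):
        for rows in range(1, MAX_TILES + 1):
            if cols * rows > MAX_TILES:
                continue
            err = abs((cols / rows) - (width / height))
            # Prefer the closest aspect ratio; break ties with more tiles (more detail).
            if err < best_err or (err == best_err and cols * rows > best[0] * best[1]):
                best, best_err = (cols, rows), err
    return best

def tile_image(img: Image.Image) -> list[Image.Image]:
    """Return a global thumbnail followed by the local high-resolution tiles."""
    cols, rows = pick_grid(*img.size)
    resized = img.resize((cols * TILE, rows * TILE))
    views = [img.resize((TILE, TILE))]                 # global view
    for r in range(rows):
        for c in range(cols):
            box = (c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE)
            views.append(resized.crop(box))            # local tiles
    return views
```

Each returned view can then be fed to a fixed-resolution vision encoder, so arbitrary aspect ratios are handled without distorting the image into a single square crop.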
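The abstract also states that Multi-head Latent Attention (MLA) compresses the Key-Value cache into latent vectors. The following is a minimal, simplified sketch of that caching idea under assumed dimensions; it omits details of the actual MLA design (e.g., positional-encoding handling) and is not DeepSeek-VL2's implementation.

```python
# Minimal sketch of latent KV caching: store a small latent per token instead of
# full per-head keys/values, and re-expand on demand. Dimensions are assumptions.
import torch
import torch.nn as nn

class LatentKVCache(nn.Module):
    def __init__(self, d_model=1024, n_heads=8, d_head=128, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        self.down_kv = nn.Linear(d_model, d_latent, bias=False)        # compress to latent
        self.up_k = nn.Linear(d_latent, n_heads * d_head, bias=False)  # expand to keys
        self.up_v = nn.Linear(d_latent, n_heads * d_head, bias=False)  # expand to values

    def compress(self, h: torch.Tensor) -> torch.Tensor:
        # h: [batch, seq, d_model] -> latent cache [batch, seq, d_latent].
        # Only this small tensor is kept across decoding steps.
        return self.down_kv(h)

    def expand(self, latent: torch.Tensor):
        # Reconstruct per-head K and V from the cached latents when attention is computed.
        b, s, _ = latent.shape
        k = self.up_k(latent).view(b, s, self.n_heads, self.d_head)
        v = self.up_v(latent).view(b, s, self.n_heads, self.d_head)
        return k, v
```

The efficiency claim in the abstract follows from the cache size: per token, only `d_latent` values are stored instead of `2 * n_heads * d_head`, which shrinks inference memory and helps throughput for long sequences.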