BlueLM-V-3B: Algorithm and System Co-Design for Multimodal Large Language Models on Mobile Devices
Format: Article
Language: English
Abstract: The emergence and growing popularity of multimodal large language models (MLLMs) have significant potential to enhance various aspects of daily life, from improving communication to facilitating learning and problem-solving. Mobile phones, as essential daily companions, represent the most effective and accessible deployment platform for MLLMs, enabling seamless integration into everyday tasks. However, deploying MLLMs on mobile phones presents challenges due to limitations in memory size and computational capability, making it difficult to achieve smooth and real-time processing without extensive optimization. In this paper, we present BlueLM-V-3B, an algorithm and system co-design approach specifically tailored for the efficient deployment of MLLMs on mobile platforms. Specifically, we redesign the dynamic resolution scheme adopted by mainstream MLLMs and implement system optimization for hardware-aware deployment to optimize model inference on mobile phones. BlueLM-V-3B boasts the following key highlights: (1) Small Size: BlueLM-V-3B features a language model with 2.7B parameters and a vision encoder with 400M parameters. (2) Fast Speed: BlueLM-V-3B achieves a generation speed of 24.4 tokens/s on the MediaTek Dimensity 9300 processor with 4-bit LLM weight quantization. (3) Strong Performance: BlueLM-V-3B has attained the highest average score of 66.1 on the OpenCompass benchmark among models with ≤ 4B parameters, surpassing a series of models with much larger parameter sizes (e.g., MiniCPM-V-2.6, InternVL2-8B).
DOI: 10.48550/arxiv.2411.10640
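The abstract's "4-bit LLM weight quantization" can be illustrated with a minimal sketch of symmetric per-row int4 quantization of a weight matrix. This is an assumption-laden illustration of the general technique, not the paper's actual scheme; the function names and the per-row scale grouping are hypothetical.

```python
import numpy as np

def quantize_int4(w: np.ndarray):
    """Symmetric per-row 4-bit quantization: map each row's floats
    to signed integers in [-8, 7] with one float scale per row.
    Illustrative only; not BlueLM-V-3B's actual quantizer."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale = np.where(scale == 0.0, 1.0, scale)  # guard all-zero rows
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from 4-bit codes and scales."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16)).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
# Rounding error per weight is at most half a quantization step (scale / 2).
print(float(np.abs(w - w_hat).max()))
```

Storing the int4 codes plus one scale per row cuts weight memory roughly 4x versus fp16, which is the kind of saving that makes a 3B-parameter model fit mobile memory budgets.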