INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model
Format: | Article |
Language: | eng |
Online Access: | Order Full Text |
Abstract: | With advancements in data availability and computing resources, Multimodal
Large Language Models (MLLMs) have showcased capabilities across various
fields. However, the quadratic complexity of the vision encoder in MLLMs
constrains the resolution of input images. Most current approaches mitigate
this issue by cropping high-resolution images into smaller sub-images, which
are then processed independently by the vision encoder. Despite capturing
sufficient local details, these sub-images lack global context and fail to
interact with one another. To address this limitation, we propose a novel MLLM,
INF-LLaVA, designed for effective high-resolution image perception. INF-LLaVA
incorporates two innovative components. First, we introduce a Dual-perspective
Cropping Module (DCM), which ensures that each sub-image contains continuous
details from a local perspective and comprehensive information from a global
perspective. Second, we introduce a Dual-perspective Enhancement Module (DEM) to
enable the mutual enhancement of global and local features, allowing INF-LLaVA
to effectively process high-resolution images by simultaneously capturing
detailed local information and comprehensive global context. Extensive ablation
studies validate the effectiveness of these components, and experiments on a
diverse set of benchmarks demonstrate that INF-LLaVA outperforms existing
MLLMs. Code and the pretrained model are available at
https://github.com/WeihuangLin/INF-LLaVA. |
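The abstract does not detail how the Dual-perspective Cropping Module (DCM) forms its two sets of sub-images. The sketch below is one plausible reading, not the authors' reference implementation: the local view tiles the image into contiguous sub-images, while the global view samples pixels at a fixed stride so each sub-image spans the full field of view. The function name `dual_perspective_crop` and the grid size are illustrative assumptions.

```python
# Hypothetical sketch of dual-perspective cropping (assumed mechanism,
# not taken from the paper's code).
import torch

def dual_perspective_crop(image: torch.Tensor, grid: int = 2):
    """Split a (C, H, W) image into grid*grid sub-images from two views.

    Local view:  contiguous tiles, preserving fine detail.
    Global view: strided sampling, preserving coarse global layout.
    H and W are assumed to be divisible by `grid`.
    """
    c, h, w = image.shape
    th, tw = h // grid, w // grid  # sub-image height and width

    # Local perspective: contiguous, non-overlapping tiles.
    local = [
        image[:, i * th:(i + 1) * th, j * tw:(j + 1) * tw]
        for i in range(grid) for j in range(grid)
    ]

    # Global perspective: each sub-image takes one pixel from every
    # grid x grid neighborhood, so it covers the whole image coarsely.
    global_ = [
        image[:, i::grid, j::grid]
        for i in range(grid) for j in range(grid)
    ]
    return torch.stack(local), torch.stack(global_)

# Example: a 3x448x448 image yields 4 local and 4 global 3x224x224 crops,
# each at the native input size of a standard CLIP ViT encoder.
img = torch.randn(3, 448, 448)
local_crops, global_crops = dual_perspective_crop(img)
print(local_crops.shape, global_crops.shape)  # [4, 3, 224, 224] each
```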
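Likewise, the abstract states only that the Dual-perspective Enhancement Module (DEM) lets global and local features enhance each other. A minimal sketch of one such mechanism follows, assuming symmetric cross-attention with residual connections; the class name, feature dimensions, and attention layout are assumptions, not the paper's specified design.

```python
# Hypothetical sketch of mutual global/local feature enhancement via
# symmetric cross-attention (assumed mechanism, not from the paper).
import torch
import torch.nn as nn

class DualPerspectiveEnhancement(nn.Module):
    def __init__(self, dim: int = 1024, heads: int = 8):
        super().__init__()
        self.local_from_global = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_from_local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_local = nn.LayerNorm(dim)
        self.norm_global = nn.LayerNorm(dim)

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor):
        # local_feat, global_feat: (B, N, dim) token sequences produced by
        # encoding the local-view and global-view sub-images.
        # Enrich local tokens with global context...
        l, _ = self.local_from_global(local_feat, global_feat, global_feat)
        # ...and global tokens with local detail, symmetrically.
        g, _ = self.global_from_local(global_feat, local_feat, local_feat)
        # Residual connections preserve each stream's original features.
        return self.norm_local(local_feat + l), self.norm_global(global_feat + g)

# Example: mutually enhance two 576-token streams of width 1024.
dem = DualPerspectiveEnhancement()
loc = torch.randn(2, 576, 1024)
glo = torch.randn(2, 576, 1024)
loc_out, glo_out = dem(loc, glo)
print(loc_out.shape, glo_out.shape)  # [2, 576, 1024] each
```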
DOI: | 10.48550/arxiv.2407.16198 |