VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: In this paper, we introduce an open-source Korean-English vision-language model (VLM), VARCO-VISION. We incorporate a step-by-step training strategy that allows a model to learn both linguistic and visual information while preserving the backbone model's knowledge. Compared to models of similar size, our model demonstrates outstanding performance in diverse settings requiring bilingual image-text understanding and generation abilities. VARCO-VISION is also capable of grounding, referring, and OCR, expanding its usage and potential applications in real-world scenarios. In addition to the model, we release five Korean evaluation datasets, including four closed-set and one open-set benchmark. We anticipate that our milestone will broaden the opportunities for AI researchers aiming to train VLMs. VARCO-VISION is available at https://huggingface.co/NCSOFT/VARCO-VISION-14B.
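Since the abstract points to a released checkpoint, a minimal loading sketch may help readers try the model. It assumes the repository is compatible with Hugging Face transformers' LLaVA-OneVision classes; the actual model card may prescribe different classes or a dedicated codebase, and the image URL and Korean prompt below are placeholders for illustration only.

```python
# Minimal usage sketch for the released checkpoint (assumptions noted above).
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

model_id = "NCSOFT/VARCO-VISION-14B"

# Assumption: the repo ships processor/model configs loadable by these classes.
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Bilingual prompt: the model targets Korean-English image-text tasks.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "이미지를 설명해 주세요. (Describe the image.)"},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

# Placeholder image URL; substitute any local or remote image.
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```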
DOI: 10.48550/arxiv.2411.19103