SynthVLM: High-Efficiency and High-Quality Synthetic Data for Vision Language Models
Main Author(s): | , , , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | With the rise of web images, managing and understanding large-scale image datasets has become increasingly important. Vision Large Language Models (VLLMs) have recently emerged thanks to their robust vision-understanding capabilities. However, training these models requires vast amounts of data, posing challenges to efficiency, effectiveness, data quality, and privacy. In this paper, we introduce SynthVLM, a novel data synthesis pipeline for VLLMs. Unlike existing methods that generate captions from images, SynthVLM employs advanced diffusion models and high-quality captions to automatically generate and select high-resolution images from captions, creating precisely aligned image-text pairs. Leveraging these pairs, we achieve state-of-the-art (SoTA) performance on various vision question answering tasks while maintaining high alignment quality and preserving advanced language abilities. Moreover, SynthVLM surpasses traditional GPT-4 Vision-based caption generation methods in performance while significantly reducing computational overhead. Crucially, because our method relies on purely generated data, it preserves privacy, achieving SoTA performance with just 100k data points (only 18% of the official dataset size). |
DOI: | 10.48550/arxiv.2407.20756 |
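
The generate-then-select pipeline described in the abstract (synthesize candidate images from a caption with a diffusion model, then keep only the best-aligned image-text pair) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the specific models (Stable Diffusion XL, CLIP ViT-L/14) and the selection-by-CLIP-score heuristic are choices made here for the sketch, since the abstract only says "advanced diffusion models" and "generate and select".

```python
# Hypothetical sketch of caption-to-image synthesis with alignment-based
# selection. Model choices and the CLIP-score selection rule are assumptions,
# not details taken from the abstract.
import torch
from diffusers import StableDiffusionXLPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image generator: produces candidate images for each caption.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# CLIP scores image-caption alignment; used here to keep the best pair.
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def synthesize_pair(caption: str, n_candidates: int = 4):
    """Generate candidate images for a caption; return the best-aligned one."""
    images = pipe(caption, num_images_per_prompt=n_candidates).images
    inputs = proc(
        text=[caption], images=images, return_tensors="pt", padding=True
    ).to(device)
    with torch.no_grad():
        # logits_per_image has shape (n_candidates, 1): one alignment
        # score per candidate image against the single caption.
        scores = clip(**inputs).logits_per_image.squeeze(-1)
    best = int(scores.argmax())
    return images[best], float(scores[best])

caption = "a red bicycle leaning against a brick wall"  # example caption
image, score = synthesize_pair(caption)
image.save("pair_000.png")
```

In a full pipeline one would also discard pairs whose alignment score falls below a threshold, so that only precisely aligned image-text pairs enter the training set, matching the "generate and select" step the abstract describes.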