AltChart: Enhancing VLM-based Chart Summarization Through Multi-Pretext Tasks
Format: Article
Language: English
Online access: Order full text
Abstract: Chart summarization is a crucial task for blind and visually impaired
individuals, as it is their primary means of accessing and interpreting
graphical data. Crafting high-quality descriptions is challenging because it
requires precise communication of essential details within the chart without
visual perception. Many chart analysis methods, however, produce brief,
unstructured responses that may contain significant hallucinations, affecting
their reliability for blind people. To address these challenges, this work
presents three key contributions: (1) We introduce the AltChart dataset,
comprising 10,000 real chart images, each paired with a comprehensive summary
featuring long-context, semantically rich annotations. (2) We propose a
new method for pretraining Vision-Language Models (VLMs) to learn fine-grained
chart representations through training with multiple pretext tasks, yielding a
performance gain of ${\sim}2.5\%$. (3) We conduct extensive evaluations of
four leading chart summarization models, analyzing how accessible their
descriptions are. Our dataset and code are publicly available on our project
page: https://github.com/moured/AltChart.
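As a rough illustration of contribution (2), the sketch below shows what pretraining with multiple pretext tasks typically looks like: one shared encoder feeds several task-specific heads, and their losses are summed into a joint objective. This is a minimal sketch only; the model class, the choice of tasks (chart-type classification and feature reconstruction), and the loss weights are assumptions for illustration and are not specified in this record.

```python
# Minimal sketch of multi-pretext-task pretraining (PyTorch).
# All names and task choices here are hypothetical; the paper's actual
# pretext tasks and VLM backbone are not described in this record.
import torch
import torch.nn as nn

class MultiPretextModel(nn.Module):
    def __init__(self, embed_dim=256, num_chart_types=8):
        super().__init__()
        # Stand-in vision encoder; a real setup would use a pretrained VLM backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Hypothetical pretext heads: chart-type classification and
        # reconstruction of a pooled feature target.
        self.type_head = nn.Linear(embed_dim, num_chart_types)
        self.recon_head = nn.Linear(embed_dim, embed_dim)

    def forward(self, images):
        feats = self.encoder(images)
        return self.type_head(feats), self.recon_head(feats)

model = MultiPretextModel()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

# Dummy batch standing in for chart images and pretext targets.
images = torch.randn(4, 3, 224, 224)
type_labels = torch.randint(0, 8, (4,))
recon_targets = torch.randn(4, 256)

optimizer.zero_grad()
type_logits, recon = model(images)
# Joint objective: a weighted sum of per-task losses (the 0.5 weight is an assumption).
loss = ce(type_logits, type_labels) + 0.5 * mse(recon, recon_targets)
loss.backward()
optimizer.step()
```

The design point is that all pretext heads share the encoder, so each auxiliary loss shapes the same chart representation that later supports summarization.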
DOI: 10.48550/arxiv.2405.13580