Explaining CLIP's performance disparities on data from blind/low vision users
Format: Article
Language: English
Online access: Order full text
Abstract: Large multi-modal models (LMMs) hold the potential to usher in a new era of automated visual assistance for people who are blind or low vision (BLV). Yet, these models have not been systematically evaluated on data captured by BLV users. We address this by empirically assessing CLIP, a widely-used LMM likely to underpin many assistive technologies. Testing 25 CLIP variants in a zero-shot classification task, we find that their accuracy is 15 percentage points lower on average for images captured by BLV users than web-crawled images. This disparity stems from CLIP's sensitivities to 1) image content (e.g. not recognizing disability objects as well as other objects); 2) image quality (e.g. not being robust to lighting variation); and 3) text content (e.g. not recognizing objects described by tactile adjectives as well as visual ones). We delve deeper with a textual analysis of three common pre-training datasets: LAION-400M, LAION-2B and DataComp-1B, showing that disability content is rarely mentioned. We then provide three examples that illustrate how the performance disparities extend to three downstream models underpinned by CLIP: OWL-ViT, CLIPSeg and DALL-E2. We find that few-shot learning with as few as 5 images can mitigate CLIP's quality-of-service disparities for BLV users in some scenarios, which we discuss alongside a set of other possible mitigations.
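The evaluation described in the abstract is zero-shot classification: candidate object names are wrapped in text prompts, CLIP embeds the image and the prompts, and the label with the highest image-text similarity is predicted. The sketch below shows that protocol for a single image using the Hugging Face transformers API; the checkpoint, label set, and file name are illustrative assumptions rather than the paper's actual configuration, which spans 25 CLIP variants.

```python
# Illustrative sketch of CLIP zero-shot classification (not the paper's exact setup).
# The checkpoint, candidate labels, and image file name are placeholder assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

checkpoint = "openai/clip-vit-base-patch32"  # one of many possible CLIP variants
model = CLIPModel.from_pretrained(checkpoint).eval()
processor = CLIPProcessor.from_pretrained(checkpoint)

# Candidate labels, including a disability object and a tactile description.
labels = [
    "a photo of a white cane",
    "a photo of an umbrella",
    "a photo of a fluffy blanket",
]
image = Image.open("blv_capture.jpg").convert("RGB")  # hypothetical BLV-captured photo

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores, shape [1, 3]
probs = logits.softmax(dim=-1).squeeze(0)

for label, p in zip(labels, probs.tolist()):
    print(f"{p:.3f}  {label}")
```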
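The abstract also reports that few-shot learning with as few as 5 images can narrow the gap, but it does not specify the adaptation method. The sketch below uses one common recipe, a linear probe (logistic regression) fit on frozen CLIP image embeddings, purely as an illustration; the checkpoint, file names, and class labels are hypothetical placeholders.

```python
# Illustrative sketch of a 5-shot linear probe on frozen CLIP image embeddings
# (one plausible mitigation; the paper's exact few-shot method is not given in the abstract).
# All file names and class labels are hypothetical placeholders.
import torch
from PIL import Image
from sklearn.linear_model import LogisticRegression
from transformers import CLIPModel, CLIPProcessor

checkpoint = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(checkpoint).eval()
processor = CLIPProcessor.from_pretrained(checkpoint)

def embed(paths):
    """Encode image files into L2-normalised CLIP image embeddings."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1).numpy()

# ~5 labelled support images per class, captured by the user (hypothetical paths).
support_paths = [f"cane_{i}.jpg" for i in range(5)] + [f"pillbox_{i}.jpg" for i in range(5)]
support_labels = ["white cane"] * 5 + ["pill box"] * 5

# Fit a lightweight classifier on the frozen embeddings.
probe = LogisticRegression(max_iter=1000).fit(embed(support_paths), support_labels)

# Classify a new photo captured by a BLV user.
print(probe.predict(embed(["new_blv_photo.jpg"]))[0])
```

Because the encoder stays frozen, only a small classifier is fit, which is why a handful of user-captured photos per object can be enough in some scenarios, as the abstract reports.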
DOI: 10.48550/arxiv.2311.17315