Label Propagation for Zero-shot Classification with Vision-Language Models
Saved in:

| Field | Value |
|---|---|
| Main author | |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Summary: Vision-Language Models (VLMs) have demonstrated impressive performance on zero-shot classification, i.e. classification when provided merely with a list of class names. In this paper, we tackle the case of zero-shot classification in the presence of unlabeled data. We leverage the graph structure of the unlabeled data and introduce ZLaP, a method based on label propagation (LP) that utilizes geodesic distances for classification. We tailor LP to graphs containing both text and image features and further propose an efficient method for performing inductive inference based on a dual solution and a sparsification step. We perform extensive experiments to evaluate the effectiveness of our method on 14 common datasets and show that ZLaP outperforms the latest related works. Code: https://github.com/vladan-stojnic/ZLaP

DOI: 10.48550/arxiv.2404.04072
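For readers unfamiliar with the technique named in the abstract, the following is a minimal sketch of *generic* label propagation on a k-nearest-neighbor similarity graph — not the paper's ZLaP method (which additionally uses geodesic distances, text-and-image graphs, and a dual/sparsified inductive variant). The function name, the RBF similarity, and the toy data are all illustrative assumptions.

```python
import numpy as np

def label_propagation(features, labels, alpha=0.9, k=3, iters=50):
    """Generic label propagation on a kNN graph (illustrative, not ZLaP).

    features: (n, d) array of points.
    labels:   (n,) int array; -1 marks unlabeled points.
    Returns predicted class indices for all n points.
    """
    n = features.shape[0]
    # Pairwise RBF similarities, self-similarity zeroed out.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2)
    np.fill_diagonal(sim, 0.0)
    # Keep only the k strongest edges per node, then symmetrize.
    W = np.zeros_like(sim)
    for i in range(n):
        nn = np.argsort(sim[i])[-k:]
        W[i, nn] = sim[i, nn]
    W = np.maximum(W, W.T)
    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    dinv = 1.0 / np.sqrt(d)
    S = W * dinv[:, None] * dinv[None, :]
    # One-hot seed matrix Y for the labeled points.
    c = labels.max() + 1
    Y = np.zeros((n, c))
    for i, y in enumerate(labels):
        if y >= 0:
            Y[i, y] = 1.0
    # Iterate F <- alpha * S F + (1 - alpha) * Y, then take argmax.
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Toy data: two well-separated clusters with one labeled seed each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))])
y = np.full(10, -1)
y[0], y[5] = 0, 1
print(label_propagation(X, y))  # → [0 0 0 0 0 1 1 1 1 1]
```

Because the seed labels are re-injected via the `(1 - alpha) * Y` term at every step, labels diffuse along graph edges while labeled points stay anchored to their ground truth.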