CLIP-CID: Efficient CLIP Distillation via Cluster-Instance Discrimination
Format: Article
Language: English
Abstract: Contrastive Language-Image Pre-training (CLIP) has achieved excellent performance over a wide range of tasks. However, the effectiveness of CLIP heavily relies on a substantial corpus of pre-training data, resulting in notable consumption of computational resources. Although knowledge distillation has been widely applied in single-modality models, how to efficiently extend knowledge distillation to vision-language foundation models with extensive data remains relatively unexplored. In this paper, we introduce CLIP-CID, a novel distillation mechanism that effectively transfers knowledge from a large vision-language foundation model to a smaller model. We first propose a simple but efficient image semantic balance method to reduce transfer learning bias and improve distillation efficiency; this method filters out 43.7% of image-text pairs from the LAION400M dataset while maintaining superior performance. We then leverage cluster-instance discrimination to facilitate knowledge transfer from the teacher model to the student model, thereby enabling the student model to acquire a holistic semantic comprehension of the pre-training data. Experimental results demonstrate that CLIP-CID achieves state-of-the-art performance on various downstream tasks, including linear probing and zero-shot classification.
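The record gives no implementation details, but the name "cluster-instance discrimination" suggests a combined instance-level and cluster-level contrastive objective. Below is a minimal PyTorch sketch of one plausible reading, assuming a frozen CLIP teacher image encoder and cluster centroids precomputed (e.g. via k-means) over teacher embeddings. The function name `cluster_instance_distill_loss`, the temperature `tau`, and the equal weighting of the two terms are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def cluster_instance_distill_loss(student_emb, teacher_emb, centroids, tau=0.07):
    """Illustrative cluster-instance discrimination loss (not the authors' code).

    student_emb: (B, D) image embeddings from the small student encoder
    teacher_emb: (B, D) embeddings from the frozen CLIP teacher
    centroids:   (K, D) cluster centers precomputed on teacher embeddings
    tau:         softmax temperature (assumed value)
    """
    student = F.normalize(student_emb, dim=-1)
    teacher = F.normalize(teacher_emb, dim=-1)
    centers = F.normalize(centroids, dim=-1)

    # Instance-level term: InfoNCE that pulls each student embedding
    # toward its own teacher embedding, against the rest of the batch.
    logits_inst = student @ teacher.t() / tau            # (B, B)
    targets = torch.arange(student.size(0), device=student.device)
    loss_inst = F.cross_entropy(logits_inst, targets)

    # Cluster-level term: align the student's soft assignment over the
    # K teacher-derived prototypes with the teacher's assignment.
    p_teacher = F.softmax(teacher @ centers.t() / tau, dim=-1)        # (B, K)
    log_p_student = F.log_softmax(student @ centers.t() / tau, dim=-1)
    loss_cluster = F.kl_div(log_p_student, p_teacher, reduction="batchmean")

    # Equal weighting is an assumption; the paper may balance these terms.
    return loss_inst + loss_cluster
```

In training, such a loss would presumably be computed per batch as `cluster_instance_distill_loss(student(imgs), teacher(imgs).detach(), centroids)` and combined with the standard CLIP image-text contrastive objective; consult the paper itself for the actual formulation.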
DOI: 10.48550/arxiv.2408.09441