Robotic-CLIP: Fine-tuning CLIP on Action Data for Robotic Applications
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Vision language models have played a key role in extracting meaningful features for various robotic applications. Among these, Contrastive Language-Image Pretraining (CLIP) is widely used in robotic tasks that require both vision and natural language understanding. However, CLIP was trained solely on static images paired with text prompts and has not yet been fully adapted for robotic tasks involving dynamic actions. In this paper, we introduce Robotic-CLIP to enhance robotic perception capabilities. We first gather and label large-scale action data, and then build our Robotic-CLIP by fine-tuning CLIP on 309,433 videos (~7.4 million frames) of action data using contrastive learning. By leveraging action data, Robotic-CLIP inherits CLIP's strong image performance while gaining the ability to understand actions in robotic contexts. Intensive experiments show that our Robotic-CLIP outperforms other CLIP-based models across various language-driven robotic tasks. Additionally, we demonstrate the practical effectiveness of Robotic-CLIP in real-world grasping applications. |
DOI: | 10.48550/arxiv.2409.17727 |
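The abstract describes fine-tuning CLIP with contrastive learning on frames sampled from action videos paired with text prompts. The snippet below is a minimal, hypothetical sketch of such a fine-tuning step using the Hugging Face `transformers` CLIP implementation; the model checkpoint, prompt wording, learning rate, and `training_step` helper are illustrative assumptions, not the authors' actual Robotic-CLIP training setup.

```python
# Hypothetical sketch: one contrastive fine-tuning step of CLIP on
# (video frame, action prompt) pairs. Not the authors' implementation.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)  # assumed value

def training_step(frames, action_prompts):
    """frames: list of PIL images sampled from action videos;
    action_prompts: matching text descriptions, e.g. "a robot grasps a cup"."""
    inputs = processor(text=action_prompts, images=frames,
                       return_tensors="pt", padding=True)
    # return_loss=True makes CLIPModel compute the symmetric
    # image-text contrastive loss over the batch.
    outputs = model(**inputs, return_loss=True)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```

In this reading, each sampled frame is treated as an image whose positive text is the action description of its source video, so the standard CLIP batch-level contrastive objective can be reused unchanged.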