Contrastive Touch-to-Touch Pretraining
Format: Article
Language: English
Abstract: Today's tactile sensors have a variety of different designs, making it
challenging to develop general-purpose methods for processing touch signals. In
this paper, we learn a unified representation that captures the shared
information between different tactile sensors. Unlike current approaches that
focus on reconstruction or task-specific supervision, we leverage contrastive
learning to integrate tactile signals from two different sensors into a shared
embedding space, using a dataset in which the same objects are probed with
multiple sensors. We apply this approach to paired touch signals from GelSlim
and Soft Bubble sensors. We show that our learned features provide strong
pretraining for downstream pose estimation and classification tasks. We also
show that our embedding enables models trained using one touch sensor to be
deployed using another without additional training. Project details can be
found at https://www.mmintlab.com/research/cttp/.
DOI: 10.48550/arxiv.2410.11834
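The contrastive scheme the abstract describes — pulling embeddings of the same touch from two different sensors together while pushing apart embeddings of different touches — can be illustrated with a symmetric InfoNCE loss over paired embeddings. This is a minimal NumPy sketch, not the paper's implementation: the encoder architectures, loss details, and temperature are assumptions, and the sensor names in the comments merely echo the abstract.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss over a batch of paired touch embeddings.

    z_a: (N, D) embeddings from sensor A (e.g. GelSlim).
    z_b: (N, D) embeddings from sensor B (e.g. Soft Bubble) of the SAME touches.
    Row i of z_a and row i of z_b form the positive pair; every other row
    in the batch serves as a negative.
    """
    # L2-normalize so the dot product is a cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature      # (N, N); positives on the diagonal
    labels = np.arange(len(z_a))

    def xent(l):
        # Row-wise softmax cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the A->B and B->A directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy check: correctly paired embeddings should score a lower loss than
# randomly mismatched pairings.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned_loss = info_nce(z, z)
shuffled_loss = info_nce(z, z[rng.permutation(8)])
```

Once both encoders are trained against such a loss, the shared embedding space is what allows a downstream model trained on one sensor's embeddings to be deployed on the other's, as the abstract claims.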