Cycle-Correspondence Loss: Learning Dense View-Invariant Visual Features from Unlabeled and Unordered RGB Images
Format: Article
Language: English
Abstract: Robot manipulation relying on learned object-centric descriptors has become popular in recent years. Visual descriptors can easily describe manipulation task objectives, they can be learned efficiently using self-supervision, and they can encode actuated and even non-rigid objects. However, learning robust, view-invariant keypoints in a self-supervised manner requires a meticulous data collection process involving precise calibration and expert supervision. In this paper we introduce the Cycle-Correspondence Loss (CCL) for view-invariant dense descriptor learning, which adopts the concept of cycle-consistency, enabling a simple data collection pipeline and training on unpaired RGB camera views. The key idea is to autonomously detect valid pixel correspondences: a pixel's predicted match in a new image is used to predict the original pixel back in the original image, and each error term is scaled by the estimated confidence of the correspondence. Our evaluation shows that we outperform other self-supervised RGB-only methods and approach the performance of supervised methods, both for keypoint tracking and for a robot grasping downstream task.
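To make the cycle idea concrete, below is a minimal PyTorch sketch of a confidence-weighted cycle loss in the spirit of CCL. It assumes dense descriptor maps produced by some fully convolutional backbone; the function names (`soft_match`, `cycle_correspondence_loss`), the soft-argmax matching with a `temperature` parameter, and the choice of peak probability as the confidence score are all illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a cycle-correspondence loss; not the paper's code.
import torch
import torch.nn.functional as F

def soft_match(query, desc_map, temperature=0.05):
    """Soft-argmax match of query descriptors against a dense descriptor map.

    query:    (N, C) descriptors of sampled pixels.
    desc_map: (C, H, W) dense descriptor map of the target image.
    Returns expected match coordinates (N, 2) in (x, y) pixels, the expected
    matched descriptors (N, C), and a per-match confidence score (N,).
    """
    C, H, W = desc_map.shape
    flat = desc_map.view(C, H * W)                       # (C, HW)
    sim = query @ flat                                   # (N, HW) similarities
    prob = F.softmax(sim / temperature, dim=-1)          # match distribution

    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    coords = torch.stack([xs.reshape(-1), ys.reshape(-1)], dim=-1)  # (HW, 2)
    expected_xy = prob @ coords                          # soft-argmax location
    matched_desc = prob @ flat.t()                       # expected descriptor
    confidence = prob.max(dim=-1).values                 # peakedness as confidence
    return expected_xy, matched_desc, confidence

def cycle_correspondence_loss(desc_a, desc_b, pixels_a, temperature=0.05):
    """Match pixels from image A into image B, then back into A, and penalize
    the confidence-weighted distance to the starting pixels.

    desc_a, desc_b: (C, H, W) descriptor maps of two unpaired RGB views.
    pixels_a:       (N, 2) long tensor of sampled (x, y) pixels in image A.
    """
    # Descriptors at the sampled pixel locations in image A.
    q = desc_a[:, pixels_a[:, 1], pixels_a[:, 0]].t()    # (N, C)
    # Forward pass A -> B, then backward pass B -> A.
    _, desc_in_b, conf_fwd = soft_match(q, desc_b, temperature)
    xy_back, _, conf_bwd = soft_match(desc_in_b, desc_a, temperature)
    # Scale each error term by the cycle's estimated confidence so invalid
    # correspondences (e.g. occluded or out-of-view pixels) contribute little.
    conf = (conf_fwd * conf_bwd).detach()
    err = (xy_back - pixels_a.float()).norm(dim=-1)
    return (conf * err).mean()
```

A training step would sample random pixels in one view, compute descriptor maps for two unpaired views with the backbone, and backpropagate this loss. Detaching the confidence is a design choice of this sketch: it prevents the network from trivially shrinking the loss by becoming uniformly uncertain rather than by improving its correspondences.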
DOI: 10.48550/arxiv.2406.12441