Visuo-Tactile Keypoint Correspondences for Object Manipulation
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: This paper presents a novel manipulation strategy that uses keypoint correspondences extracted from visuo-tactile sensor images to facilitate precise object manipulation. Our approach uses visuo-tactile feedback to guide the robot's actions for accurate object grasping and placement, eliminating the need for post-grasp adjustments and extensive training. This method improves deployment efficiency, addressing the challenges of manipulation tasks in environments where object locations are not predefined. We validate the effectiveness of our strategy through experiments demonstrating the extraction of keypoint correspondences and their application to real-world tasks such as block alignment and gear insertion, which require millimeter-level precision. The results show an average error margin significantly lower than that of traditional vision-based methods, which is sufficient to achieve the target tasks.
DOI: 10.48550/arxiv.2405.14515
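The record does not specify how the keypoint correspondences between visuo-tactile sensor images are actually computed, so the sketch below is only an illustrative assumption, not the paper's method: it matches ORB features between two tactile images with OpenCV and averages the keypoint displacement as a crude alignment cue. The file names, the ORB detector, and the displacement heuristic are all placeholders.

```python
import cv2
import numpy as np

# Placeholder paths for two visuo-tactile sensor images (assumed, not from the paper)
img_ref = cv2.imread("tactile_reference.png", cv2.IMREAD_GRAYSCALE)
img_cur = cv2.imread("tactile_current.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints with ORB, a generic stand-in for whatever
# feature extractor the paper uses on tactile images
orb = cv2.ORB_create(nfeatures=500)
kp_ref, des_ref = orb.detectAndCompute(img_ref, None)
kp_cur, des_cur = orb.detectAndCompute(img_cur, None)

# Brute-force Hamming matching with cross-checking; keep the 50 best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_cur), key=lambda m: m.distance)[:50]

# Average pixel displacement between corresponding keypoints; a downstream
# controller could map such a signal to a corrective end-effector motion
disp = np.mean(
    [np.subtract(kp_cur[m.trainIdx].pt, kp_ref[m.queryIdx].pt) for m in matches],
    axis=0,
)
print("mean keypoint displacement (px):", disp)
```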