Language Embedded Radiance Fields for Zero-Shot Task-Oriented Grasping
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Grasping objects by a specific part is often crucial for safety and for executing downstream tasks. Yet, learning-based grasp planners lack this behavior unless they are trained on specific object part data, making it a significant challenge to scale object diversity. Instead, we propose LERF-TOGO, Language Embedded Radiance Fields for Task-Oriented Grasping of Objects, which uses vision-language models zero-shot to output a grasp distribution over an object given a natural language query. To accomplish this, we first reconstruct a LERF of the scene, which distills CLIP embeddings into a multi-scale 3D language field queryable with text. However, LERF has no sense of objectness, meaning its relevancy outputs often return incomplete activations over an object, which are insufficient for subsequent part queries. LERF-TOGO mitigates this lack of spatial grouping by extracting a 3D object mask via DINO features and then conditionally querying LERF on this mask to obtain a semantic distribution over the object with which to rank grasps from an off-the-shelf grasp planner. We evaluate LERF-TOGO's ability to grasp task-oriented object parts on 31 different physical objects, and find it selects grasps on the correct part in 81% of all trials and grasps successfully in 69%. See the project website at: lerftogo.github.io
DOI: 10.48550/arxiv.2309.07970
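
The abstract sketches a pipeline: distill CLIP embeddings into a 3D language field (LERF), group a 3D object mask from DINO features, query the field for the part conditioned on that mask, and rank grasps from an off-the-shelf planner by the resulting relevancy. The Python sketch below illustrates only the final masking-and-ranking step under assumed inputs; the function name, array shapes, and the nearest-neighbor scoring rule are hypothetical stand-ins, not the authors' released code.

```python
import numpy as np

def rank_grasps_by_part_relevancy(points, object_mask, part_relevancy, grasps):
    """Rank grasp candidates by part relevancy restricted to an object mask.

    points:         (N, 3) scene points sampled from the reconstruction.
    object_mask:    (N,) bool, 3D object mask (e.g. grouped via DINO features).
    part_relevancy: (N,) float, language-field relevancy for the part query
                    (e.g. similarity to "mug handle"); higher is better.
    grasps:         (G, 3) candidate grasp centers from an external planner.
    """
    # Conditional query: part activations only count inside the object mask,
    # which compensates for the language field's lack of objectness.
    masked_rel = np.where(object_mask, part_relevancy, 0.0)

    # Score each grasp by the mean masked relevancy of its nearest points.
    scores = np.empty(len(grasps))
    for i, g in enumerate(grasps):
        dists = np.linalg.norm(points - g, axis=1)
        nearest = np.argsort(dists)[:32]  # 32 neighbors is an arbitrary choice
        scores[i] = masked_rel[nearest].mean()

    order = np.argsort(-scores)  # highest-scoring grasp first
    return grasps[order], scores[order]
```

In the actual system the relevancy values come from querying the LERF with text and the grasp candidates come from an off-the-shelf grasp planner; this sketch assumes both are already available as arrays.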