Configurable Embodied Data Generation for Class-Agnostic RGB-D Video Segmentation
Abstract: This paper presents a method for generating large-scale datasets to improve class-agnostic video segmentation across robots with different form factors. Specifically, we consider the question of whether video segmentation models trained on generic segmentation data could be more effective for particular robot platforms if robot embodiment is factored into the data generation process. To answer this question, a pipeline is formulated for using 3D reconstructions (e.g. from HM3DSem) to generate segmented videos that are configurable based on a robot's embodiment (e.g. sensor type, sensor placement, and illumination source). A resulting massive RGB-D video panoptic segmentation dataset (MVPd) is introduced for extensive benchmarking with foundation and video segmentation models, as well as to support embodiment-focused research in video segmentation. Our experimental findings demonstrate that using MVPd for finetuning can lead to performance improvements when transferring foundation models to certain robot embodiments, such as specific camera placements. These experiments also show that using 3D modalities (depth images and camera pose) can lead to improvements in video segmentation accuracy and consistency. The project webpage is available at https://topipari.com/projects/MVPd
DOI: 10.48550/arxiv.2410.12995
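
The abstract describes the embodiment-conditioned generation pipeline only at a high level. As a minimal illustrative sketch of the idea (the paper's actual configuration schema and rendering API are not given in this record; `EmbodimentConfig`, `plan_trajectory_views`, and all field names below are hypothetical), an embodiment could be captured as a small configuration object that parameterizes how frames are sampled from a 3D reconstruction such as an HM3DSem scene:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class EmbodimentConfig:
    """Hypothetical description of a robot embodiment for data generation.

    Field names are illustrative only; the MVPd pipeline's real configuration
    schema is not specified in this record.
    """
    sensor_type: str = "rgbd"                 # e.g. "rgb" or "rgbd"
    camera_height_m: float = 1.2              # sensor placement above the floor
    camera_tilt_deg: float = 0.0              # downward tilt of the sensor
    image_size: Tuple[int, int] = (640, 480)  # (width, height) in pixels
    illumination: str = "ambient"             # e.g. "ambient" or "onboard"


def plan_trajectory_views(config: EmbodimentConfig,
                          waypoints: List[Tuple[float, float]]) -> List[dict]:
    """Turn a 2D floor-plan trajectory into per-frame camera specifications.

    Each returned dict could be handed to a renderer operating on a 3D
    reconstruction to produce the RGB, depth, and panoptic label images
    for one video frame.
    """
    views = []
    for x, y in waypoints:
        views.append({
            "position": (x, y, config.camera_height_m),
            "tilt_deg": config.camera_tilt_deg,
            "sensor_type": config.sensor_type,
            "image_size": config.image_size,
            "illumination": config.illumination,
        })
    return views


if __name__ == "__main__":
    # A low-mounted robot with a slight downward camera tilt.
    low_robot = EmbodimentConfig(camera_height_m=0.4, camera_tilt_deg=-10.0)
    frames = plan_trajectory_views(low_robot, [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)])
    print(frames[0])
```

The point of such a sketch is only to show where embodiment enters the loop: the same scene and trajectory yield different videos once sensor type, placement, or illumination changes, which is what the paper's finetuning experiments on specific camera placements exploit.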