Zero-Shot Dual-Path Integration Framework for Open-Vocabulary 3D Instance Segmentation
Format: Article
Language: English
Abstract: Open-vocabulary 3D instance segmentation transcends traditional
closed-vocabulary methods by enabling the identification of both previously
seen and unseen objects in real-world scenarios. It leverages a dual-modality
approach, utilizing both 3D point clouds and 2D multi-view images to generate
class-agnostic object mask proposals. Previous efforts predominantly focused on
enhancing 3D mask proposal models; consequently, the information available from
associating 2D observations with 3D geometry was not fully exploited. This bias
towards 3D data, while effective for familiar indoor objects, limits the
system's adaptability to new and varied object types, where 2D models offer
greater utility. Addressing this gap, we introduce the Zero-Shot Dual-Path
Integration Framework, which values the contributions of the 3D and 2D
modalities equally. Our framework comprises three components: a 3D pathway, a
2D pathway, and Dual-Path Integration. The 3D pathway generates spatially
accurate, class-agnostic mask proposals for common indoor objects from 3D point
cloud data using a pre-trained 3D model, while the 2D pathway uses a
pre-trained open-vocabulary instance segmentation model to identify a diverse
array of object proposals from multi-view RGB-D images. In Dual-Path
Integration, our Conditional Integration process, which operates in two stages,
adaptively filters and merges the proposals from both pathways. This process
harmonizes the output proposals to enhance segmentation capability. Our
framework, which uses pre-trained models in a zero-shot manner, is
model-agnostic and demonstrates superior performance on both seen and unseen
data, as evidenced by comprehensive evaluations on the ScanNet200 dataset and
qualitative results on the ARKitScenes dataset.
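Since the abstract only outlines the pipeline, a minimal sketch may help make the data flow concrete: lifting a 2D instance mask onto the shared point cloud via depth and camera pose, then merging proposals from both pathways. The helper names (`lift_2d_mask`, `mask_iou`, `conditional_integration`), the IoU-based coverage test, and the thresholds below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def lift_2d_mask(mask_2d, depth, K, cam_to_world, points_world, tol=0.05):
    """Lift a 2D instance mask to a boolean mask over a 3D point cloud by
    projecting every world point into the view, then testing mask membership
    and depth consistency (a simple occlusion check).
    Shapes: mask_2d/depth (H, W), K (3, 3), cam_to_world (4, 4), points (N, 3).
    """
    h, w = mask_2d.shape
    world_to_cam = np.linalg.inv(cam_to_world)
    pts_h = np.c_[points_world, np.ones(len(points_world))]       # (N, 4)
    pts_cam = (world_to_cam @ pts_h.T).T[:, :3]                   # camera frame
    z = np.clip(pts_cam[:, 2], 1e-6, None)
    uv = (K @ pts_cam.T).T                                        # pixel coords
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    valid = (pts_cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros(len(points_world), dtype=bool)
    idx = np.flatnonzero(valid)
    in_mask = mask_2d[v[idx], u[idx]]
    depth_ok = np.abs(depth[v[idx], u[idx]] - z[idx]) < tol
    out[idx[in_mask & depth_ok]] = True
    return out

def mask_iou(a, b):
    """IoU between two boolean point-level masks over the same point cloud."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union > 0 else 0.0

def conditional_integration(masks_3d, masks_2d, overlap_thresh=0.5):
    """Two-stage merge of class-agnostic proposals from both pathways.

    Stage 1: retain the 3D proposals (spatially accurate for common objects).
    Stage 2: add a lifted 2D proposal only if no retained proposal already
             covers it, so proposals for novel objects survive the merge.
    """
    merged = list(masks_3d)                      # Stage 1: trust the 3D pathway
    for m2d in masks_2d:                         # Stage 2: conditional insertion
        covered = any(mask_iou(m2d, m) >= overlap_thresh for m in merged)
        if not covered:
            merged.append(m2d)
    return merged
```

The paper's conditional test is presumably more elaborate; this sketch only illustrates why the merge favors 3D proposals for familiar indoor objects while still letting 2D proposals introduce novel ones.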
DOI: 10.48550/arxiv.2408.08591