Distilling Spectral Graph for Object-Context Aware Open-Vocabulary Semantic Segmentation
Saved in:
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Open-Vocabulary Semantic Segmentation (OVSS) has advanced with recent
vision-language models (VLMs), enabling segmentation beyond predefined
categories through various learning schemes. Notably, training-free methods
offer scalable, easily deployable solutions for handling unseen data, a key
goal of OVSS. Yet a critical issue persists: the lack of object-level context
when segmenting complex objects under the arbitrary query prompts that
characterize OVSS. This oversight limits a model's ability to group
semantically consistent elements within an object and map them precisely to
user-defined arbitrary classes. In this work, we introduce a novel approach
that overcomes this limitation by incorporating object-level contextual
knowledge within images. Specifically, our model enhances intra-object
consistency by distilling spectral-driven features from vision foundation
models into the attention mechanism of the visual encoder, enabling
semantically coherent components to form a single object mask. Additionally, we
refine the text embeddings with a zero-shot object-presence likelihood to
ensure accurate alignment with the specific objects represented in the images.
By leveraging object-level contextual knowledge, our approach achieves
state-of-the-art performance with strong generalizability across diverse
datasets. |
---|---|
DOI: | 10.48550/arxiv.2411.17150 |
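
The abstract's core idea, injecting spectral structure from a feature graph into an attention map so patches of the same object attend to one another, can be illustrated with a minimal sketch. The function names, the blending weight `alpha`, and the use of the `k` smallest Laplacian eigenvectors are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def spectral_affinity(features, k=8):
    """Build a patch-level affinity from the low-frequency eigenvectors
    of the feature graph Laplacian (spectral-clustering style)."""
    # Cosine-similarity adjacency between patch features
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = np.clip(f @ f.T, 0.0, None)           # non-negative affinities
    L = np.diag(W.sum(axis=1)) - W            # unnormalized graph Laplacian
    # Eigenvectors of the k smallest eigenvalues encode coarse groupings
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    return U @ U.T                            # spectral affinity among patches

def distill_into_attention(attn, features, alpha=0.5):
    """Blend spectral affinity into an attention map so that
    semantically coherent patches reinforce each other
    (alpha is a hypothetical mixing weight)."""
    S = spectral_affinity(features)
    S = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)  # row-softmax
    return (1 - alpha) * attn + alpha * S     # convex blend keeps rows normalized

# Toy example: 16 patches with 32-dim features, uniform baseline attention
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 32))
attn = np.full((16, 16), 1 / 16)
new_attn = distill_into_attention(attn, feats)
```

Because both the baseline attention rows and the softmaxed spectral affinity rows sum to one, the convex combination remains a valid attention distribution.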