Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks
Main authors: | |
---|---|
Format: | Article |
Language: | English |
Keywords: | |
Online access: | Order full text |
Abstract: We introduce Grounded SAM, which uses Grounding DINO as an open-set object detector in combination with the Segment Anything Model (SAM). This integration enables the detection and segmentation of any region based on arbitrary text inputs and opens the door to connecting various vision models. As shown in Fig. 1, a wide range of vision tasks can be achieved with the versatile Grounded SAM pipeline. For example, an automatic annotation pipeline based solely on input images can be realized by incorporating models such as BLIP and Recognize Anything. Additionally, incorporating Stable Diffusion allows for controllable image editing, while the integration of OSX facilitates promptable 3D human motion analysis. Grounded SAM also shows superior performance on open-vocabulary benchmarks, achieving 48.7 mean AP on the SegInW (Segmentation in the Wild) zero-shot benchmark with the combination of Grounding DINO-Base and SAM-Huge models.
DOI: 10.48550/arxiv.2401.14159
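
The abstract describes a two-stage pipeline: Grounding DINO turns a free-form text prompt into bounding boxes, and SAM then segments the regions those boxes prompt. The snippet below is a minimal sketch of that idea using the Hugging Face transformers ports of both models rather than the official Grounded-SAM repository; the model IDs, thresholds, sample image, and prompt are illustrative assumptions, not taken from the paper.

```python
# Minimal Grounded-SAM-style pipeline: text prompt -> Grounding DINO boxes -> SAM masks.
# Sketch only; relies on the Hugging Face `transformers` ports of both models.
import requests
import torch
from PIL import Image
from transformers import (
    AutoProcessor,
    GroundingDinoForObjectDetection,
    SamModel,
    SamProcessor,
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed sample image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a cat. a remote control."  # Grounding DINO expects lower-case phrases ending with "."

# 1) Open-set detection: Grounding DINO maps the text prompt to bounding boxes.
dino_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-base")
dino = GroundingDinoForObjectDetection.from_pretrained("IDEA-Research/grounding-dino-base")
dino_inputs = dino_processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    dino_outputs = dino(**dino_inputs)
detections = dino_processor.post_process_grounded_object_detection(
    dino_outputs,
    dino_inputs.input_ids,
    box_threshold=0.35,   # assumed thresholds
    text_threshold=0.25,
    target_sizes=[image.size[::-1]],
)[0]
boxes = detections["boxes"].tolist()  # one [x0, y0, x1, y1] box per grounded phrase

# 2) Promptable segmentation: SAM takes the detected boxes as prompts and returns masks.
sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")
sam = SamModel.from_pretrained("facebook/sam-vit-huge")
sam_inputs = sam_processor(image, input_boxes=[boxes], return_tensors="pt")
with torch.no_grad():
    sam_outputs = sam(**sam_inputs)
masks = sam_processor.image_processor.post_process_masks(
    sam_outputs.pred_masks,
    sam_inputs["original_sizes"],
    sam_inputs["reshaped_input_sizes"],
)

print(detections["labels"], [m.shape for m in masks])
```

Downstream modules mentioned in the abstract (BLIP or Recognize Anything to generate the text prompt, Stable Diffusion to inpaint the masked regions) would attach before step 1 or after step 2 of this same box-then-mask flow.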