Multi-Scale and Detail-Enhanced Segment Anything Model for Salient Object Detection
Main authors: | , , , |
Format: | Article |
Language: | English |
Keywords: | |
Abstract: | Salient Object Detection (SOD) aims to identify and segment the most
prominent objects in images. Advanced SOD methods often employ various
Convolutional Neural Networks (CNNs) or Transformers for deep feature
extraction. However, these methods still deliver limited performance and poor
generalization in complex cases. Recently, the Segment Anything Model (SAM) has
been proposed as a visual foundation model with strong segmentation and
generalization capabilities. Nonetheless, SAM requires accurate prompts for
target objects, which are unavailable in SOD. Additionally, SAM does not
exploit multi-scale and multi-level information, nor does it incorporate
fine-grained details. To address these shortcomings, we propose a Multi-scale
and Detail-enhanced SAM (MDSAM) for SOD. Specifically, we first introduce a
Lightweight Multi-Scale Adapter (LMSA), which allows SAM to learn multi-scale
information with very few trainable parameters. Then, we propose a Multi-Level
Fusion Module (MLFM) to comprehensively utilize the multi-level information
from SAM's encoder. Finally, we propose a Detail Enhancement Module (DEM) to
enrich SAM with fine-grained details. Experimental results demonstrate the
superior performance of our model on multiple SOD datasets and its strong
generalization to other segmentation tasks. The source code is released at
https://github.com/BellyBeauty/MDSAM. |
DOI: | 10.48550/arxiv.2408.04326 |
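
To make the adapter idea from the abstract concrete, below is a minimal PyTorch sketch of a bottleneck adapter with parallel multi-scale depthwise convolutions, in the spirit of the LMSA described above. The class name, bottleneck width, and dilation rates are illustrative assumptions, not taken from the paper; the authors' actual implementation is in the repository linked in the abstract.

```python
import torch
import torch.nn as nn


class MultiScaleAdapter(nn.Module):
    """Illustrative multi-scale bottleneck adapter (hypothetical, not the official LMSA).

    Tokens are down-projected, mixed by parallel depthwise convolutions
    with different dilation rates (one receptive field per scale), and
    up-projected back onto the frozen backbone features as a residual.
    """

    def __init__(self, dim: int, bottleneck: int = 32, dilations=(1, 2, 3)):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)          # cheap down-projection
        self.scales = nn.ModuleList([
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3,
                      padding=d, dilation=d, groups=bottleneck)  # depthwise, one per scale
            for d in dilations
        ])
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)            # up-projection back to dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) patch tokens, the layout SAM's ViT blocks use.
        z = self.down(x).permute(0, 3, 1, 2)            # (B, bottleneck, H, W)
        z = sum(conv(z) for conv in self.scales)        # fuse scales by summation
        z = self.act(z).permute(0, 2, 3, 1)             # back to (B, H, W, bottleneck)
        return x + self.up(z)                           # residual adapter update


# Usage sketch: the backbone stays frozen and only the adapters are trained.
encoder_dim = 768                                       # e.g. ViT-B token width
adapter = MultiScaleAdapter(encoder_dim)
tokens = torch.randn(1, 64, 64, encoder_dim)            # dummy SAM-style token map
out = adapter(tokens)                                   # same shape as the input
```

In this pattern the pretrained SAM encoder's weights are frozen and only the small down-project/mix/up-project branches receive gradients, which is how adapter-based tuning keeps the trainable-parameter count very low while still letting the model absorb multi-scale cues.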