Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2023-02, Vol. 33, No. 2, pp. 743-755
Authors: Zhou, Huajun, Chen, Peijia, Yang, Lingxiao, Xie, Xiaohua, Lai, Jianhuang
Format: Article
Language: English
Description
Abstract: This paper addresses the Unsupervised Salient Object Detection (USOD) problem. We propose a two-stage Activation-to-Saliency (A2S) framework that effectively extracts saliency cues to train a robust saliency detector; notably, the method requires no manual annotation at any point in the process. In the first stage, we transform a network pre-trained without supervision so that it aggregates multi-level features into a single activation map, and we propose an Adaptive Decision Boundary (ADB) to assist the training of the transformed network. Moreover, a new loss function is proposed to facilitate the generation of high-quality pseudo labels. In the second stage, a self-rectification learning strategy is developed to train a saliency detector while refining the pseudo labels online. In addition, we construct a lightweight saliency detector with two Residual Attention Modules (RAMs) to learn robust saliency information. Extensive experiments on several SOD benchmarks show that our framework significantly outperforms existing USOD methods. Moreover, training the framework on 3,000 images takes about 1 hour, over 10 times faster than previous state-of-the-art methods. Code is available at https://github.com/moothes/A2S-USOD .
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2022.3203595
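
The abstract above describes the first A2S stage only at a high level: multi-level features are fused into a single activation map, and an Adaptive Decision Boundary separates salient from non-salient pixels to form pseudo labels. The snippet below is a rough, hypothetical sketch of that idea only: it thresholds each activation map at its per-image mean and masks out pixels near the boundary. The function name `adaptive_pseudo_labels`, the mean-based threshold, and the `margin` parameter are illustrative assumptions, not the paper's learned ADB; see the linked repository for the actual implementation.

```python
import torch

def adaptive_pseudo_labels(activation: torch.Tensor, margin: float = 0.1):
    """Binarize activation maps into foreground/background pseudo labels.

    activation: (B, 1, H, W) aggregated multi-level activations.
    Returns binary labels plus a confidence mask that excludes pixels
    lying within `margin` of the per-image boundary.
    """
    b = activation.size(0)
    # Per-image boundary: the mean activation acts as a simple adaptive
    # threshold (the paper's ADB is learned jointly with the network).
    boundary = activation.view(b, -1).mean(dim=1).view(b, 1, 1, 1)
    labels = (activation > boundary).float()
    # Pixels too close to the boundary are ambiguous; a second-stage
    # detector can ignore them rather than fit unreliable targets.
    confident = (activation - boundary).abs() > margin
    return labels, confident

if __name__ == "__main__":
    act = torch.rand(2, 1, 64, 64)  # stand-in activation maps
    labels, mask = adaptive_pseudo_labels(act)
    print(labels.shape, mask.float().mean().item())
```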