Self-Adaptively Weighted Co-Saliency Detection via Rank Constraint

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2014-09, Vol. 23 (9), p. 4175-4186
Main authors: Cao, Xiaochun; Tao, Zhiqiang; Zhang, Bao; Fu, Huazhu; Feng, Wei
Format: Article
Language: English
Description
Abstract: Co-saliency detection aims at discovering the common salient objects existing in multiple images. Most existing methods combine multiple saliency cues with fixed weights and ignore the intrinsic relationship among these cues. In this paper, we present a general saliency map fusion framework that exploits the relationship among multiple saliency cues and computes self-adaptive weights to generate the final saliency/co-saliency map. Given a group of images containing similar objects, our method first applies several saliency detection algorithms to generate a group of saliency maps for all the images. The feature representations of the co-salient regions should be both similar and consistent, so the matrix formed by joining these feature histograms is approximately low rank. We formalize this consistency criterion as a rank constraint and propose two consistency energies to describe it, based on low-rank matrix approximation and low-rank matrix recovery, respectively. By computing self-adaptive weights from the consistency energy, we highlight the common salient regions. Our method handles more than two input images and also works well for single-image saliency detection. Experimental results on a variety of benchmark data sets demonstrate that the proposed method outperforms the state-of-the-art methods.
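
The weighting idea in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the helper names (foreground_histogram, consistency_energy, fuse_saliency_maps), the color-histogram features, the single SVD-residual energy (a stand-in for the paper's two energies based on low-rank approximation and low-rank recovery), and the softmax-style weighting with a temperature beta are all assumptions made for this example.

```python
# Minimal sketch (not the authors' code): fuse several saliency cues by
# measuring how close to low rank the stacked foreground histograms are.
import numpy as np

def foreground_histogram(image, saliency, bins=16, thresh=0.5):
    """Color histogram of the pixels a saliency map marks as salient.

    Assumes 8-bit RGB images; the threshold 0.5 is an arbitrary choice.
    """
    mask = saliency >= thresh * saliency.max()
    pixels = image[mask]                                  # (N, 3) salient pixels
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-12)

def consistency_energy(histograms, rank=1):
    """SVD residual of a low-rank approximation of the stacked histograms.

    A small residual means the cue responds to visually similar regions
    across the image group, i.e. it is consistent with the co-salient object.
    """
    H = np.stack(histograms, axis=1)                      # (num_bins, num_images)
    s = np.linalg.svd(H, compute_uv=False)
    return float(np.sum(s[rank:] ** 2))                   # energy outside the top-`rank` part

def fuse_saliency_maps(images, cue_maps, beta=10.0):
    """cue_maps[k][i]: saliency map of cue k on image i. Returns fused maps."""
    energies = np.array([
        consistency_energy([foreground_histogram(img, sal)
                            for img, sal in zip(images, maps)])
        for maps in cue_maps
    ])
    # Self-adaptive, per-cue weights: lower consistency energy -> larger weight.
    weights = np.exp(-beta * (energies - energies.min()))
    weights /= weights.sum()
    return [sum(w * maps[i] for w, maps in zip(weights, cue_maps))
            for i in range(len(images))]
```

Under these assumptions, a cue whose foreground histograms nearly span a rank-1 subspace across the image group receives a low consistency energy and therefore a large fusion weight, which is the self-adaptive behavior the abstract describes.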
ISSN: 1057-7149
eISSN: 1941-0042
DOI: 10.1109/TIP.2014.2332399