Rule-based aggregation driven by similar images for visual saliency detection


Bibliographic details
Published in: Applied intelligence (Dordrecht, Netherlands), 2020-06, Vol. 50 (6), p. 1745-1762
Main authors: Lopez-Alanis, Alberto, Lizarraga-Morales, Rocio A., Contreras-Cruz, Marco A., Ayala-Ramirez, Victor, Sanchez-Yanez, Raul E., Trujillo-Romero, Felipe
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Visual saliency detection consists in identifying the relevant visual information in a scene in order to segment it from the background. This paper proposes a rule-based approach to visual saliency detection that uses an image similarity cue to improve detection performance. Our system induces rules for saliency detection and, for a given input image, determines the subset of rules to be used from a set of candidate rules. The proposed approach consists of two main stages: training and testing. First, during the training stage, our system learns an ensemble of rough-set-based rules by combining knowledge extracted from the outputs of four state-of-the-art saliency models. Second, during testing, our system determines the most suitable subset of induced rules for the binary detection of the pixels of a salient object in an image; the choice of the best subset of rules is based on the image similarity cue. Because saliency in the output image is determined in binary form, no post-processing stage is needed, unlike in most saliency approaches. The proposed method is evaluated quantitatively on three challenging databases designed for the saliency detection task. The results of the experiments indicate that the proposed method outperforms the state-of-the-art approaches used for comparison.
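The selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: histogram intersection is assumed as the image similarity cue, and rules are represented as hypothetical callables mapping pixel features to a binary decision; the names `select_rule_subset` and `classify_pixel` are invented for this sketch.

```python
import numpy as np

def image_similarity(hist_a, hist_b):
    # Histogram intersection: a simple, common image-similarity measure
    # (an assumption; the paper's actual similarity cue may differ).
    return np.minimum(hist_a, hist_b).sum()

def select_rule_subset(query_hist, training_images, k=3):
    # Pick the rules induced from the k training images most similar to
    # the query image. `training_images` maps an image id to a pair
    # (histogram, rules), where each rule is a callable features -> bool.
    ranked = sorted(training_images.items(),
                    key=lambda item: image_similarity(query_hist, item[1][0]),
                    reverse=True)
    rules = []
    for _, (_, img_rules) in ranked[:k]:
        rules.extend(img_rules)
    return rules

def classify_pixel(features, rules):
    # Majority vote over the selected rules yields a binary (salient /
    # not salient) decision directly, so no thresholding post-processing
    # step is required on the output map.
    votes = sum(1 for rule in rules if rule(features))
    return votes * 2 > len(rules)
```

Applied per pixel, `classify_pixel` produces the binary saliency map directly, which mirrors the abstract's point that a separate post-processing stage is unnecessary.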
ISSN: 0924-669X
eISSN: 1573-7497
DOI: 10.1007/s10489-019-01582-6