Sal-HMAX: An Enhanced HMAX Model in Conjunction With a Visual Attention Mechanism to Improve Object Recognition Task

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, pp. 154396-154412
Main Authors: Shariatmadar, Zahra Sadat; Faez, Karim; Namin, Akbar Siami
Format: Article
Language: English
Online Access: Full text
Description
Abstract: The Hierarchical Max-pooling model (HMAX) has demonstrated strong performance when integrated with various computer vision algorithms for recognizing objects in images. However, the conventional HMAX model has two main problems: 1) learning the base matrices is computationally expensive, especially at layer S2 (the matching layer), and 2) patches in the standard HMAX model are selected randomly, which yields redundant and uninformative extracted patches. In this paper, a combination of the HMAX model and a selective attention mechanism is proposed to address these drawbacks. Applying a selective attention mechanism filters out unnecessary information and highlights the more important and significant parts of a given image. An attention function is used to increase the matching speed at the S2 layer, since attention restricts processing to patches containing more detail. At the same time, higher precision is expected because distinct patches are extracted in the training phase of the S2 layer. Experimental results show that the proposed model outperforms the conventional HMAX model, achieving a mean accuracy of 93.7% on the ten best-classified categories of the Caltech-101 dataset.
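
The central idea of the abstract, selecting S2 patches at salient image locations instead of at random, can be illustrated with a minimal sketch. The code below is only an assumed illustration: it uses a crude gradient-magnitude proxy in place of the paper's attention mechanism, and all function names and parameters are hypothetical.

# Hypothetical sketch of saliency-guided patch selection for an HMAX-style S2 layer.
# The gradient-magnitude "saliency" is only a stand-in for the attention model
# described in the paper; names and parameters are invented for illustration.
import numpy as np

def saliency_map(image: np.ndarray) -> np.ndarray:
    """Crude saliency proxy: local gradient magnitude of a 2-D array."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def select_salient_patches(image: np.ndarray, patch_size: int = 8, n_patches: int = 50):
    """Extract patches centred on the most salient locations rather than random ones."""
    sal = saliency_map(image)
    half = patch_size // 2
    # Zero out borders where a full patch would not fit.
    sal[:half, :] = 0
    sal[-half:, :] = 0
    sal[:, :half] = 0
    sal[:, -half:] = 0
    # Indices of the n_patches most salient pixels.
    flat = np.argsort(sal, axis=None)[::-1][:n_patches]
    rows, cols = np.unravel_index(flat, sal.shape)
    return [image[r - half:r + half, c - half:c + half] for r, c in zip(rows, cols)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c1_map = rng.random((64, 64))                      # stand-in for a C1 feature map
    patches = select_salient_patches(c1_map, patch_size=8, n_patches=20)
    print(len(patches), patches[0].shape)              # -> 20 (8, 8)

Compared with random sampling, this kind of selection concentrates the S2 dictionary on informative regions, which is the effect the paper attributes to its attention stage.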
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3127928