A Computational Model of Attention-Guided Visual Learning in a High-Performance Computing Software System

Bibliographic Details
Published in: Science, Engineering and Technology, 2024-12, Vol. 5 (1)
Main authors: Ahmed, Alice; Hossain, Md. Tanim
Format: Article
Language: English
Online access: Full text
Description
Summary: This research investigates transformer architectures in high-performance computing (HPC) software systems for attention-guided visual learning (AGVL). The study examines how environmental factors and non-contextual stimuli affect cognitive control, and shows how attention increases responses to attended stimuli, thereby normalizing activity across the population. Transformer blocks exploit parallelism and use less localized attention than recurrent or convolutional models. The study investigates transformer topologies for enhancing language modeling, focusing on attention-guided learning and attention-modulated Hebbian plasticity. The model includes an all-attention layer with embedded input vectors, non-contextual vectors carrying generic task-relevant information, and self-attention and feedforward layers. The work employs relative two-dimensional positional encoding to address the challenge of encoding two-dimensional data such as photographs. The feature-similarity gain model proposes that attention multiplicatively strengthens neuronal responses according to how similar their feature tuning is to the attended input. The attention-guided learning approach rewards learning with a neural attentional response gain, which the network adjusts via gradient descent to reach the target outputs. The study found that supervised error backpropagation and the attention-modulated Hebbian rule outperformed the weight-gain rule on MNIST, although their attentional focus differed.
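
As a concrete illustration of the feature-similarity gain model described in the summary, the following is a minimal NumPy sketch. It assumes a gain of the form 1 + strength * similarity, where similarity is the cosine similarity between each neuron's preferred feature and the attended feature, followed by divisive normalization across the population; the function name, array shapes, and parameter values are illustrative assumptions, not the authors' implementation.

import numpy as np

def feature_similarity_gain(responses, tuning, attended, gain_strength=0.5):
    # Hypothetical multiplicative attentional gain: each neuron's response is
    # scaled by 1 + gain_strength * (cosine similarity between its preferred
    # feature vector and the attended feature), then divisively normalized
    # across the population.
    sim = tuning @ attended / (
        np.linalg.norm(tuning, axis=1) * np.linalg.norm(attended) + 1e-12
    )
    gain = 1.0 + gain_strength * sim
    modulated = gain * responses
    return modulated / (modulated.sum() + 1e-12)

rng = np.random.default_rng(0)
responses = rng.uniform(0.1, 1.0, size=32)   # baseline population responses
tuning = rng.normal(size=(32, 8))            # one preferred-feature vector per neuron
attended = rng.normal(size=8)                # feature vector of the attended stimulus
print(feature_similarity_gain(responses, tuning, attended)[:5])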
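
Likewise, a minimal sketch of an attention-modulated Hebbian update: the standard Hebbian product of pre- and post-synaptic activity is scaled per unit by an attentional gain before the weights are updated. The row-wise normalization step and all names here are assumptions for the sketch, not the exact rule evaluated on MNIST in the paper.

import numpy as np

def attention_modulated_hebbian(W, pre, post, attn_gain, lr=0.01):
    # Hypothetical rule: the Hebbian outer product of pre- and post-synaptic
    # activity is scaled per postsynaptic unit by an attentional gain, so the
    # weights of attended (amplified) units change the most. The row-wise
    # normalization is only there to keep the toy update stable.
    dW = lr * (attn_gain * post)[:, None] * pre[None, :]
    W_new = W + dW
    return W_new / (np.linalg.norm(W_new, axis=1, keepdims=True) + 1e-12)

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(10, 784))     # e.g. 10 output units x 28*28 pixels
pre = rng.uniform(size=784)                   # one flattened MNIST-like input
post = W @ pre                                # postsynaptic activity
attn_gain = 1.0 + 0.5 * rng.uniform(size=10)  # attentional gain per output unit
W = attention_modulated_hebbian(W, pre, post, attn_gain)
print(W.shape)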
ISSN: 2831-1043, 2744-2527
DOI: 10.54327/set2025/v5.i1.245