Adaptive mechanisms of visual motion discrimination, integration, and segregation

Bibliographic Details
Published in: Vision Research (Oxford), 2021-11, Vol. 188, pp. 96-114
Main authors: Peñaloza, Boris; Herzog, Michael H.; Öğmen, Haluk
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Under ecological conditions, the luminance impinging on the retina varies within a dynamic range of 220 dB. Stimulus contrast can also vary drastically within a scene and eye movements leave little time for sampling luminance. Given these fundamental problems, the human brain allocates a significant amount of resources and deploys both structural and functional solutions that work in tandem to compress this range. Here we propose a new dynamic neural model built upon well-established canonical neural mechanisms. The model consists of two feed-forward stages. The first stage encodes the stimulus spatially and normalizes its activity by extracting contrast and discounting the background luminance. These normalized activities allow a second stage to implement a contrast-dependent spatial-integration strategy. We show how the properties of this model can account for adaptive properties of motion discrimination, integration, and segregation.
ISSN: 0042-6989, 1878-5646
DOI: 10.1016/j.visres.2021.07.002
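
The abstract outlines a two-stage feed-forward architecture: a front end that extracts contrast while discounting background luminance, followed by a contrast-dependent spatial-integration stage. The sketch below is a minimal, illustrative reading of that general scheme, not the authors' model: the one-dimensional input, the box-filter background estimate, the divisive form of the normalization, and the mapping from local contrast to pooling radius (including the names stage1_contrast_normalization and stage2_contrast_dependent_integration and the parameters surround_size and max_radius) are all assumptions made for illustration.

```python
import numpy as np


def stage1_contrast_normalization(luminance, surround_size=7, epsilon=1e-6):
    """Stage 1 (sketch): encode local contrast, discounting background luminance.

    A box-filter local mean stands in for the background estimate; dividing the
    deviation from that mean by the mean itself yields a roughly
    luminance-invariant contrast signal (a generic divisive-normalization step,
    not the paper's exact equations).
    """
    kernel = np.ones(surround_size) / surround_size
    background = np.convolve(luminance, kernel, mode="same")
    return (luminance - background) / (background + epsilon)


def stage2_contrast_dependent_integration(contrast, max_radius=15):
    """Stage 2 (sketch): pool over a neighborhood whose size shrinks as local
    contrast grows, so low-contrast regions are integrated broadly while
    high-contrast regions are processed more locally (segregation)."""
    out = np.empty_like(contrast)
    n = len(contrast)
    for i in range(n):
        # Hypothetical mapping from contrast magnitude to pooling radius.
        radius = int(round(max_radius / (1.0 + 10.0 * abs(contrast[i]))))
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out[i] = contrast[lo:hi].mean()
    return out


if __name__ == "__main__":
    # Toy 1-D luminance profile: a low-contrast and a high-contrast step on
    # very different background levels.
    x = np.linspace(0.0, 1.0, 200)
    luminance = 100.0 + 5.0 * (x > 0.3) + 80.0 * (x > 0.7)
    responses = stage2_contrast_dependent_integration(
        stage1_contrast_normalization(luminance))
    print(responses[:5])
```

In this toy version the pooling radius is large where the normalized contrast is low (favoring integration) and small where it is high (favoring segregation), mirroring the contrast-dependent spatial-integration strategy described in the abstract.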