The Dynamic Ebbinghaus: motion dynamics greatly enhance the classic contextual size illusion

Detailed Description

Bibliographic Details
Published in: Frontiers in Human Neuroscience 2015-02, Vol. 9, p. 77
Authors: Mruczek, Ryan E B, Blair, Christopher D, Strother, Lars, Caplovitz, Gideon P
Format: Article
Language: English
Online Access: Full text
Description
Abstract: The Ebbinghaus illusion is a classic example of the influence of a contextual surround on the perceived size of an object. Here, we introduce a novel variant of this illusion called the Dynamic Ebbinghaus illusion, in which the size and eccentricity of the surrounding inducers modulate dynamically over time. Under these conditions, the size of the central circle is perceived to change in opposition to the size of the inducers. Interestingly, this illusory effect is relatively weak when participants are fixating a stationary central target, less than half the magnitude of the classic static illusion. However, when the entire stimulus translates in space, requiring a smooth-pursuit eye movement to track the target, the illusory effect is greatly enhanced, almost twice the magnitude of the classic static illusion. A variety of manipulations, including target motion, peripheral viewing, and smooth-pursuit eye movements, all lead to dramatic illusory effects, with the largest effect nearly four times the strength of the classic static illusion. We interpret these results in light of the fact that motion-related manipulations lead to uncertainty in the image-size representation of the target, specifically due to added noise at the level of the retinal input. We propose that the neural circuits integrating visual cues for size perception, such as retinal image size, perceived distance, and various contextual factors, weight each cue according to the level of noise or uncertainty in its neural representation. Thus, more weight is given to the influence of contextual information in deriving perceived size in the presence of stimulus and eye motion. Biologically plausible models of size perception should be able to account for the reweighting of different visual cues under varying levels of certainty.
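
The cue-reweighting account proposed in the abstract corresponds to standard reliability-weighted (inverse-variance) cue combination. The sketch below illustrates the idea under assumptions not made explicit in the paper: Gaussian noise on each cue and optimal inverse-variance weighting. The variable names and numeric values are illustrative, not taken from the study.

    # Minimal sketch of reliability-weighted cue combination for perceived size.
    # Assumptions (not from the paper): each cue carries Gaussian noise, and the
    # visual system averages cues with weights inversely proportional to their
    # variance, so noisier cues count for less.

    def combine_cues(estimates, variances):
        """Inverse-variance (reliability-weighted) average of cue estimates."""
        weights = [1.0 / v for v in variances]
        total = sum(weights)
        weights = [w / total for w in weights]
        perceived = sum(w * s for w, s in zip(weights, estimates))
        return perceived, weights

    # Two hypothetical cues to target size: the retinal image size and the
    # contextual (Ebbinghaus surround) signal.
    retinal_size = 1.0   # retinal estimate: "the target is 1.0 deg"
    context_size = 0.8   # context-driven estimate: "the target is 0.8 deg"

    # Fixation: retinal input is reliable (low variance), so it dominates.
    size_fix, w_fix = combine_cues([retinal_size, context_size], [0.01, 0.04])

    # Motion / smooth pursuit: retinal noise rises, weight shifts to the
    # contextual cue, and the percept moves toward the context-driven estimate.
    size_mov, w_mov = combine_cues([retinal_size, context_size], [0.09, 0.04])

    print(f"fixation: size={size_fix:.3f}, weights={w_fix}")
    print(f"pursuit:  size={size_mov:.3f}, weights={w_mov}")

With reliable retinal input at fixation, the combined percept stays close to the retinal estimate; as retinal noise rises during stimulus or eye motion, weight shifts toward the contextual cue, which is the proposed basis for the enhanced Dynamic Ebbinghaus effect.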
ISSN: 1662-5161
DOI: 10.3389/fnhum.2015.00077