A model for visual flow-field cueing and self-motion estimation

Bibliographic Details
Published in: IEEE Transactions on Systems, Man, and Cybernetics, 1985-05, Vol. SMC-15 (3), pp. 385-389
Main Authors: Zacharias, Greg L., Caglayan, Alper K., Sinacori, John B.
Format: Article
Language: English
Description
Abstract: A computational model for visual flow-field cueing and self-motion estimation is developed and simulated. The model is predicated on the notion that the pilot makes noisy, sampled measurements on the spatially distributed visual flow-field surrounding him, and, on the basis of these measurements, generates estimates of his own linear and angular terrain-relative velocities which optimally satisfy, in a least-squares sense, the visual kinematic flow constraints. A subsidiary but significant output of the model is an "impact time" map, an observer-centered spatially scaled replica of the viewed surface. Simulations are presented to demonstrate the parametric sensitivity and ability to model relevant human visual performance data.
ISSN: 0018-9472
2168-2909
DOI: 10.1109/TSMC.1985.6313373
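
The abstract describes a least-squares estimator of linear and angular self-motion from sampled visual flow measurements. The sketch below is not the paper's model; it is a minimal illustration, under simplifying assumptions, of solving the standard rigid-motion optic-flow ("visual kinematic flow") constraints for translational and angular velocity by least squares, assuming a pinhole projection with unit focal length and known per-sample inverse depth. The function name `estimate_self_motion` and the synthetic data are hypothetical and chosen only for illustration.

```python
import numpy as np

def estimate_self_motion(points, flows, inv_depths):
    """Least-squares estimate of translational velocity T and angular
    velocity w from noisy, sampled optic-flow measurements.

    Illustrative sketch only: assumes unit focal length and known
    per-sample inverse depth, whereas the paper's model treats the
    measurements as noisy samples of the surrounding flow field and
    also recovers an observer-centered "impact time" map.
    """
    A_rows, b_rows = [], []
    for (x, y), (u, v), rho in zip(points, flows, inv_depths):
        # Rigid-motion optic-flow equations (pinhole camera, f = 1):
        # u = rho*(-Tx + x*Tz) + x*y*wx - (1 + x**2)*wy + y*wz
        # v = rho*(-Ty + y*Tz) + (1 + y**2)*wx - x*y*wy - x*wz
        A_rows.append([-rho, 0.0, rho * x, x * y, -(1 + x * x), y])
        A_rows.append([0.0, -rho, rho * y, 1 + y * y, -x * y, -x])
        b_rows.extend([u, v])
    A = np.asarray(A_rows)
    b = np.asarray(b_rows)
    # Solve for [Tx, Ty, Tz, wx, wy, wz] in the least-squares sense.
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params[:3], params[3:]

# Synthetic example: noisy flow samples generated from a known motion.
rng = np.random.default_rng(0)
true_T = np.array([0.1, 0.0, 1.0])   # translational velocity
true_w = np.array([0.0, 0.02, 0.0])  # angular velocity
pts = rng.uniform(-0.5, 0.5, size=(50, 2))
rho = rng.uniform(0.5, 2.0, size=50)  # inverse depths of sample points
flows = []
for (x, y), r in zip(pts, rho):
    u = r * (-true_T[0] + x * true_T[2]) + x*y*true_w[0] - (1 + x*x)*true_w[1] + y*true_w[2]
    v = r * (-true_T[1] + y * true_T[2]) + (1 + y*y)*true_w[0] - x*y*true_w[1] - x*true_w[2]
    flows.append((u + 0.01 * rng.standard_normal(),
                  v + 0.01 * rng.standard_normal()))

T_hat, w_hat = estimate_self_motion(pts, flows, rho)
```

Under these assumptions, a per-sample time-to-impact along the viewing direction could be formed as 1 / (rho * T_hat[2]), which is the kind of observer-centered, spatially scaled "impact time" quantity the abstract mentions, though the paper's actual construction of that map is not reproduced here.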