Integration of Proprioceptive and Visual Position-Information: An Experimentally Supported Model


Detailed Description

Bibliographic Details
Published in: Journal of Neurophysiology, 1999-03, Vol. 81 (3), p. 1355-1364
Main Authors: van Beers, Robert J.; Sittig, Anne C.; Denier van der Gon, Jan J.
Format: Article
Language: English
Online Access: Full text
Description
Abstract: van Beers, Robert J., Anne C. Sittig, and Jan J. Denier van der Gon (Faculty of Design, Engineering and Production, Delft University of Technology, NL-2628 BX Delft, The Netherlands). Integration of proprioceptive and visual position-information: an experimentally supported model. To localize one's hand, i.e., to find out its position with respect to the body, humans may use proprioceptive information, visual information, or both. It is still not known how the CNS combines simultaneous proprioceptive and visual information. In this study, we investigate in what position in a horizontal plane a hand is localized on the basis of simultaneous proprioceptive and visual information and compare this to the positions in which it is localized on the basis of proprioception only and vision only. Seated at a table, subjects matched target positions on the table top with their unseen left hand under the table. The experiment consisted of three series. In each series, the target positions were presented in three conditions: by vision only, by proprioception only, or by both vision and proprioception. In one of the three series, the visual information was veridical. In the other two, it was modified by prisms that displaced the visual field to the left and to the right, respectively. The results show that the mean of the positions indicated in the condition with both vision and proprioception generally lies off the straight line through the means of the other two conditions. In most cases the mean lies on the side predicted by a model describing the integration of multisensory information. According to this model, the visual and proprioceptive information are weighted with direction-dependent weights, the weights being related to the direction-dependent precision of the information in such a way that the available information is used very efficiently. Because the proposed model can also explain the unexpectedly small variable errors in the localization of a seen hand that were reported earlier, there is strong evidence to support this model. The results imply that the CNS has knowledge about the direction-dependent precision of the proprioceptive and visual information.
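The weighting scheme described in the abstract amounts to minimum-variance (precision-weighted) integration of two noisy 2-D position estimates. The Python sketch below is not taken from the paper; it only illustrates that computation under the assumption of independent Gaussian noise in each modality, with made-up means and covariances chosen so that vision is more precise in one direction and proprioception in the other.

```python
import numpy as np

def fuse_estimates(x_vis, cov_vis, x_prop, cov_prop):
    """Minimum-variance fusion of two 2-D position estimates, assuming
    independent Gaussian noise per modality.

    The fused precision is the sum of the individual precision matrices,
    and each estimate is weighted by its own precision, so directions in
    which a modality is more precise receive a larger weight.
    """
    prec_vis = np.linalg.inv(cov_vis)    # precision = inverse covariance
    prec_prop = np.linalg.inv(cov_prop)
    cov_fused = np.linalg.inv(prec_vis + prec_prop)
    x_fused = cov_fused @ (prec_vis @ x_vis + prec_prop @ x_prop)
    return x_fused, cov_fused

# Illustrative (made-up) numbers: vision precise in the left-right (x)
# direction, proprioception precise in the near-far (y) direction.
x_vis = np.array([0.02, 0.00])           # seen hand position (m)
cov_vis = np.diag([0.0001, 0.0009])      # small x-variance, large y-variance
x_prop = np.array([0.00, 0.01])          # felt hand position (m)
cov_prop = np.diag([0.0009, 0.0001])     # large x-variance, small y-variance

x_fused, cov_fused = fuse_estimates(x_vis, cov_vis, x_prop, cov_prop)
print(x_fused)     # pulled toward x_vis in x and toward x_prop in y
print(cov_fused)   # variances smaller than either single-modality estimate
```

Because the weights differ per direction, the fused estimate generally does not lie on the straight line between the visual and proprioceptive estimates, which is the qualitative prediction tested in the experiments summarized above.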
ISSN: 0022-3077 (print); 1522-1598 (online)
DOI: 10.1152/jn.1999.81.3.1355