A Computational Model of Learned Avoidance Behavior in a One-Way Avoidance Experiment


Bibliographic Details
Published in: Adaptive Behavior 2001-01, Vol. 9 (2), p. 91-104
Authors: Johnson, Jeffrey D., Li, Wei, Li, Jinghong, Klopf, A. Harry
Format: Article
Language: English
Online access: Full text
Description
Summary: We present a computational model of learned avoidance behavior in a one-way avoidance experiment. Our model employs the reinforcement learning paradigm and a temporal-difference algorithm to implement both classically conditioned and instrumentally conditioned components. The role of the classically conditioned component is to develop an expectation of future benefit that is a function of the learning system's state and action. Competition among the instrumentally conditioned components determines the overt behavior generated by the learning system. Our model displays, in simulation, the reduced latency of the avoidance behavior over continuing learning trials and the resistance to extinction of the avoidance response. These results are consistent with experimentally observed animal behavior. Our model extends the traditional two-process learning mechanism of Mowrer by explicitly defining the mechanisms of proprioceptive feedback, an internal clock, and generalization over the action space.
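To make the described mechanism concrete, the following is a minimal Python sketch of a temporal-difference learner in a simplified one-way avoidance task. It is not the authors' implementation: the two-sided shuttle box, the shock timing, the one-step Q-learning-style update, and the epsilon-greedy competition among actions are illustrative assumptions, and the sketch omits the paper's proprioceptive feedback, internal clock, and generalization over the action space.

import random

ALPHA = 0.1      # learning rate (assumed)
GAMMA = 0.95     # discount factor (assumed)
EPSILON = 0.1    # exploration rate (assumed)
ACTIONS = ["stay", "cross"]   # hypothetical action set
SHOCK_DELAY = 5  # time steps from warning-signal onset to shock (assumed)

Q = {}  # action values indexed by (state, action); state = (side, time step)

def q(state, action):
    return Q.get((state, action), 0.0)

def choose_action(state):
    # Epsilon-greedy selection stands in for competition among
    # instrumentally conditioned components.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

def run_trial():
    """One avoidance trial; returns the latency of the crossing response."""
    side, t = "shock", 0
    while side == "shock" and t < SHOCK_DELAY:
        state = (side, t)
        action = choose_action(state)
        next_side = "safe" if action == "cross" else "shock"
        # Aversive outcome only if still on the shock side when the shock arrives.
        reward = -1.0 if (next_side == "shock" and t + 1 == SHOCK_DELAY) else 0.0
        next_state = (next_side, t + 1)
        # One-step temporal-difference update; states after the trial ends
        # keep their default value of zero.
        best_next = max(q(next_state, a) for a in ACTIONS)
        td_error = reward + GAMMA * best_next - q(state, action)
        Q[(state, action)] = q(state, action) + ALPHA * td_error
        side, t = next_side, t + 1
    return t  # steps elapsed before crossing (capped at SHOCK_DELAY if shocked)

if __name__ == "__main__":
    latencies = [run_trial() for _ in range(200)]
    print("mean latency, first 20 trials:", sum(latencies[:20]) / 20)
    print("mean latency, last 20 trials :", sum(latencies[-20:]) / 20)

Running the sketch shows the basic effect of the TD update: early trials end in shock, while later trials end with a crossing response before the shock arrives. Reproducing the paper's full results, including the progressive shortening of latency and resistance to extinction, would require the additional classically conditioned expectation and timing mechanisms described in the abstract.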
ISSN: 1059-7123; 1741-2633
DOI: 10.1177/105971230200900205