A multi-agent reinforcement learning model of reputation and cooperation in human groups
Saved in:
Format: Article
Language: English
Online access: Order full text
Abstract: Collective action demands that individuals efficiently coordinate how much, where, and when to cooperate. Laboratory experiments have extensively explored the first part of this process, demonstrating that a variety of social-cognitive mechanisms influence how much individuals choose to invest in group efforts. However, experimental research has been unable to shed light on how social-cognitive mechanisms contribute to the where and when of collective action. We build and test a computational model of human behavior in Clean Up, a social dilemma task popular in multi-agent reinforcement learning research. We show that human groups effectively cooperate in Clean Up when they can identify group members and track reputations over time, but fail to organize under conditions of anonymity. A multi-agent reinforcement learning model of reputation demonstrates the same difference in cooperation under conditions of identifiability and anonymity. In addition, the model accurately predicts spatial and temporal patterns of group behavior: in this public goods dilemma, the intrinsic motivation for reputation catalyzes the development of a non-territorial, turn-taking strategy to coordinate collective action.
DOI: 10.48550/arxiv.2103.04982