An adaptive exploration mechanism for Q-learning in spatial public goods games
Published in: Chaos, Solitons and Fractals, 2024-12, Vol. 189, p. 115705, Article 115705
Main authors: , , ,
Format: Article
Language: eng
Online access: Full text
Abstract: The Q-learning algorithm has been widely applied to investigate the emergence of cooperation in social dilemmas. Although ϵ-greedy is the most common exploration strategy in Q-learning, mechanisms that adjust exploration as the game environment changes have not been thoroughly investigated. To stay closer to reality, this paper proposes an environment-adaptive exploration-based Q-learning algorithm. We apply the registration concept from image processing to characterize agents' sensitivity to changes in their surrounding environment, yielding a local stimulation; additionally, we compute the advantage difference between an agent and the global environment to obtain a global stimulation. Simulation results on the public goods game show that the level of cooperation increases, and the fraction of exploration consequently decreases, when agents attend more to the local environment. However, the effect of the basic exploration rate on cooperation is not uniform: when the enhancement factor is low, increasing the exploration rate promotes cooperation, whereas when the enhancement factor is high, increasing the exploration rate reduces cooperation. Because the basic exploration rate directly affects the fraction of exploration, raising it stably increases the agents' fraction of exploration. The memory strength parameter λ is positively correlated with the level of cooperation: increasing λ raises cooperation across the board. These evolutionary dynamics may enrich the understanding of cooperation in complex systems.
Highlights:
• Integrate adaptive exploration into traditional Q-learning for evolutionary games.
• Perceive changes in the local environment using image registration concepts.
• The agent's strategy is influenced by both the global and local environment.
• Agents that are more attentive to their surroundings can promote cooperation.
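To make the described mechanism concrete, below is a minimal sketch of how an environment-adaptive ϵ-greedy rule might be wired into Q-learning on a lattice public goods game. The lattice size, the weight `w_local`, the λ-smoothed stimulus, and the specific formulas for the local stimulation (a crude "difference image" of neighbor strategy changes, in the spirit of registration) and the global stimulation (payoff shortfall relative to the population mean) are illustrative assumptions; the paper's exact definitions are not reproduced here.

```python
import numpy as np

# Illustrative sketch only: stimulus formulas, parameter names, and the
# stimulus-to-exploration mapping are assumptions, not the paper's definitions.
L = 20                     # lattice side length (assumption)
r = 4.0                    # PGG enhancement factor
epsilon0 = 0.02            # basic exploration rate
lam = 0.9                  # memory strength lambda (stimulus smoothing)
w_local = 0.7              # weight of local vs. global stimulation (assumption)
alpha, gamma = 0.1, 0.9    # Q-learning step size and discount

rng = np.random.default_rng(0)
strategy = rng.integers(0, 2, (L, L))   # 0 = defect, 1 = cooperate
Q = np.zeros((L, L, 2))                 # one Q-value per action and site
stim = np.zeros((L, L))                 # memory-smoothed stimulation

def nbrs(i, j):
    # Von Neumann neighborhood on a periodic lattice.
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def payoff(i, j):
    # Sum payoffs over the five overlapping PGG groups agent (i, j) belongs to.
    total = 0.0
    for ci, cj in [(i, j)] + nbrs(i, j):
        group = [(ci, cj)] + nbrs(ci, cj)
        pool = r * sum(strategy[a, b] for a, b in group)
        total += pool / len(group) - strategy[i, j]
    return total

prev_strategy = strategy.copy()
for t in range(200):
    pay = np.array([[payoff(i, j) for j in range(L)] for i in range(L)])
    mean_pay = pay.mean()
    new_strategy = strategy.copy()
    for i in range(L):
        for j in range(L):
            # Local stimulation: fraction of neighbors whose strategy changed.
            local = np.mean([strategy[a, b] != prev_strategy[a, b]
                             for a, b in nbrs(i, j)])
            # Global stimulation: own payoff shortfall vs. the population mean.
            glob = max(0.0, (mean_pay - pay[i, j]) / (abs(mean_pay) + 1e-9))
            stim[i, j] = lam * stim[i, j] + (1 - lam) * (
                w_local * local + (1 - w_local) * glob)
            # Adaptive exploration rate grows with the smoothed stimulation.
            eps = min(1.0, epsilon0 * (1.0 + stim[i, j]))
            act = rng.integers(0, 2) if rng.random() < eps else int(np.argmax(Q[i, j]))
            # Q-update using the current-round payoff as the reward (a simplification).
            Q[i, j, act] += alpha * (pay[i, j] + gamma * Q[i, j].max() - Q[i, j, act])
            new_strategy[i, j] = act
    prev_strategy, strategy = strategy, new_strategy

print("final cooperation level:", strategy.mean())
```

In this toy rule, agents that detect more change in their neighborhood, or that lag behind the population's average payoff, explore more; this mirrors the qualitative finding above that attention to the local environment regulates both the fraction of exploration and the level of cooperation.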
ISSN: 0960-0779
DOI: 10.1016/j.chaos.2024.115705