Towards General Negotiation Strategies with End-to-End Reinforcement Learning
Format: Article
Language: English
Abstract: The research field of automated negotiation has a long history of designing agents that can negotiate with other agents. Such negotiation strategies are traditionally based on manual design and heuristics. More recently, reinforcement learning approaches have also been used to train agents to negotiate. However, negotiation problems are diverse, causing observation and action dimensions to vary between problems, which standard fixed-size linear policy networks cannot handle. Previous work has circumvented this issue either by fixing the negotiation problem, which makes policies non-transferable between negotiation problems, or by abstracting the observations and actions into fixed-size representations, which loses information and expressiveness due to hand-crafted feature design. We developed an end-to-end reinforcement learning method for diverse negotiation problems by representing observations and actions as a graph and applying graph neural networks in the policy. With empirical evaluations, we show that our method is effective and that we can learn to negotiate with other agents on never-before-seen negotiation problems. Our result opens up new opportunities for reinforcement learning in negotiation agents.
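To illustrate the core idea in the abstract — that a graph representation with shared per-node weights lets one policy handle negotiation problems of varying size — here is a minimal sketch. It is not the paper's actual architecture; the bipartite issue/value graph, the single mean-aggregation message-passing step, and all dimensions are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: a negotiation problem is encoded as a bipartite
# graph of issue nodes and value nodes. One round of mean-aggregation
# message passing produces per-value embeddings, and a shared linear
# head maps each embedding to an action logit. Because every weight
# matrix acts on fixed-size node features, the same parameters apply
# to problems with any number of issues or values.

rng = np.random.default_rng(0)
D = 8  # node feature size (assumption)

W_msg = rng.standard_normal((D, D)) * 0.1      # issue message weights
W_upd = rng.standard_normal((2 * D, D)) * 0.1  # value update weights
w_out = rng.standard_normal(D) * 0.1           # shared scoring head

def policy_logits(issue_feats, value_feats, value_to_issue):
    """One message-passing step; returns one logit per candidate value."""
    n_issues = issue_feats.shape[0]
    # Aggregate each issue's value features by mean pooling.
    agg = np.zeros((n_issues, D))
    counts = np.zeros(n_issues)
    for v, i in enumerate(value_to_issue):
        agg[i] += value_feats[v]
        counts[i] += 1
    issue_msg = np.tanh(
        (issue_feats + agg / np.maximum(counts, 1)[:, None]) @ W_msg)
    # Update each value node with its issue's message, then score it.
    upd = np.tanh(np.concatenate(
        [value_feats, issue_msg[value_to_issue]], axis=1) @ W_upd)
    return upd @ w_out

# The same weights score a 2-issue/5-value problem and a
# 3-issue/7-value problem -- no fixed action dimension is required.
logits_a = policy_logits(rng.standard_normal((2, D)),
                         rng.standard_normal((5, D)), [0, 0, 1, 1, 1])
logits_b = policy_logits(rng.standard_normal((3, D)),
                         rng.standard_normal((7, D)), [0, 0, 1, 1, 2, 2, 2])
print(logits_a.shape, logits_b.shape)  # (5,) (7,)
```

In an RL loop, the logits would be softmax-normalized into a distribution over the candidate offers, so the action space grows and shrinks with the graph.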
DOI: 10.48550/arxiv.2406.15096