LEARNING DRIFTING NEGOTIATIONS
Published in: Applied Artificial Intelligence, 2007-10, Vol. 21 (9), pp. 861-881
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: In this work, we propose the use of drift detection techniques for learning offer policies in multi-issue, bilateral negotiation. Several works aiming to develop adaptive trading agents have been proposed. Such agents are capable of learning their competitors' utility values and functions, thereby obtaining better results in negotiation. However, the learning mechanisms generally used disregard possible changes in a competitor's offer/counter-offer policy. In that case, the agent's performance may decrease drastically. The agent then needs to restart the learning process, as the previously learned model is no longer valid. Drift detection techniques can be used to detect changes in the current offer model and update it quickly. In this work, we demonstrate with simulated data that drift detection algorithms can be used to build adaptive trading agents and offer a number of advantages over the techniques most commonly used for this problem. The results obtained with the IB3 (instance-based) algorithm show that the agent's performance can be rapidly recovered even when the changes in the competitor's interests are abrupt, moderate, or gradual.
ISSN: 0883-9514, 1087-6545
DOI: 10.1080/08839510701526954
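
The abstract above outlines a general recipe: maintain an instance-based model of the competitor's offer behaviour, monitor how well that model keeps predicting, and when its error rate drifts upward, discard the stale model and relearn instead of restarting blindly. The sketch below only illustrates that loop under stated assumptions: it pairs a DDM-style error-rate drift detector with a toy nearest-neighbour model standing in for IB3; the class names, thresholds, and 30-observation warm-up are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's method): a DDM-style
# error-rate drift detector wrapped around a toy instance-based model
# of the competitor's accept/reject behaviour.

import math


class ErrorRateDriftDetector:
    """Tracks the running error rate p and its std s; signals drift when
    p + s rises well above the best (minimum) level seen so far."""

    def __init__(self, warn_level=2.0, drift_level=3.0):
        self.warn_level = warn_level
        self.drift_level = drift_level
        self.reset()

    def reset(self):
        self.n = 0
        self.errors = 0
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error_occurred):
        self.n += 1
        self.errors += int(error_occurred)
        p = self.errors / self.n
        s = math.sqrt(p * (1 - p) / self.n)
        if self.n >= 30 and p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = p, s
        if self.n < 30:
            return "stable"                     # warm-up period (assumed length)
        if p + s > self.p_min + self.drift_level * self.s_min:
            return "drift"                      # offer policy has likely changed
        if p + s > self.p_min + self.warn_level * self.s_min:
            return "warning"
        return "stable"


class NearestOfferModel:
    """Toy stand-in for IB3: predicts whether an offer will be accepted
    from the label of the nearest stored offer (squared Euclidean distance)."""

    def __init__(self):
        self.instances = []                     # list of (offer_vector, accepted) pairs

    def predict(self, offer):
        if not self.instances:
            return False
        nearest = min(
            self.instances,
            key=lambda inst: sum((a - b) ** 2 for a, b in zip(inst[0], offer)),
        )
        return nearest[1]

    def learn(self, offer, accepted):
        self.instances.append((offer, accepted))

    def forget(self):
        self.instances.clear()                  # drop the outdated model after drift


detector = ErrorRateDriftDetector()
model = NearestOfferModel()


def observe_round(offer, accepted):
    """Process one negotiation round: check the prediction, track the error,
    store the new instance, and relearn if drift is signalled."""
    prediction = model.predict(offer)
    status = detector.update(prediction != accepted)
    model.learn(offer, accepted)
    if status == "drift":
        model.forget()                          # relearn rather than trust a stale model
        detector.reset()
    return status
```

When the detector reports drift, this sketch simply empties the model and relearns from subsequent rounds; a gentler variant could keep only the instances gathered since the warning phase so that the recovery starts from recent, still-valid observations.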