A Deep Q-Network for the Beer Game: A Deep Reinforcement Learning Algorithm to Solve Inventory Optimization Problems
Main Authors: Afshin Oroojlooyjadid, MohammadReza Nazari, Lawrence V. Snyder, Martin Takáč
Format: Article
Language: English
Online Access: Order full text
Abstract: The beer game is a widely used in-class game that is played in supply chain management classes to demonstrate the bullwhip effect. The game is a decentralized, multi-agent, cooperative problem that can be modeled as a serial supply chain network in which agents cooperatively attempt to minimize the total cost of the network even though each agent can only observe its own local information. Each agent chooses order quantities to replenish its stock. Under some conditions, a base-stock replenishment policy is known to be optimal (a minimal sketch of this policy follows below). However, in a decentralized supply chain in which some agents (stages) may act irrationally (as they do in the beer game), no optimal policy is known for an agent that wishes to act optimally.
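For concreteness, here is a minimal sketch of a base-stock ("order-up-to") policy. The target level `S` and the inventory-position bookkeeping shown here are illustrative assumptions, not values taken from the paper.

```python
def base_stock_order(S: float, inventory_position: float) -> float:
    """Base-stock ("order-up-to") policy: each period, order exactly
    enough to raise the inventory position (on-hand + on-order
    - backorders) back to the target level S. A negative gap means
    no order is placed."""
    return max(S - inventory_position, 0.0)

# Example: target level S = 20, current inventory position = 13,
# so the agent orders 7 units this period.
print(base_stock_order(20.0, 13.0))  # 7.0
```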
We propose a machine learning algorithm, based on deep Q-networks, to optimize the replenishment decisions at a given stage. When playing alongside agents who follow a base-stock policy, our algorithm obtains near-optimal order quantities. It performs much better than a base-stock policy when the other agents use a more realistic model of human ordering behavior. Unlike most other algorithms in the literature, our algorithm does not have any limits on the beer game parameter values. Like any deep learning algorithm, ours can be computationally intensive to train, but the training can be performed ahead of time; the algorithm executes in real time when the game is played. Moreover, we propose a transfer learning approach so that the training performed for one agent and one set of cost coefficients can be adapted quickly for other agents and costs. Our algorithm can be extended to other decentralized multi-agent cooperative games with partially observed information, a common situation in real-world supply chain problems.
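As a rough illustration of how a trained deep Q-network could choose an order quantity from local observations only, the sketch below uses PyTorch. The state encoding, network sizes, and the "order = observed demand + adjustment" action set are simplifying assumptions made for illustration; the paper's actual architecture and training procedure differ.

```python
import numpy as np
import torch
import torch.nn as nn

STATE_DIM = 10   # assumed: a window of recent local observations
N_ACTIONS = 11   # assumed: order = observed demand + x, x in {-5, ..., +5}

class QNet(nn.Module):
    """Small feed-forward Q-network: maps the local observation
    window to one Q-value per candidate order adjustment."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

def choose_order(qnet: QNet, state: np.ndarray,
                 observed_demand: int, eps: float = 0.05) -> int:
    """Epsilon-greedy decision: with probability eps pick a random
    adjustment (exploration); otherwise pick the adjustment with the
    highest predicted Q-value, then order demand + adjustment."""
    if np.random.rand() < eps:
        a = np.random.randint(N_ACTIONS)
    else:
        with torch.no_grad():
            q = qnet(torch.as_tensor(state, dtype=torch.float32))
        a = int(q.argmax().item())
    adjustment = a - N_ACTIONS // 2   # map index 0..10 to -5..+5
    return max(observed_demand + adjustment, 0)

qnet = QNet()  # untrained weights, for illustration only
state = np.zeros(STATE_DIM, dtype=np.float32)
print(choose_order(qnet, state, observed_demand=4))
```

Continuing from the `QNet` above, the following is a hedged sketch of the kind of transfer step the abstract describes: reuse weights trained for one agent and cost setting, then fine-tune only the final layer for another. The freeze-and-retrain scheme here is one plausible realization, not necessarily the paper's exact procedure.

```python
import copy

source_qnet = QNet()   # stand-in for a network trained on setting A
target_qnet = copy.deepcopy(source_qnet)  # start setting B from A's weights

# Freeze the earlier layers and retrain only the final linear layer,
# so adaptation to new agents or cost coefficients is fast.
for p in target_qnet.layers[:-1].parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(target_qnet.layers[-1].parameters(), lr=1e-3)
```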
DOI: 10.48550/arxiv.1708.05924