Combining simulation and reinforcement learning to reduce food waste in food retail
Saved in:
Published in: | Simulation (San Diego, Calif.), 2024-12 |
Main authors: | , , |
Format: | Article |
Language: | eng |
Online access: | Full text |
Summary: | Extraordinary amounts of fresh produce are never purchased and are discarded as waste. Reinforcement learning (RL) could serve as a means to improve business profits while reducing food waste via control of store pricing and ordering decisions. We present a discrete-event-based simulation framework for food retail which simulates wholesaler, store, and customer interactions. This simulator is critical for driving development and testing of future RL methods. It provides an efficient learning feedback system across a wide gamut of possible scenarios, which cannot be replicated from live observations or pure historical data alone. This is crucial, as RL agents cannot learn robust decision-making policies without exposure to many unique scenarios. We evaluate our simulator on a demonstrative case generated from historical consumption and price data, using a provided methodology for synthesizing daily demand from monthly and yearly statistics. In this demonstrative case, we investigate proximal policy optimization, soft actor–critic, and deep Q-networks trained with different reward formulations to decrease food waste and improve profits. These RL methods reduced food waste by 78%–92% on average over an unseen 3-year test period as compared to a baseline mimicking typical food retail waste. Compared to a second popular baseline from the literature, the best-performing RL algorithm improved profits by up to 12.3%. |
ISSN: | 0037-5497, 1741-3133 |
DOI: | 10.1177/00375497241299054 |
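The abstract mentions a methodology for synthesizing daily demand from monthly and yearly statistics. The paper's actual method is not given in this record; the sketch below is a hypothetical illustration of one common approach, allocating a known monthly total across days in proportion to assumed weekday weights (the weights and function name are this sketch's own inventions, not the authors').

```python
import bisect
import itertools
import random


def synthesize_daily_demand(monthly_total, days_in_month=30,
                            weekday_weights=None, seed=None):
    """Split a monthly consumption total into plausible daily demands.

    Hypothetical sketch: each unit of the monthly total is assigned to a
    day with probability proportional to that day's weight, so daily
    demands are noisy but always sum exactly to the monthly total.
    """
    rng = random.Random(seed)
    if weekday_weights is None:
        # Assumed pattern: somewhat higher demand toward the weekend.
        weekday_weights = [1.0, 1.0, 1.0, 1.0, 1.2, 1.4, 1.3]
    weights = [weekday_weights[d % 7] for d in range(days_in_month)]
    cumulative = list(itertools.accumulate(weights))
    daily = [0] * days_in_month
    for _ in range(monthly_total):
        # Sample a day by inverting the cumulative weight distribution.
        day = bisect.bisect_left(cumulative, rng.uniform(0, cumulative[-1]))
        daily[day] += 1
    return daily
```

In practice the weekday weights could be estimated from whatever intra-month signal the historical data provides, or replaced with a richer seasonal model; the unit-by-unit allocation above simply guarantees consistency with the monthly aggregate.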