RL-LLM-DT: An Automatic Decision Tree Generation Method Based on RL Evaluation and LLM Enhancement
| Field | Value |
|---|---|
| Main authors | |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Abstract: Traditionally, AI development for two-player zero-sum games has relied on two primary techniques: decision trees and reinforcement learning (RL). A common approach uses a fixed decision tree as one player's strategy while training an RL agent as the opponent to identify vulnerabilities in the decision tree, thereby iteratively improving its strategic strength. However, this process often requires significant human intervention to refine the decision tree after its weaknesses are identified, resulting in inefficiencies and hindering full automation of the strategy enhancement process. Fortunately, the advent of Large Language Models (LLMs) offers a transformative opportunity to automate the process. We propose RL-LLM-DT, an automatic decision tree generation method based on RL Evaluation and LLM Enhancement. Given an initial decision tree, the method iterates over two steps. Response Policy Search: RL is used to discover counter-strategies targeting the decision tree. Policy Improvement: an LLM analyzes the failure scenarios and generates improved decision tree code. In our method, RL focuses on finding the decision tree's flaws, while the LLM is prompted to generate an improved version of the tree. The iterative refinement process terminates when RL can no longer find a flaw in the tree or the LLM fails to improve it. To evaluate the effectiveness of this integrated approach, we conducted experiments in a curling game. After iterative refinement, our decision-tree-based curling AI ranks first among all 34 curling AIs on the Jidi platform, demonstrating that LLMs can significantly enhance the robustness and adaptability of decision trees and representing a substantial advancement in the field of Game AI. Our code is available at https://github.com/Linjunjie99/RL-LLM-DT.
DOI: 10.48550/arxiv.2412.11417
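
The abstract describes an iterative loop that alternates RL-based flaw finding with LLM-based tree rewriting until neither step makes progress. Below is a minimal Python sketch of that control flow under stated assumptions: the callables `find_flaws` (standing in for the Response Policy Search step) and `improve_tree` (standing in for the Policy Improvement step) are hypothetical placeholders, not the repository's actual API.

```python
from typing import Callable, List, Optional

def rl_llm_dt(
    tree_code: str,
    find_flaws: Callable[[str], List[str]],
    improve_tree: Callable[[str, List[str]], Optional[str]],
    max_iters: int = 10,
) -> str:
    """Iteratively harden a decision-tree policy, per the abstract:
    alternate RL-based flaw discovery with LLM-based tree rewriting."""
    for _ in range(max_iters):
        # Response Policy Search (hypothetical interface): RL trains a
        # counter-strategy and returns the failure scenarios it exposed.
        failures = find_flaws(tree_code)
        if not failures:
            break  # RL can no longer find a flaw in the tree: stop.

        # Policy Improvement (hypothetical interface): the LLM analyzes
        # the failures and returns improved decision-tree code, or None.
        improved = improve_tree(tree_code, failures)
        if improved is None:
            break  # The LLM fails to improve the tree: stop.
        tree_code = improved
    return tree_code
```

The higher-order structure keeps the sketch self-contained: a caller would supply its own RL evaluation and LLM prompting routines as the two callables, matching the paper's separation of flaw finding from tree improvement.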