Learning Potential in Subgoal-based Reward Shaping
Saved in:
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main authors: 
Format: Article
Language: English
Subjects: 
Online access: Full text
Abstract: Human knowledge can reduce the number of iterations required for learning in reinforcement learning. Although the most common approach uses trajectories, these are difficult to acquire in certain domains. Subgoals, which are intermediate states, have been studied as an alternative to trajectories. Subgoal-based reward shaping is a framework, built on potential-based reward shaping, that shapes rewards with a sequence of subgoals. The potential function, a component of potential-based reward shaping, has a hyperparameter that controls its output. However, selecting this hyperparameter is not easy because its appropriate value depends on the environment's reward function, which is unknown. We propose the learned potential, which parameterizes this hyperparameter and acquires the potential through learning. A value is the expected accumulated reward obtained by following the policy from the current state onward, and it is therefore strongly related to the reward function. With the learned potential, we build an abstract state space from a sequence of subgoals and use the values over the abstract states as the potential to accelerate value learning. The values over the abstract states are learned with n-step TD learning. We conducted experiments to evaluate the effectiveness of the learned potential, and the results indicate that it outperforms a baseline reinforcement learning algorithm and several reward-shaping algorithms. The results also indicate that participants' subgoals are superior to randomly generated subgoals when used with the learned potential. We discuss the appropriate number of subgoals for the learned potential, show that partially ordered subgoals are helpful, find that the learned potential does not make learning more efficient under step-penalized rewards, and show that it is superior to a non-learned potential under mixed positive and negative rewards.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3246267
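
The abstract describes potential-based reward shaping in which the potential is a value function over abstract subgoal states, learned with n-step TD. The sketch below illustrates that idea under stated assumptions; the class names, signatures, and hyperparameters (SubgoalAbstraction, LearnedPotential, n_step, alpha) are illustrative and not taken from the paper's implementation.

```python
from collections import deque

# Minimal sketch (not the paper's code): potential-based reward shaping where
# the potential is the value of an abstract "subgoal progress" state, learned
# online with n-step TD from the environment's own rewards.

class SubgoalAbstraction:
    """Maps a concrete state to an abstract state: the number of subgoals
    in a fixed sequence that have been reached so far."""

    def __init__(self, subgoals):
        self.subgoals = subgoals      # ordered list of subgoal states
        self.next_idx = 0             # index of the next subgoal to reach

    def reset(self):
        self.next_idx = 0

    def abstract(self, state):
        # Advance when the next subgoal in the sequence is reached.
        if self.next_idx < len(self.subgoals) and state == self.subgoals[self.next_idx]:
            self.next_idx += 1
        return self.next_idx          # abstract state in {0, ..., len(subgoals)}


class LearnedPotential:
    """Potential Phi(s) = V(z), where z is the abstract state of s and V is
    learned with n-step TD."""

    def __init__(self, n_abstract_states, gamma=0.99, alpha=0.1, n_step=5):
        self.v = [0.0] * n_abstract_states
        self.gamma, self.alpha, self.n = gamma, alpha, n_step
        self.buffer = deque()         # recent (abstract_state, reward) pairs

    def potential(self, z):
        return self.v[z]

    def update(self, z, reward, z_next, done):
        self.buffer.append((z, reward))
        if len(self.buffer) == self.n:
            self._apply(z_next, bootstrap=True)
        if done:
            while self.buffer:        # flush shorter returns at episode end
                self._apply(z_next, bootstrap=False)

    def _apply(self, z_next, bootstrap):
        # n-step return: r_t + gamma*r_{t+1} + ... + gamma^n * V(z_{t+n})
        g = self.v[z_next] if bootstrap else 0.0
        for _, r in reversed(self.buffer):
            g = r + self.gamma * g
        z0, _ = self.buffer.popleft()
        self.v[z0] += self.alpha * (g - self.v[z0])


def shaped_reward(r, phi_s, phi_s_next, gamma):
    # Potential-based shaping (Ng et al., 1999): F = gamma*Phi(s') - Phi(s)
    return r + gamma * phi_s_next - phi_s
```

In a training loop, one would keep the previous abstract state z, compute z_next = abstraction.abstract(next_state) after each transition, call potential.update(z, r, z_next, done), and pass shaped_reward(r, potential.potential(z), potential.potential(z_next), gamma) to the underlying learner. Because the shaping term is a difference of potentials, the optimal policy of the original task is preserved.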