Real-time Active Vision for a Humanoid Soccer Robot Using Deep Reinforcement Learning
Format: | Article |
Language: | English |
Abstract: | In this paper, we present an active vision method using a deep
reinforcement learning approach for a humanoid soccer-playing robot. The
proposed method adaptively optimises the robot's viewpoint to acquire the most
useful landmarks for self-localisation while keeping the ball in its field of
view. Active vision is critical for decision-making humanoid robots with a
limited field of view. To deal with the active vision problem, several
probabilistic entropy-based approaches have previously been proposed, which
depend heavily on the accuracy of the self-localisation model. In this
research, however, we formulate the problem as an episodic reinforcement
learning problem and employ a Deep Q-learning method to solve it. The proposed
network requires only the raw camera images to move the robot's head toward
the best viewpoint. The model achieves a competitive success rate of 80% in
reaching the best viewpoint. We implemented the proposed method on a humanoid
robot simulated in the Webots simulator. Our evaluations and experimental
results show that the proposed method outperforms the entropy-based methods in
the RoboCup context in cases with high self-localisation errors. |
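The abstract describes an episodic Q-learning formulation in which head motions are the actions and the reward favours viewpoints that capture a useful landmark while the ball stays in view. As a rough illustration of that formulation only, the sketch below trains a tabular Q-table over discretised head-pan angles on a toy one-dimensional environment; the paper's actual method uses a deep network over raw camera images, and all environment details here (pan grid, ball and landmark positions, reward shape) are invented for illustration.

```python
import numpy as np

# Toy stand-in for the paper's Deep Q-learning head controller.
# The real method maps raw camera images to head motions with a CNN;
# here the "state" is just a discretised pan angle, and the reward
# (an assumption, not the paper's) favours pointing at a landmark
# while keeping the ball inside the field of view.

N_PAN = 11             # discretised head-pan positions (hypothetical)
ACTIONS = [-1, 0, +1]  # pan left, hold, pan right
BALL, LANDMARK = 3, 8  # hypothetical positions on the pan axis

def reward(pan):
    # best viewpoint: on the landmark, with the ball still in view
    in_view = abs(pan - BALL) <= 5
    return (1.0 if pan == LANDMARK else -0.1) if in_view else -1.0

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((N_PAN, len(ACTIONS)))
    for _ in range(episodes):
        pan = int(rng.integers(N_PAN))           # random initial viewpoint
        for _ in range(30):                      # episodic: fixed horizon
            # epsilon-greedy action selection
            a = int(rng.integers(3)) if rng.random() < eps else int(q[pan].argmax())
            nxt = int(np.clip(pan + ACTIONS[a], 0, N_PAN - 1))
            r = reward(nxt)
            # standard Q-learning temporal-difference update
            q[pan, a] += alpha * (r + gamma * q[nxt].max() - q[pan, a])
            pan = nxt
    return q

q = train()

# Greedy rollout: from pan=4 the learned policy should settle on the landmark.
pan = 4
for _ in range(20):
    pan = int(np.clip(pan + ACTIONS[int(q[pan].argmax())], 0, N_PAN - 1))
print(pan)
```

The tabular update is the same Bellman-backup rule that Deep Q-learning approximates with a network; swapping `q` for a CNN over images (plus replay and a target network) recovers the family of methods the paper builds on.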
DOI: | 10.48550/arxiv.2011.13851 |