Hierarchical Reinforcement Learning for Multi-agent MOBA Game
Real Time Strategy (RTS) games require macro strategies as well as micro strategies to obtain satisfactory performance, since they have large state spaces, large action spaces, and hidden information. This paper presents a novel hierarchical reinforcement learning model for mastering Multiplayer Online Battle Arena (MOBA) games, a sub-genre of RTS games. The novelties of this work are: (1) proposing a hierarchical framework, where agents execute macro strategies by imitation learning and carry out micromanipulations through reinforcement learning, (2) developing a simple self-learning method to improve sample efficiency during training, and (3) designing a dense reward function for multi-agent cooperation in the absence of a game engine or Application Programming Interface (API). Finally, various experiments have been performed to validate the superior performance of the proposed method over other state-of-the-art reinforcement learning algorithms. The agent successfully learns to combat and defeat the bronze-level built-in AI with a 100% win rate, and experiments show that the method can create a competitive multi-agent team for the mobile MOBA game King of Glory in 5v5 mode.
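The hierarchical split described in the abstract (an imitation-learned macro policy that chooses high-level goals, and a reinforcement-learned micro policy that executes low-level actions conditioned on the chosen goal) can be sketched roughly as follows. This is an illustrative sketch only: the class names, network sizes, and the goal and action encodings are assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MacroPolicy(nn.Module):
    """High-level policy: in the paper's setup this would be fit by imitation
    learning to pick a macro goal (e.g. push a lane, defend) from the game state."""
    def __init__(self, state_dim, num_goals):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, num_goals),
        )

    def forward(self, state):
        return self.net(state)  # logits over macro goals

class MicroPolicy(nn.Module):
    """Low-level policy: trained by reinforcement learning to output per-step
    actions, conditioned on the current macro goal."""
    def __init__(self, state_dim, num_goals, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + num_goals, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state, goal_onehot):
        return self.net(torch.cat([state, goal_onehot], dim=-1))  # logits over actions

def act(macro, micro, state, num_goals):
    """One decision step: the macro policy picks a goal, the micro policy
    picks an action given that goal."""
    with torch.no_grad():
        goal = macro(state).argmax(dim=-1)
        goal_onehot = F.one_hot(goal, num_goals).float()
        action = micro(state, goal_onehot).argmax(dim=-1)
    return goal, action
```

Training is omitted here: per the abstract, the macro policy would be learned from demonstrations and the micro policy with a reinforcement learning algorithm (for example a policy-gradient method) against the dense reward, but the specific algorithms and reward terms are not given in this record.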
Saved in:
Published in: | arXiv.org 2019-06 |
---|---|
Main authors: | Zhang, Zhijian; Li, Haozheng; Zhang, Luo; Zheng, Tianyin; Zhang, Ting; Xiong Hao; Chen, Xiaoxin; Chen, Min; Xiao, Fangxu; Zhou, Wei |
Format: | Article |
Language: | eng |
Subjects: | Computer & video games; Feature extraction; Game theory; Machine learning; Mastering; Multiagent systems; Target detection |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Zhang, Zhijian; Li, Haozheng; Zhang, Luo; Zheng, Tianyin; Zhang, Ting; Xiong Hao; Chen, Xiaoxin; Chen, Min; Xiao, Fangxu; Zhou, Wei |
description | Real Time Strategy (RTS) games require macro strategies as well as micro strategies to obtain satisfactory performance, since they have large state spaces, large action spaces, and hidden information. This paper presents a novel hierarchical reinforcement learning model for mastering Multiplayer Online Battle Arena (MOBA) games, a sub-genre of RTS games. The novelties of this work are: (1) proposing a hierarchical framework, where agents execute macro strategies by imitation learning and carry out micromanipulations through reinforcement learning, (2) developing a simple self-learning method to improve sample efficiency during training, and (3) designing a dense reward function for multi-agent cooperation in the absence of a game engine or Application Programming Interface (API). Finally, various experiments have been performed to validate the superior performance of the proposed method over other state-of-the-art reinforcement learning algorithms. The agent successfully learns to combat and defeat the bronze-level built-in AI with a 100% win rate, and experiments show that the method can create a competitive multi-agent team for the mobile MOBA game King of Glory in 5v5 mode. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2019-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2170773274 |
source | Free E-Journals |
subjects | Computer & video games; Feature extraction; Game theory; Machine learning; Mastering; Multiagent systems; Target detection |
title | Hierarchical Reinforcement Learning for Multi-agent MOBA Game |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T18%3A41%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Hierarchical%20Reinforcement%20Learning%20for%20Multi-agent%20MOBA%20Game&rft.jtitle=arXiv.org&rft.au=Zhang,%20Zhijian&rft.date=2019-06-21&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2170773274%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2170773274&rft_id=info:pmid/&rfr_iscdi=true |