Action Branching Architectures for Deep Reinforcement Learning

Published in: AAAI 32: 4131-4138 (2018)

Bibliographic Details
Main Authors: Tavakoli, Arash; Pardo, Fabio; Kormushev, Petar
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
creator Tavakoli, Arash
Pardo, Fabio
Kormushev, Petar
description AAAI 32: 4131-4138 (2018) Discrete-action algorithms have been central to numerous recent successes of deep reinforcement learning. However, applying these algorithms to high-dimensional action tasks requires tackling the combinatorial increase of the number of possible actions with the number of action dimensions. This problem is further exacerbated for continuous-action tasks that require fine control of actions via discretization. In this paper, we propose a novel neural architecture featuring a shared decision module followed by several network branches, one for each action dimension. This approach achieves a linear increase of the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension. To illustrate the approach, we present a novel agent, called Branching Dueling Q-Network (BDQ), as a branching variant of the Dueling Double Deep Q-Network (Dueling DDQN). We evaluate the performance of our agent on a set of challenging continuous control tasks. The empirical results show that the proposed agent scales gracefully to environments with increasing action dimensionality and indicate the significance of the shared decision module in coordination of the distributed action branches. Furthermore, we show that the proposed agent performs competitively against a state-of-the-art continuous control algorithm, Deep Deterministic Policy Gradient (DDPG).
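The abstract above describes the branching architecture only in prose; the following is a minimal sketch of such a network, assuming PyTorch and illustrative layer sizes: a shared decision module feeds one advantage head per action dimension plus a single state-value head, so the number of network outputs grows linearly with the degrees of freedom. The class and parameter names (BranchingDuelingQNetwork, bins_per_branch, the hidden width) and the per-branch dueling aggregation shown here are assumptions for illustration, not the authors' reference implementation.

import torch
import torch.nn as nn

class BranchingDuelingQNetwork(nn.Module):
    def __init__(self, state_dim, num_branches, bins_per_branch, hidden=128):
        super().__init__()
        # Shared decision module: a common latent representation of the state.
        self.shared = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Single state-value head shared by all branches.
        self.value = nn.Linear(hidden, 1)
        # One advantage head per action dimension; the output count grows
        # linearly with the number of branches (degrees of freedom).
        self.advantages = nn.ModuleList(
            [nn.Linear(hidden, bins_per_branch) for _ in range(num_branches)]
        )

    def forward(self, state):
        h = self.shared(state)
        v = self.value(h)                      # (batch, 1)
        q_branches = []
        for head in self.advantages:
            a = head(h)                        # (batch, bins_per_branch)
            # Dueling-style aggregation applied independently per branch:
            # Q_d(s, a_d) = V(s) + A_d(s, a_d) - mean over a' of A_d(s, a')
            q_branches.append(v + a - a.mean(dim=1, keepdim=True))
        return q_branches

# Greedy action selection: an independent argmax per branch yields one
# discretized sub-action for each action dimension.
net = BranchingDuelingQNetwork(state_dim=17, num_branches=6, bins_per_branch=11)
q_values = net(torch.randn(4, 17))
action = torch.stack([q.argmax(dim=1) for q in q_values], dim=1)  # shape (4, 6)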
doi_str_mv 10.48550/arxiv.1711.08946
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1711.08946
language eng
recordid cdi_arxiv_primary_1711_08946
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
title Action Branching Architectures for Deep Reinforcement Learning
url https://arxiv.org/abs/1711.08946