A Comparison of Action Spaces for Learning Manipulation Tasks

Designing reinforcement learning (RL) problems that can produce delicate and precise manipulation policies requires careful choice of the reward function, state, and action spaces. Much prior work on applying RL to manipulation tasks has defined the action space in terms of direct joint torques or reference positions for a joint-space proportional derivative (PD) controller. In practice, it is often possible to add structure by taking advantage of model-based controllers that support both accurate positioning and control of the dynamic response of the manipulator. In this paper, we evaluate how the choice of action space for dynamic manipulation tasks affects the sample complexity as well as the final quality of learned policies. We compare learning performance across three tasks (peg insertion, hammering, and pushing) and four action spaces (torque, joint PD, inverse dynamics, and impedance control), using two modern reinforcement learning algorithms (Proximal Policy Optimization and Soft Actor-Critic). Our results lend support to the hypothesis that learning references for a task-space impedance controller significantly reduces the number of samples needed to achieve good performance across all tasks and algorithms.
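The four action spaces named in the abstract differ only in how the policy's action is converted into joint torques at each control step. The sketch below illustrates those mappings in generic form; it is a minimal illustration and not the paper's implementation: the gain values and the robot-model callables (mass_matrix, bias, jacobian, fk) are assumptions standing in for whatever simulator and controller code the experiments actually used.

import numpy as np

def torque_action(a):
    # Direct torque control: the policy output is the joint torque itself.
    return a

def joint_pd_action(a, q, qd, kp=50.0, kd=5.0):
    # Joint-space PD: the action is a reference joint position q_ref.
    return kp * (a - q) - kd * qd

def inverse_dynamics_action(a, q, qd, mass_matrix, bias, kp=50.0, kd=5.0):
    # Inverse dynamics: a joint-space PD law gives a desired acceleration,
    # which is mapped through the model: tau = M(q) * qdd_des + h(q, qd).
    qdd_des = kp * (a - q) - kd * qd
    return mass_matrix(q) @ qdd_des + bias(q, qd)

def task_space_impedance_action(a, q, qd, jacobian, fk, bias, kp=200.0, kd=20.0):
    # Task-space impedance: the action is a reference end-effector position
    # x_ref; a spring-damper in task space produces a force that is mapped to
    # joint torques through the Jacobian transpose, plus bias compensation.
    J = jacobian(q)
    x, xd = fk(q), J @ qd
    f = kp * (a - x) - kd * xd
    return J.T @ f + bias(q, qd)

# Example usage (hypothetical 2-DoF arm, joint-PD action space):
q = np.array([0.1, -0.2]); qd = np.zeros(2)
a = np.zeros(2)                      # reference joint position from the policy
tau = joint_pd_action(a, q, qd)

The practical difference is that the PD, inverse-dynamics, and impedance variants let the learned policy command references at a low rate while a stabilizing feedback law runs underneath, which is the structure the paper credits for the reduced sample complexity of the task-space impedance controller.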

Bibliographic Details
Main Authors: Varin, Patrick; Grossman, Lev; Kuindersma, Scott
Format: Article
Published: 2019-08-23
Language: English
Subjects: Computer Science - Learning; Computer Science - Robotics
DOI: 10.48550/arxiv.1908.08659
Source: arXiv.org
Online Access: https://arxiv.org/abs/1908.08659