A guidance method for coplanar orbital interception based on reinforcement learning
This paper investigates a guidance method based on reinforcement learning (RL) for coplanar orbital interception in a continuous low-thrust scenario. The problem is formulated as a Markov decision process (MDP) model, and a well-designed RL algorithm, experience-based deep deterministic policy gradient (EBDDPG), is proposed to solve it. By taking advantage of prior information generated through the optimal control model, the proposed algorithm not only resolves the convergence problem of common RL algorithms, but also successfully trains an efficient deep neural network (DNN) controller for the chaser spacecraft to generate the control sequence. Numerical simulation results show that the proposed algorithm is feasible and that the trained DNN controller improves efficiency over traditional optimization methods by roughly two orders of magnitude.
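The abstract gives no implementation detail, but the central idea it describes, a DDPG-style actor-critic whose replay experience is seeded with transitions derived from an optimal control solution, can be sketched in a few lines. The sketch below is illustrative only and is not the authors' EBDDPG code: the use of PyTorch, the state/action dimensions, the network sizes, the hyperparameters, and the `seed_buffer`/`prior_transitions` names are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's code) of an "experience-based" DDPG loop:
# a standard actor-critic update whose replay buffer is warm-started with
# transitions taken from an optimal-control trajectory before training begins.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 4, 1            # assumed planar relative state and thrust command
GAMMA, TAU, BATCH = 0.99, 0.005, 64     # assumed hyperparameters

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

# Actor maps state -> bounded action; critic maps (state, action) -> Q-value.
actor, critic = mlp(STATE_DIM, ACTION_DIM, nn.Tanh()), mlp(STATE_DIM + ACTION_DIM, 1)
actor_t, critic_t = mlp(STATE_DIM, ACTION_DIM, nn.Tanh()), mlp(STATE_DIM + ACTION_DIM, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

buffer = deque(maxlen=100_000)

def seed_buffer(prior_transitions):
    """Warm-start the replay buffer with (s, a, r, s_next, done) tuples obtained
    from an optimal-control solution; ordinary RL rollouts are appended later."""
    buffer.extend(prior_transitions)

def ddpg_update():
    """One DDPG gradient step on a random minibatch from the (seeded) buffer."""
    if len(buffer) < BATCH:
        return
    batch = random.sample(buffer, BATCH)
    s, a, r, s2, done = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
    r, done = r.unsqueeze(1), done.unsqueeze(1)

    # Critic: regress Q(s, a) toward the bootstrapped target from the target nets.
    with torch.no_grad():
        q_next = critic_t(torch.cat([s2, actor_t(s2)], dim=1))
        q_target = r + GAMMA * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-average the target networks.
    for net, tgt in ((actor, actor_t), (critic, critic_t)):
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1.0 - TAU).add_(TAU * p.data)

# Placeholder warm-start: in the paper's setting these tuples would come from
# the optimal control model of the interception problem, not from zeros.
demo = [([0.0] * STATE_DIM, [0.0] * ACTION_DIM, 0.0, [0.0] * STATE_DIM, 0.0)
        for _ in range(BATCH)]
seed_buffer(demo)
ddpg_update()
```

Seeding the buffer this way gives the critic informative targets from the very first update, which is one plausible reading of how the prior information from the optimal control model "resolves the convergence problem" mentioned in the abstract.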
Saved in:
Published in: | Journal of systems engineering and electronics, 2021-08, Vol.32 (4), p.927-938 |
---|---|
Main authors: | Xin, Zeng; Yanwei, Zhu; Leping, Yang; Chengming, Zhang |
Format: | Article |
Language: | English |
Online access: | Full text |
DOI: | 10.23919/JSEE.2021.000079 |
ISSN: | 1004-4132 |