Investigation of Drive-Reinforcement Learning and Application of Learning to Flight Control
This report describes results obtained during a multiphase research program having the broad aim of investigating the application of learning systems to automatic control in general, and to flight control in particular. The first phase analyzed the drive-reinforcement learning paradigm and examined...
Saved in:
Main Authors: | Baker, Walter L; Atkins, Stephen C; Baird, Leemon C, III; Koenig, Mark A; Millington, Peter J |
---|---|
Format: | Report |
Language: | eng |
Online Access: | Order full text |
creator | Baker, Walter L; Atkins, Stephen C; Baird, Leemon C, III; Koenig, Mark A; Millington, Peter J |
description | This report describes results obtained during a multiphase research program having the broad aim of investigating the application of learning systems to automatic control in general, and to flight control in particular. The first phase analyzed the drive-reinforcement learning paradigm and examined its application to automatic control, with mixed results. The second phase compared a number of alternative strategies for learning-augmented control, and resulted in the conception of a new hybrid adaptive/learning control scheme. Subsequently, in the third phase, this hybrid control approach was more fully developed and applied to several nonlinear dynamical systems, including a cart-pole system, an aeroelastic oscillator, and a three-degree-of-freedom aircraft. The fourth phase revisited drive-reinforcement learning from the point of view of optimal control and successfully applied a version embedded in the associative control process architecture to regulate an aeroelastic oscillator. The fifth phase examined the problem of learning-augmented estimation, and resulted in the development of a preliminary estimation scheme consistent with the hybrid control approach. In the sixth and final phase, the hybrid control methodology was applied to a nonlinear, six-degree-of-freedom flight control problem, and then demonstrated via a challenging multiaxis maneuver. Keywords: ACP Network, Drive-reinforcement, Reinforcement learning, Adaptive control, Hybrid control, Nonlinear control, Aircraft flight control, Learning control, Optimal control. (An illustrative sketch of the drive-reinforcement update rule named here appears after the record fields below.) |
format | Report |
language | eng |
recordid | cdi_dtic_stinet_ADA277442 |
source | DTIC Technical Reports |
subjects | ADAPTIVE CONTROL SYSTEMS; AIRCRAFT; APPROACH; ARCHITECTURE; AUTOMATIC; CONTROL; Cybernetics; DRIVES; FLIGHT; Flight Control and Instrumentation; FLIGHT CONTROL SYSTEMS; LEARNING; MANEUVERS; METHODOLOGY; NETWORKS; OPTIMIZATION; OSCILLATORS; PE62204F; PHASE; STRATEGY; WUWL20030546 |
title | Investigation of Drive-Reinforcement Learning and Application of Learning to Flight Control |
url | https://apps.dtic.mil/sti/citations/ADA277442 |
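The abstract refers to the drive-reinforcement learning paradigm (Klopf's neuronal model), in which a weight changes in proportion to the product of recent changes in its input (the "drive") and the change in the unit's output (the "reinforcement"). The report itself is not reproduced in this record, so the sketch below is only a minimal illustration of that commonly cited update rule; the function name, array shapes, learning-rate constants, and the use of the current |w| in place of the lagged |w(t-j)| are assumptions for illustration, not details taken from the report.

```python
import numpy as np

def drive_reinforcement_update(w, x_hist, y_hist, c, t):
    """Illustrative Klopf-style drive-reinforcement weight update (not from the report).

    w      : weight vector at time t, shape (n,)
    x_hist : input history, shape (T, n); row k is the input vector x(k)
    y_hist : output history, shape (T,); entry k is the unit output y(k)
    c      : positive learning-rate constants c_1..c_tau, one per time lag
    t      : current time index, with t > len(c) so all lags are available
    """
    tau = len(c)
    dy = y_hist[t] - y_hist[t - 1]              # change in output: the "reinforcement"
    dw = np.zeros_like(w)
    for j in range(1, tau + 1):
        dx = x_hist[t - j] - x_hist[t - j - 1]  # change in input j steps back: the "drive"
        dx = np.maximum(dx, 0.0)                # only positive input changes contribute
        # Weights change in proportion to their own magnitude; using the current |w|
        # here is a simplification of the lagged |w(t - j)| in Klopf's formulation.
        dw += c[j - 1] * np.abs(w) * dx
    return w + dy * dw

# Hypothetical usage with random histories, purely to show the call signature.
rng = np.random.default_rng(0)
T, n = 50, 4
x_hist = rng.random((T, n))
y_hist = rng.random(T)
w = rng.normal(size=n)
c = [0.5, 0.25, 0.125]                          # assumed learning-rate constants
w_next = drive_reinforcement_update(w, x_hist, y_hist, c, t=10)
print(w_next)
```

The actual networks, the associative control process (ACP) architecture, and the hybrid adaptive/learning controller developed in the report are considerably more involved; this sketch only fixes the notation for the basic update rule named in the abstract.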