One-shot Input-Output History Feedback Controller Design for Unknown Linear Systems: Reinforcement Learning Approach
| Published in: | Shisutemu Seigyo Jouhou Gakkai rombunshi (Transactions of the Institute of Systems, Control and Information Engineers), 2021/09/15, Vol. 34(9), pp. 235-242 |
|---|---|
| Main authors: | , |
| Format: | Article |
| Language: | Japanese |
| Keywords: | |
| Online access: | Full text |
| Abstract: | In this paper, we propose a method of designing input-output history feedback controllers for unknown linear discrete-time systems. Many conventional reinforcement-learning-based controllers, such as those obtained by policy iteration, are state-feedback controllers. We extend policy iteration by incorporating a method that statically estimates the state variables from a history of finite-time input-output data. The convergence of the policy to the model-based optimal solution is theoretically guaranteed. Moreover, the proposed method is one-shot learning, i.e., the optimal controller can be obtained from the initial experiment data alone. The effectiveness of the proposed method is demonstrated through a numerical simulation of an oscillator network. (An illustrative sketch of this idea follows the record below.) |
| ISSN: | 1342-5668, 2185-811X |
| DOI: | 10.5687/iscie.34.235 |
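
The abstract describes an off-policy, one-shot variant of policy iteration in which the unmeasured state is replaced by a finite history of past inputs and outputs. The following Python sketch illustrates that general idea, not the authors' exact algorithm: a single exploratory experiment is recorded, a history vector z_k of the last N outputs and inputs stands in for the state, and a quadratic Q-function is fitted to the recorded data by least squares so that policy evaluation and improvement reuse the same data set. The plant matrices, the history length N = 2, the cost weights, and the helper names (`history`, `quad_features`) are all illustrative assumptions.

```python
# Hedged sketch of one-shot policy iteration with an input-output history
# vector, NOT the paper's exact algorithm.  Plant, history length, and cost
# weights below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# "Unknown" linear discrete-time plant, used only to generate experiment data.
A = np.array([[0.9, 0.2], [-0.1, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, m, p = 2, 1, 1              # state, input, output dimensions

N = 2                          # history length (at least the observability index)
q_dim = N * (p + m)            # dimension of the history vector z_k

# --- one initial experiment with an exploratory input ----------------------
T = 400
x = np.zeros((n, 1))
us, ys = [], []
for k in range(T):
    u = rng.normal(size=(m, 1))
    ys.append(C @ x)
    us.append(u)
    x = A @ x + B @ u

def history(k):
    """Stack the last N outputs and inputs into the vector z_k."""
    return np.vstack(ys[k - N:k] + us[k - N:k])

# Assemble (z_k, u_k, z_{k+1}, stage cost) tuples from the single data set.
Qy, Ru = 1.0, 0.1
data = []
for k in range(N, T - 1):
    z, u, zp = history(k), us[k], history(k + 1)
    cost = Qy * float(ys[k].T @ ys[k]) + Ru * float(u.T @ u)
    data.append((z, u, zp, cost))

def quad_features(v):
    """Regressor of the quadratic form v^T H v, one entry per element of H."""
    return np.outer(v, v).flatten()

# --- policy iteration on Q(z, u) = [z; u]^T H [z; u] ------------------------
K = np.zeros((m, q_dim))       # initial policy u = -K z (plant is open-loop stable)
for it in range(30):
    Phi, b = [], []
    for z, u, zp, cost in data:
        zbar  = np.vstack([z, u])            # current history and applied input
        zbarp = np.vstack([zp, -K @ zp])     # next history and current policy's action
        # Bellman residual regressor: Q(z_k, u_k) - Q(z_{k+1}, -K z_{k+1}) = cost_k
        Phi.append(quad_features(zbar) - quad_features(zbarp))
        b.append(cost)
    h = np.linalg.lstsq(np.array(Phi), np.array(b), rcond=None)[0]
    H = h.reshape(q_dim + m, q_dim + m)
    H = 0.5 * (H + H.T)                      # enforce symmetry
    K_new = np.linalg.solve(H[q_dim:, q_dim:], H[q_dim:, :q_dim])   # improvement step
    if np.linalg.norm(K_new - K) < 1e-8:
        break
    K = K_new

print("learned input-output history feedback gain K =\n", K)
```

Because every iteration re-solves a least-squares problem over the same recorded tuples, no further interaction with the plant is needed after the initial experiment, which is the sense in which such a scheme is "one-shot".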