Rapidly Evolving Soft Robots via Action Inheritance


Bibliographic Details
Published in: IEEE Transactions on Evolutionary Computation, 2024-12, Vol. 28 (6), p. 1674-1688
Authors: Liu, Shulei; Yao, Wen; Wang, Handing; Peng, Wei; Yang, Yang
Format: Article
Language: English
Abstract: The automatic design of soft robots is characterized as the joint optimization of structure and control. As reinforcement learning is increasingly used to optimize control, time-consuming controller training makes soft robot design an expensive optimization problem. Although surrogate-assisted evolutionary algorithms (EAs) have achieved remarkable success on expensive optimization problems, they typically struggle to construct accurate surrogate models because of the complex mapping among structure, control, and task performance. We therefore propose an action inheritance (Act_Inh)-based EA to accelerate the design process. Instead of training a controller, the proposed algorithm uses inherited actions to control a candidate design to complete a task and obtain its approximated performance. Inherited actions are near-optimal control policies that are partially or entirely inherited from the optimized control actions of a real-evaluated robot design. Act_Inh plays the role of a surrogate model whose input is the structure and whose output is the near-optimal control actions. We also propose a random perturbation operation to estimate the error introduced by the inherited control actions. The effectiveness of the proposed method is validated on a wide range of tasks, including locomotion and manipulation. Experimental results show that our algorithm outperforms three other state-of-the-art algorithms on most tasks when only a limited computational budget is available. Compared with the same algorithm without surrogate models, our algorithm saves about half the computing cost.
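The core idea in the abstract — approximating a candidate design's performance by inheriting and slightly perturbing the optimized actions of the most similar real-evaluated design — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the archive contents, the Hamming-distance donor selection, the perturbation magnitude, and the toy `rollout` performance model below are all assumptions for demonstration.

```python
import random

def hamming(a, b):
    """Number of differing voxels between two robot designs (illustrative metric)."""
    return sum(x != y for x, y in zip(a, b))

def rollout(design, actions):
    """Toy stand-in for the physics simulator: only active voxels (1s)
    convert actuation into task performance (higher is better)."""
    return sum(d * a for d, a in zip(design, actions))

def inherit_actions(candidate, archive, perturb_frac=0.2, rng=None):
    """Return near-optimal actions for `candidate`, inherited from the most
    similar real-evaluated design in `archive` (design -> optimized actions)."""
    rng = rng or random.Random(0)
    donor = min(archive, key=lambda d: hamming(d, candidate))
    actions = list(archive[donor])
    # Random perturbation: re-sample a fraction of the inherited actions to
    # probe how sensitive the approximated performance is to inheritance error.
    for i in rng.sample(range(len(actions)), max(1, int(perturb_frac * len(actions)))):
        actions[i] += rng.uniform(-0.1, 0.1)
    return donor, actions

def approximate_fitness(candidate, archive, rng=None):
    """Cheap surrogate evaluation: no controller training, just a rollout
    of the candidate under inherited (perturbed) actions."""
    _, actions = inherit_actions(candidate, archive, rng=rng)
    return rollout(candidate, actions)

# Hypothetical archive of real-evaluated designs and their optimized actions.
archive = {
    (1, 1, 0, 0, 1): [0.9, 0.8, 0.0, 0.0, 0.7],
    (0, 0, 1, 1, 1): [0.0, 0.0, 0.6, 0.9, 0.8],
}
candidate = (1, 1, 1, 0, 1)  # one voxel away from the first archived design
donor, actions = inherit_actions(candidate, archive)
fit = approximate_fitness(candidate, archive)
```

In the sketch, `approximate_fitness` plays the role the paper assigns to Act_Inh: a structure-in, actions-out surrogate whose output is evaluated by a single cheap rollout instead of a full controller-training run.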
ISSN: 1089-778X, 1941-0026
DOI: 10.1109/TEVC.2023.3327459