Data-driven prediction and analysis of chaotic origami dynamics
Published in: Communications Physics, 2020-09, Vol. 3 (1), Article 168
Main authors: , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Advances in machine learning have revolutionized capabilities in applications ranging from natural language processing to marketing to health care. Recently, machine learning techniques have also been employed to learn physics, but one of the formidable challenges is to predict complex dynamics, particularly chaos. Here, we demonstrate the efficacy of quasi-recurrent neural networks in predicting extremely chaotic behavior in multistable origami structures. While machine learning is often viewed as a “black box”, we conduct hidden layer analysis to understand how the neural network can process not only periodic, but also chaotic data in an accurate manner. Our approach shows its effectiveness in characterizing and predicting chaotic dynamics in a noisy environment of vibrations without relying on a mathematical model of origami systems. Therefore, our method is fully data-driven and has the potential to be used for complex scenarios, such as the nonlinear dynamics of thin-walled structures and biological membrane systems.
Predicting chaotic behaviour is challenging due to the sensitivity to initial conditions, noise, the environment, and unknown factors. Here, the authors apply quasi-recurrent neural networks to predict both periodic and chaotic dynamics of triangulated cylindrical origami cells, and provide an analysis of the hidden units’ distinctive responses.
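The quasi-recurrent neural network (QRNN) named in the abstract combines convolutional filters over time with a lightweight gated recurrence ("fo-pooling"). As an illustration only, and not the authors' implementation, the following is a minimal NumPy sketch of a single QRNN layer's forward pass applied to a chaotic logistic-map series; the weight initialization, layer sizes, and filter width are arbitrary assumptions for demonstration, and a trained readout would be needed for actual prediction.

```python
import numpy as np

def logistic_map(n, r=4.0, x0=0.2):
    """Generate a chaotic scalar time series from the logistic map x <- r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def qrnn_layer(x, Wz, Wf, Wo, k):
    """Forward pass of one QRNN layer with fo-pooling.

    x: (T, d_in) input sequence.
    Wz, Wf, Wo: (k * d_in, d_h) filters for a causal convolution of width k.
    Returns hidden states h of shape (T, d_h).
    """
    T, d_in = x.shape
    d_h = Wz.shape[1]
    # Causal (masked) convolution: pad k-1 zero timesteps on the left
    xp = np.vstack([np.zeros((k - 1, d_in)), x])
    h = np.zeros((T, d_h))
    c = np.zeros(d_h)
    for t in range(T):
        window = xp[t:t + k].reshape(-1)           # flattened convolution window
        z = np.tanh(window @ Wz)                   # candidate values
        f = 1.0 / (1.0 + np.exp(-(window @ Wf)))   # forget gate
        o = 1.0 / (1.0 + np.exp(-(window @ Wo)))   # output gate
        c = f * c + (1.0 - f) * z                  # fo-pooling recurrence
        h[t] = o * c
    return h

rng = np.random.default_rng(0)
series = logistic_map(200)
x = series.reshape(-1, 1)                          # (T, 1) input
k, d_h = 2, 8                                      # assumed filter width and hidden size
Wz, Wf, Wo = (0.5 * rng.standard_normal((k * 1, d_h)) for _ in range(3))
h = qrnn_layer(x, Wz, Wf, Wo, k)
# A linear readout on h[t] would be trained to predict series[t + 1].
print(h.shape)  # prints (200, 8)
```

Because the gates are computed by convolution rather than by a dense recurrent matrix, each timestep's gates can be precomputed in parallel; only the elementwise fo-pooling recurrence is sequential, which is the efficiency argument for QRNNs over standard RNNs.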
ISSN: 2399-3650
DOI: | 10.1038/s42005-020-00431-0 |