A generalised deep learning-based surrogate model for homogenisation utilising material property encoding and physics-based bounds

Bibliographic Details
Published in: Scientific Reports 2023-06, Vol. 13 (1), p. 9079, Article 9079
Main Authors: Nakka, Rajesh; Harursampath, Dineshkumar; Ponnusami, Sathiskumar A
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: The use of surrogate models based on Convolutional Neural Networks (CNNs) is increasing significantly in microstructure analysis and property prediction. One shortcoming of existing models is their limited ability to incorporate material information. In this context, a simple method is developed for encoding material properties into the microstructure image, so that the model learns material information in addition to the structure-property relationship. These ideas are demonstrated by developing a CNN model applicable to fibre-reinforced composite materials with fibre-to-matrix elastic modulus ratios between 5 and 250 and fibre volume fractions between 25 and 75%, spanning the practical range end-to-end. Learning convergence curves, with the mean absolute percentage error as the metric of interest, are used to find the optimal number of training samples and to demonstrate model performance. The generality of the trained model is showcased through its predictions on completely unseen microstructures sampled from the extrapolated domain of fibre volume fractions and elastic modulus contrasts. Additionally, to make the predictions physically admissible, models are trained by enforcing Hashin–Shtrikman bounds, which leads to enhanced model performance in the extrapolated domain.
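The Hashin–Shtrikman bounds mentioned in the abstract delimit the physically admissible range of effective elastic moduli for a two-phase composite. As a minimal illustrative sketch (not the authors' implementation, which addresses transversely isotropic fibre composites), the classical two-phase bounds on the effective bulk modulus of an isotropic mixture can be computed as follows; the phase properties and volume fractions in the example are hypothetical.

```python
def hs_bulk_bounds(K1, G1, K2, G2, c1):
    """Classical Hashin-Shtrikman bounds on the effective bulk modulus
    of a two-phase isotropic composite.

    K1, G1: bulk and shear moduli of phase 1 (e.g. the matrix)
    K2, G2: bulk and shear moduli of phase 2 (e.g. the fibre/inclusion)
    c1:     volume fraction of phase 1 (phase 2 fills the remainder)

    Assumes K2 >= K1 and G2 >= G1, so the returned pair is
    (lower_bound, upper_bound) in that order.
    """
    c2 = 1.0 - c1
    # Lower bound: stiff phase 2 dispersed in compliant phase 1.
    lower = K1 + c2 / (1.0 / (K2 - K1) + 3.0 * c1 / (3.0 * K1 + 4.0 * G1))
    # Upper bound: compliant phase 1 dispersed in stiff phase 2.
    upper = K2 + c1 / (1.0 / (K1 - K2) + 3.0 * c2 / (3.0 * K2 + 4.0 * G2))
    return lower, upper


# Hypothetical contrast of 10 between fibre and matrix moduli,
# 50% matrix volume fraction:
lo, hi = hs_bulk_bounds(1.0, 1.0, 10.0, 10.0, 0.5)
```

Any physically admissible prediction of the effective bulk modulus must fall inside `[lo, hi]`; constraining a surrogate's output to such an interval is one way to enforce admissibility of the kind the abstract describes.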
ISSN: 2045-2322
DOI: 10.1038/s41598-023-34823-3