Convolutional encoder–decoder network using transfer learning for topology optimization

Bibliographic Details
Published in: Neural Computing & Applications, 2024-03, Vol. 36 (8), pp. 4435-4450
Main Authors: Ates, Gorkem Can; Gorguluarslan, Recep M.
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: State-of-the-art deep neural networks have achieved great success as an alternative to topology optimization by eliminating the iterative framework of the optimization process. However, models with strong predictive capability require massive amounts of data, which can be time-consuming to generate, particularly for high-resolution structures. Transfer learning from pre-trained networks has shown promise in enhancing network performance on new tasks with smaller amounts of data. In this study, a U-net-based deep convolutional encoder–decoder network was developed to predict high-resolution (256 × 256) optimized structures using transfer learning and fine-tuning for topology optimization. First, the VGG16 network pre-trained on ImageNet was employed as the encoder for transfer learning. The decoder was then constructed from scratch, and the network was trained in two steps. Finally, models employing transfer learning were compared with models trained entirely from scratch across several core parameters, including different initial input iterations, numbers of fine-tuning epochs, and dataset sizes. The findings demonstrate that using transfer learning from the ImageNet-pretrained VGG16 network as the encoder can improve final prediction performance and, in some cases, alleviate structural discontinuity issues while reducing training time.
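
As a rough illustration of the architecture and training procedure the abstract describes, the PyTorch sketch below reuses an ImageNet-pretrained VGG16 as the encoder of a U-Net-style encoder–decoder and trains it in two steps. This is a minimal sketch assembled from the abstract alone, not the authors' published code: the block split points, decoder widths, learning rates, and the class name VGG16UNet are illustrative assumptions.

    # Minimal sketch (assumptions labeled): U-Net-style encoder-decoder with an
    # ImageNet-pretrained VGG16 encoder and a decoder built from scratch.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16, VGG16_Weights

    class VGG16UNet(nn.Module):
        def __init__(self):
            super().__init__()
            feats = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
            # Slice the pretrained VGG16 into its five conv blocks so the
            # decoder can receive skip connections (assumed split points).
            self.enc1 = feats[:4]     # 64 ch,  256x256
            self.enc2 = feats[4:9]    # 128 ch, 128x128
            self.enc3 = feats[9:16]   # 256 ch, 64x64
            self.enc4 = feats[16:23]  # 512 ch, 32x32
            self.enc5 = feats[23:30]  # 512 ch, 16x16

            def up(cin, cout):  # upsample, then refine with a conv
                return nn.Sequential(
                    nn.ConvTranspose2d(cin, cout, kernel_size=2, stride=2),
                    nn.Conv2d(cout, cout, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True))

            self.dec4 = up(512, 512)
            self.dec3 = up(512 + 512, 256)
            self.dec2 = up(256 + 256, 128)
            self.dec1 = up(128 + 128, 64)
            # One output channel: per-pixel material density of the structure.
            self.head = nn.Conv2d(64 + 64, 1, kernel_size=1)

        def forward(self, x):
            # Expects x of shape (N, 3, 256, 256); a single-channel
            # initial-iteration image can be repeated across three channels
            # to match VGG16's expected input.
            e1 = self.enc1(x)
            e2 = self.enc2(e1)
            e3 = self.enc3(e2)
            e4 = self.enc4(e3)
            e5 = self.enc5(e4)
            d4 = self.dec4(e5)
            d3 = self.dec3(torch.cat([d4, e4], dim=1))
            d2 = self.dec2(torch.cat([d3, e3], dim=1))
            d1 = self.dec1(torch.cat([d2, e2], dim=1))
            return torch.sigmoid(self.head(torch.cat([d1, e1], dim=1)))

    # Two-step training as described in the abstract (learning rates are
    # placeholders): first train the decoder with the pretrained encoder
    # frozen, then unfreeze everything and fine-tune at a lower rate.
    model = VGG16UNet()
    encoder = [model.enc1, model.enc2, model.enc3, model.enc4, model.enc5]
    for block in encoder:
        for p in block.parameters():
            p.requires_grad = False
    step1 = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3)
    # ... train the decoder ...
    for block in encoder:
        for p in block.parameters():
            p.requires_grad = True
    step2 = torch.optim.Adam(model.parameters(), lr=1e-5)
    # ... fine-tune end-to-end ...

The skip connections concatenated at each decoder stage are the standard U-net device for preserving fine geometry, which is one plausible mechanism behind the reduced structural discontinuities the abstract reports.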
ISSN: 0941-0643 (print); 1433-3058 (electronic)
DOI: 10.1007/s00521-023-09308-z