On the transferability of the Deep Galerkin Method for solving partial differential equations
Saved in:
Main Authors: | , |
---|---|
Format: | Conference Proceedings |
Language: | eng |
Subjects: | |
Online Access: | Full Text |
Summary: | In the current work we investigate the transfer of knowledge in the context of the Deep Galerkin Method, an algorithm which uses a certain deep neural network to solve partial differential equations. Specifically, we examine how well that network, pretrained on one type of problem, performs on a related problem. To this end, we focus on the Poisson partial differential equation and consider two test cases: transfer of learning (a) between problems admitting different oscillatory solutions of the same form and subject to the same homogeneous Dirichlet boundary conditions, and (b) between problems admitting oscillatory solutions of a different form and subject to different (non-constant) Dirichlet boundary conditions. In both cases we found a successful transfer of learning when performing the same number of training steps on the pretrained and non-pretrained networks. That is, pretraining a network on a simpler boundary value problem can significantly improve the performance, convergence, and accuracy of the network compared to a non-pretrained network with the same architecture and hyperparameters. This preliminary work motivates a deeper future study to further illuminate the underlying mechanisms underpinning this method. |
---|---|
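The transfer-of-learning protocol described in the summary can be illustrated with a minimal, hypothetical sketch: a small neural network is first trained on one oscillatory target, u1(x) = sin(πx), and its weights are then reused as the starting point for a related target, u2(x) = sin(2πx), using the same number of training steps as a network trained from scratch. Note one deliberate simplification: rather than minimizing the PDE residual of the Poisson equation as the Deep Galerkin Method does, this sketch fits the known solutions directly, preserving only the transfer-of-initialization aspect of the experiment. All architecture choices and hyperparameters here are illustrative, not those of the paper.

```python
# Illustrative transfer-learning experiment: pretrain a tiny MLP on one
# oscillatory function, then fine-tune the same weights on a related one.
# (Stand-in for the DGM network; supervised fit instead of PDE residual.)
import numpy as np

rng = np.random.default_rng(0)

def init(h=32):
    """Random weights for a one-hidden-layer tanh network R -> R."""
    return [rng.normal(0, 0.5, (1, h)), np.zeros(h),
            rng.normal(0, 0.5, (h, 1)), np.zeros(1)]

def forward(p, x):
    W1, b1, W2, b2 = p
    z = np.tanh(x @ W1 + b1)        # hidden activations
    return z @ W2 + b2, z

def step(p, x, y, lr=0.1):
    """One full-batch gradient-descent step on the mean-squared error."""
    W1, b1, W2, b2 = p
    out, z = forward(p, x)
    err = out - y
    # manual backpropagation through the two layers
    gW2 = z.T @ err / len(x)
    gb2 = err.mean(0)
    dz = (err @ W2.T) * (1 - z**2)  # tanh'(a) = 1 - tanh(a)^2
    gW1 = x.T @ dz / len(x)
    gb1 = dz.mean(0)
    p[0] -= lr * gW1; p[1] -= lr * gb1
    p[2] -= lr * gW2; p[3] -= lr * gb2
    return float((err**2).mean())

def train(p, target, steps=3000):
    x = np.linspace(0.0, 1.0, 64)[:, None]
    y = target(x)
    for _ in range(steps):
        loss = step(p, x, y)
    return loss

p = init()
train(p, lambda x: np.sin(np.pi * x))                       # pretraining task
loss_transfer = train(p, lambda x: np.sin(2 * np.pi * x))   # fine-tune on related task
loss_scratch = train(init(), lambda x: np.sin(2 * np.pi * x))  # same budget, no pretraining
print(loss_transfer, loss_scratch)
```

Comparing `loss_transfer` against `loss_scratch` after an identical number of steps mirrors the paper's evaluation: the pretrained initialization starts from weights already adapted to an oscillatory solution, so it typically needs fewer steps to reach a given accuracy on the related problem.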
ISSN: | 0094-243X 1551-7616 |
DOI: | 10.1063/5.0177426 |