Gradient recovery based a posteriori error estimator for the adaptive direct discontinuous Galerkin method
Published in: | Calcolo 2023-03, Vol. 60 (1), Article 18 |
Main authors: | , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Summary: | In this paper, we propose a gradient recovery method for the direct discontinuous Galerkin (DDG) method. A quadratic polynomial is obtained by applying local discrete least-squares fitting to the gradient of the numerical solution at certain sampling points. The recovered gradient is defined on a piecewise continuous space, and it may be discontinuous on the whole domain. Based on the recovered gradient, we introduce an a posteriori error estimator which takes the L^2 norm of the difference between the direct and post-processed approximations. Benchmark test problems with typical difficulties are carried out to illustrate the superconvergence of the recovered gradient and to validate the asymptotic exactness of the recovery-based a posteriori error estimator. Most of the test problems are taken from the US National Institute of Standards and Technology (NIST). |
ISSN: | 0008-0624 1126-5434 |
DOI: | 10.1007/s10092-023-00513-9 |
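The abstract above describes two ingredients: a local discrete least-squares fit of a quadratic polynomial to the sampled DDG gradient, and an error indicator given by the L^2 norm of the difference between the recovered and the direct gradient. The following is a minimal NumPy sketch of those two steps for one gradient component on one patch; the sampling points, quadrature weights, and toy data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): fit a quadratic polynomial to sampled
# gradient values by discrete least squares, then use the L2 norm of
# (recovered - direct) gradient as a local error indicator.
import numpy as np

def quadratic_basis(x, y):
    """Monomial basis of a 2D quadratic: 1, x, y, x^2, xy, y^2."""
    return np.array([np.ones_like(x), x, y, x**2, x*y, y**2]).T

def fit_recovered_gradient(points, grad_samples):
    """Least-squares fit of a quadratic to one gradient component on a patch.

    points       : (n, 2) coordinates of the sampling points
    grad_samples : (n,)   DG gradient component at those points
    returns      : coefficient vector c, recovered value = basis(x, y) @ c
    """
    A = quadratic_basis(points[:, 0], points[:, 1])   # (n, 6) design matrix
    c, *_ = np.linalg.lstsq(A, grad_samples, rcond=None)
    return c

def l2_error_indicator(points, weights, grad_direct, coeffs):
    """Quadrature approximation of || recovered - direct ||_{L2} on one element."""
    recovered = quadratic_basis(points[:, 0], points[:, 1]) @ coeffs
    return np.sqrt(np.sum(weights * (recovered - grad_direct) ** 2))

# Toy usage: du/dx of u = sin(pi x) sin(pi y) with noise standing in for the
# DDG gradient; recover it and evaluate the local indicator.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(12, 2))
exact_dudx = np.pi * np.cos(np.pi * pts[:, 0]) * np.sin(np.pi * pts[:, 1])
dg_dudx = exact_dudx + 0.05 * rng.standard_normal(12)   # stand-in for the DDG gradient
c = fit_recovered_gradient(pts, dg_dudx)
eta = l2_error_indicator(pts, np.full(12, 1.0 / 12), dg_dudx, c)
print("local indicator eta =", eta)
```

The snippet treats a single gradient component on a single patch; in an adaptive computation the same fit would be repeated patch by patch for each component and the element indicators combined to steer refinement. Using np.linalg.lstsq keeps the patch fit stable even when the sampling points are nearly degenerate, since it solves the least-squares problem via an SVD-based routine.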