Analysis of iterative ensemble smoothers for solving inverse problems



Bibliographic details
Published in: Computational geosciences 2018-06, Vol. 22 (3), p. 885-908
Author: Evensen, Geir
Format: Article
Language: English
Online access: Full text
Abstract: This paper examines the properties of the Iterated Ensemble Smoother (IES) and the Multiple Data Assimilation Ensemble Smoother (ES–MDA) for solving the history matching problem. The iterative methods are compared with the standard Ensemble Smoother (ES) to improve the understanding of the similarities and differences between them. We derive the three smoothers from Bayes' theorem for a scalar case, which allows us to compare the equations solved by the three methods and to better understand which assumptions are applied and what their consequences are. When working with a scalar model, it is possible to use a very large ensemble size, and we can construct the sample distributions for both priors and posteriors, as well as intermediate iterates. For a linear model, all three methods give the same result. For a nonlinear model, the iterative methods improve on the ES result, but the two iterative methods converge to different solutions, and it is not clear which should be the preferred choice. It is clear that the ensemble of cost functions used to define the IES solution does not represent an exact sampling of the posterior Bayesian probability density function. Also, the use of an ensemble representation for the gradient in IES introduces an additional approximation compared to using an exact analytic gradient. For ES–MDA, the convergence, as a function of an increasing number of uniform update steps, is studied for a very large ensemble size. We illustrate that ES–MDA converges to a solution that differs from the Bayesian posterior. The convergence is also examined using a realistic sample size to study the impact of the number of realizations relative to the number of update steps. We have run multiple ES–MDA experiments to examine the impact of using different schemes for choosing the lengths of the update steps, and we have tried to understand which properties of the inverse problem imply that a non-uniform update step length is beneficial. Finally, we have examined the smoother methods with a highly nonlinear model to explore their properties and limitations in more extreme situations.
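As a rough illustration of the uniform-step ES–MDA scheme the abstract describes, the following Python sketch applies a fixed number of inflated update steps to a scalar nonlinear model. The forward model, observation value, error variance, and step count are illustrative assumptions, not the paper's experimental setup; the update formula is the standard ES–MDA member update with observation-error inflation factor alpha, where the inverse inflation factors sum to one.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical weakly nonlinear forward model (illustrative only).
    return x + 0.2 * x**2

N = 100_000      # large ensemble, in the spirit of the paper's scalar experiments
R = 0.5**2       # observation error variance (assumed)
d = 1.0          # observed value (assumed)
Na = 4           # number of uniform MDA steps
alpha = Na       # uniform inflation: sum over steps of 1/alpha equals 1

x = rng.normal(0.0, 1.0, N)      # prior ensemble, N(0, 1)
for _ in range(Na):
    y = g(x)
    Cxy = np.cov(x, y)[0, 1]     # ensemble cross-covariance of state and prediction
    Cyy = np.var(y)              # ensemble variance of the prediction
    K = Cxy / (Cyy + alpha * R)  # scalar Kalman-type gain with inflated obs error
    # Perturb the observation with inflated noise, then update each member.
    d_pert = d + np.sqrt(alpha * R) * rng.normal(0.0, 1.0, N)
    x = x + K * (d_pert - y)

print(f"posterior mean ~ {x.mean():.3f}, posterior std ~ {x.std():.3f}")
```

Because the model is nonlinear, the resulting sample is only an approximation of the Bayesian posterior; as the abstract notes, ES–MDA converges to a distribution that differs from it even as the number of steps grows.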
ISSN: 1420-0597, 1573-1499
DOI: 10.1007/s10596-018-9731-y