On the quantification and efficient propagation of imprecise probabilities resulting from small datasets
Saved in:
Published in: Mechanical Systems and Signal Processing, 2018-01, Vol. 98, pp. 465-483
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary:
• Quantifies the true epistemic uncertainty in probability model form and parameter values.
• Retains a full multimodel probabilistic description of epistemic uncertainties created by small datasets.
• Simultaneously propagates many (thousands or more) probability models.
• Reduces the computational cost of multimodel uncertainty propagation by several orders of magnitude.
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model encapsulating the full uncertainty caused by lack of data, and consequently underestimate it. The result is a complete probabilistic description of both aleatory and epistemic uncertainty, achieved with a reduction in computational cost of several orders of magnitude. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) that we can place in statistical estimates of response when data are lacking.
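The workflow the abstract describes (multimodel inference to weight candidate distributions, then a single importance-sampling run reweighted under every plausible model) can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's implementation: the candidate families, the synthetic dataset, the placeholder response g(x) = x², and in particular the importance density, which here is simply the probability-weighted mixture of the fitted candidates, whereas the paper derives an optimal importance sampling density.

```python
# Minimal sketch (not the paper's implementation): multimodel inference via
# AICc/Akaike weights, then importance-sampling reweighting so that one
# sample set serves all plausible probability models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=5.0, scale=2.0, size=20)      # small synthetic dataset
candidates = [stats.lognorm, stats.gamma, stats.weibull_min]

# 1) Fit each candidate by maximum likelihood and score it with AICc
#    (the small-sample corrected Akaike information criterion).
n, fits, aicc = len(data), [], []
for dist in candidates:
    params = dist.fit(data, floc=0)                  # location fixed at 0
    k = len(params) - 1                              # free parameters (loc fixed)
    ll = np.sum(dist.logpdf(data, *params))
    aicc.append(2 * k - 2 * ll + 2 * k * (k + 1) / (n - k - 1))
    fits.append(params)

# 2) Akaike weights: the probability that each candidate is the best model
#    in the Kullback-Leibler sense.
delta = np.array(aicc) - np.min(aicc)
model_prob = np.exp(-0.5 * delta)
model_prob /= model_prob.sum()

# 3) An importance density representative of all plausible models; here the
#    probability-weighted mixture stands in for the paper's optimal density.
def mixture_pdf(x):
    return sum(w * d.pdf(x, *p)
               for w, d, p in zip(model_prob, candidates, fits))

m = 10_000
idx = rng.choice(len(candidates), size=m, p=model_prob)
x = np.empty(m)
for i, (dist, params) in enumerate(zip(candidates, fits)):
    sel = idx == i
    x[sel] = dist.rvs(*params, size=sel.sum(), random_state=rng)

g = x ** 2                                           # placeholder response model

# 4) Reweight the single sample set under every candidate model: all
#    plausible distributions are propagated at the cost of one run.
for dist, params, w in zip(candidates, fits, model_prob):
    iw = dist.pdf(x, *params) / mixture_pdf(x)       # importance weights
    print(f"{dist.name:12s} P(best)={w:.3f}  E[g(X)] ~ {np.mean(iw * g):.2f}")
```

The saving claimed in the highlights comes from step 4: the expensive response model is evaluated once on a single sample set, and the spread of the reweighted estimates across candidate models then exposes the epistemic uncertainty that a single fitted distribution would hide.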
ISSN: 0888-3270, 1096-1216
DOI: 10.1016/j.ymssp.2017.04.042