Relevant states and memory in Markov chain bootstrapping and simulation
Published in: | European journal of operational research 2017-01, Vol.256 (1), p.163-177 |
Main authors: | , , |
Format: | Article |
Language: | English |
Online access: | Full text |
Abstract:
• A new optimization-based technique for bootstrapping and simulating Markov chains is proposed.
• The relevant states and memory of a Markov chain are identified as the minimum-information-loss solution.
• Numerical applications are provided to validate the theoretical results.

Markov chain theory is proving to be a powerful approach to bootstrap and simulate highly nonlinear time series. In this work, we provide a method to estimate the memory of a Markov chain (i.e. its order) and to identify its relevant states. In particular, the choice of memory lags and the aggregation of irrelevant states are obtained by looking for regularities in the transition probabilities. Our approach is based on an optimization model. More specifically, we consider two competing objectives that a researcher will in general pursue when dealing with bootstrapping and simulation: preserving the "structural" similarity between the original and the resampled series, and ensuring a controlled diversification of the latter. A discussion based on information theory is developed to define the desirable properties for such optimal criteria. Two numerical tests are developed to verify the effectiveness of the proposed method.
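As a rough illustration of the bootstrapping setting the abstract describes (not the authors' optimization model), a minimal first-order Markov bootstrap can be sketched as follows: estimate transition probabilities from an observed state sequence, then resample a new path from the fitted chain. The state labels and helper names here are hypothetical.

```python
import random
from collections import defaultdict

def estimate_transitions(states):
    """Estimate first-order transition probabilities from a state sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for s, t in zip(states, states[1:]):
        counts[s][t] += 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

def bootstrap(states, length, seed=0):
    """Resample a path of the given length from the estimated chain."""
    rng = random.Random(seed)
    probs = estimate_transitions(states)
    path = [rng.choice(states)]
    for _ in range(length - 1):
        nxt = probs.get(path[-1])
        if not nxt:
            # State never observed as a predecessor: restart at random.
            path.append(rng.choice(states))
            continue
        symbols, weights = zip(*nxt.items())
        path.append(rng.choices(symbols, weights=weights)[0])
    return path

# Toy two-state series ("L" = low regime, "H" = high regime).
series = ["L", "L", "H", "L", "H", "H", "L", "L", "H", "L"]
sample = bootstrap(series, 8)
```

The paper's contribution goes beyond this sketch: it selects the memory (order) and aggregates irrelevant states by minimizing information loss, trading off structural similarity against diversification of the resampled series.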
ISSN: 0377-2217, 1872-6860
DOI: 10.1016/j.ejor.2016.06.006