Formal Error Bounds for the State Space Reduction of Markov Chains
Format: Article
Language: English
Abstract: We study the approximation of a Markov chain on a reduced state space, for
both discrete- and continuous-time Markov chains. In this context, we extend
the existing theory of formal error bounds for the approximated transient
distributions. As a special case, we consider aggregated (or lumped) Markov
chains, where the state space reduction is achieved by partitioning the state
space into macro states. In the discrete-time setting, we bound the stepwise
increment of the error, and in the continuous-time setting, we bound the rate
at which the error grows. In addition, the same error bounds can also be
applied to bound how far an approximated stationary distribution is from
stationarity. Subsequently, we compare these error bounds with relevant
concepts from the literature, such as exact and ordinary lumpability, as well
as deflatability and aggregatability. These concepts define stricter than
necessary conditions to identify settings in which the aggregation error is
zero. We also consider possible algorithms for finding suitable aggregations
for which the formal error bounds are low, and we analyse first experiments
with these algorithms on a range of different models.
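The aggregation described in the abstract, partitioning the state space into macro states and propagating a lumped chain, can be sketched in a few lines. The transition matrix, the partition, and the uniform disaggregation weights below are illustrative assumptions and not the paper's construction; the L1 error computed at the end is the quantity the paper's formal bounds control.

```python
import numpy as np

# A small discrete-time Markov chain on 4 states (rows sum to 1).
# The matrix is a made-up illustration, not an example from the paper.
P = np.array([
    [0.5, 0.3, 0.2, 0.0],
    [0.4, 0.4, 0.1, 0.1],
    [0.0, 0.2, 0.7, 0.1],
    [0.0, 0.1, 0.1, 0.8],
])

# Partition the state space into two macro states: {0, 1} and {2, 3}.
partition = [[0, 1], [2, 3]]
n, m = P.shape[0], len(partition)

# V collects micro states into macro states; U spreads macro mass
# uniformly over the states of each block (an assumed choice of weights).
V = np.zeros((n, m))
for j, block in enumerate(partition):
    V[block, j] = 1.0
U = (V / V.sum(axis=0)).T

# Aggregated (lumped) transition matrix on the macro state space.
P_agg = U @ P @ V

# Transient distribution after k steps: exact, projected onto macro
# states, versus propagated entirely with the aggregated chain.
pi0 = np.array([1.0, 0.0, 0.0, 0.0])
k = 5
exact_macro = pi0 @ np.linalg.matrix_power(P, k) @ V
approx_macro = (pi0 @ V) @ np.linalg.matrix_power(P_agg, k)

# L1 aggregation error after k steps.
err = np.abs(exact_macro - approx_macro).sum()
print(f"aggregation error after {k} steps: {err:.6f}")
```

When the chain is ordinarily lumpable with respect to the partition (each block's total transition probability into every other block is constant across the states of the block), this error is zero; the matrix above deliberately violates that condition, so the error is generally nonzero and the paper's bounds limit its stepwise growth.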
DOI: 10.48550/arxiv.2403.07618