Finite-state Markov Chains Obey Benford's Law

Bibliographic Details
Main Authors: Kaynar, Bahar; Berger, Arno; Hill, Theodore P.; Ridder, Ad
Format: Article
Language: English

Description
Summary: A sequence of real numbers (x_n) is Benford if the significands, i.e. the fraction parts in the floating-point representation of (x_n), are distributed logarithmically. Similarly, a discrete-time irreducible and aperiodic finite-state Markov chain with probability transition matrix P and limiting matrix P* is Benford if every component of both sequences of matrices (P^n - P*) and (P^{n+1} - P^n) is Benford or eventually zero. Using recent tools that established Benford behavior both for Newton's method and for finite-dimensional linear maps, via the classical theories of uniform distribution modulo 1 and Perron-Frobenius, this paper derives a simple sufficient condition (nonresonant) guaranteeing that P, or the Markov chain associated with it, is Benford. This result in turn is used to show that almost all Markov chains are Benford, in the sense that if the transition probabilities are chosen independently and continuously, then the resulting Markov chain is Benford with probability one. Concrete examples illustrate the various cases that arise, and the theory is complemented with several simulations and potential applications.
DOI: 10.48550/arxiv.1003.0562
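
The Benford criterion described in the summary can be illustrated numerically. The following Python sketch is not from the paper: the 2-state transition matrix, the focus on a single matrix entry, and all numerical values are illustrative assumptions. It relies on the fact that for a two-state chain P^n - P* = lambda^(n-1) (P - P*), where lambda is the second eigenvalue of P, works in log space to avoid underflow, and compares the empirical leading-digit frequencies with the Benford probabilities log10(1 + 1/d).

import numpy as np

# Hypothetical irreducible, aperiodic 2-state transition matrix (illustrative values).
P = np.array([[0.85, 0.15],
              [0.40, 0.60]])

# Closed forms for a 2-state chain: stationary probability of state 0 and the
# second eigenvalue of P (the first eigenvalue is 1).
pi0 = P[1, 0] / (P[0, 1] + P[1, 0])
lam = 1.0 - P[0, 1] - P[1, 0]

# For a 2-state chain, P^n - P* = lam**(n - 1) * (P - P*), so its (1,1) entry is
# c * lam**(n - 1) with c = P[0, 0] - pi0.  The leading digit of that quantity
# depends only on the fractional part of log10|c| + (n - 1) * log10|lam|.
c = P[0, 0] - pi0
N = 100_000
n = np.arange(1, N + 1)
frac = (np.log10(abs(c)) + (n - 1) * np.log10(abs(lam))) % 1.0
digits = np.floor(10.0 ** frac).astype(int)   # first significant digit, 1..9

observed = np.array([(digits == d).mean() for d in range(1, 10)])
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
print("digit  observed  Benford")
for d in range(1, 10):
    print(f"{d:5d}  {observed[d - 1]:8.4f}  {benford[d - 1]:8.4f}")

For this particular matrix, log10(0.45) is irrational, so the fractional parts equidistribute modulo 1 (Weyl's theorem) and the observed frequencies approach the Benford probabilities as N grows; a second eigenvalue whose base-10 logarithm is rational would break this equidistribution.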