Tuesday, January 1, 2013

1212.6908 (L. Bertini et al.)

From level 2.5 to level 2 large deviations for continuous time Markov chains

L. Bertini, A. Faggionato, D. Gabrielli
We recover the Donsker-Varadhan large deviations principle (LDP) for the empirical measure of a continuous time Markov chain on a countable (finite or infinite) state space from the joint LDP for the empirical measure and the empirical flow proved in [2].
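As a rough illustration of the contraction from level 2.5 to level 2 that the abstract alludes to (a sketch in assumed notation, not taken from the paper: r(x,y) denotes the jump rates, L the generator, mu the empirical measure, Q the empirical flow), the two rate functionals are, schematically,

% Level 2.5 joint rate functional for empirical measure mu and empirical flow Q,
% finite only on divergence-free flows, i.e. \sum_y Q(x,y) = \sum_y Q(y,x) for all x:
\[
  I_{2.5}(\mu,Q) \;=\; \sum_{x \neq y}
    \Big[\, Q(x,y)\,\log\frac{Q(x,y)}{\mu(x)\,r(x,y)}
        \;-\; Q(x,y) \;+\; \mu(x)\,r(x,y) \,\Big],
\]
% and the Donsker-Varadhan (level 2) rate functional is recovered by contracting
% over the flow variable, matching its classical variational form:
\[
  I_{2}(\mu) \;=\; \inf_{Q}\, I_{2.5}(\mu,Q)
  \;=\; \sup_{u > 0}\, \Big( -\!\int \frac{Lu}{u}\, d\mu \Big).
\]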
View original: http://arxiv.org/abs/1212.6908
