Figure 1: Rates of Transitions Between States in a Three-State Markov Chain
1.0 Introduction
This post, about Markov processes, does not have much to do with economics. Here I define how to approximate a continuous time Markov chain with a discrete time Markov chain. This mathematics is useful for one way of implementing computer simulations involving Markov chains. That is, I want to consider how to start with a continuous time model and synthesize a realization with a small, constant time step.
2.0 Continuous Time Markov Chains
Consider a stochastic process that, at any non-negative time t, is in one of N states. Assume this process satisfies the Markov property: the future history of the process after time t depends only on the state of the process at time t, independently of how the process arrived at that state. I consider here only processes with stationary probability distributions for state transitions and for times between transitions. A continuous time Markov chain is specified by a state transition matrix. In this section, I define such a matrix, as well as specifying two additional assumptions.
Formally, let $P_{i,j}$ denote the conditional probability that the next transition will be into state j, given that the process is in state i at time zero. (As seen below, in the notation adopted here it matters that these conditional probabilities are not a function of time.) Assume that, for each state, the next transition when the process is in that state is into a different state:
$$P_{i,i} = 0; \quad i = 0, 1, \ldots, N - 1$$
Further, assume that, for each state, the time to the next transition is drawn from an exponential distribution, with the rate of occurrence of state transitions depending only on the initial state:
$$F_{i,j}(t) = 1 - e^{-\lambda_i t}; \quad i, j = 0, 1, \ldots, N - 1$$
where $F_{i,j}(t)$ is the conditional probability that the next transition will be before time t, given that the chain is in state i at time zero and that the next transition will be into state j. In other words, $F_{i,j}(t)$ is the Cumulative Distribution Function (CDF) for the specified random variable. Under the above definitions, the stochastic process is a continuous time, finite state Markov chain.
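Nothing more is needed to simulate such a chain directly, without discretizing time: draw the holding time in the current state from an exponential distribution with rate $\lambda_i$, then draw the next state from row i of the state transition matrix. Here is a minimal sketch in Python with numpy; the rates and transition probabilities are made-up values for a three-state chain, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up parameters for a three-state chain:
lam = np.array([2.0, 1.0, 3.0])      # lambda_i: rate of leaving state i
P = np.array([[0.0, 0.5, 0.5],       # P[i, j]: probability that the next
              [0.4, 0.0, 0.6],       # transition from state i is into j;
              [0.7, 0.3, 0.0]])      # the diagonal is zero by assumption

def next_transition(i):
    """Sample the holding time in state i and the state jumped to."""
    holding_time = rng.exponential(1.0 / lam[i])  # Exponential with rate lambda_i
    j = rng.choice(len(lam), p=P[i])              # categorical draw on row i
    return holding_time, j

t, state = 0.0, 0
for _ in range(5):
    dt, state_next = next_transition(state)
    t += dt
    print(f"t = {t:.3f}: state {state} -> state {state_next}")
    state = state_next
```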
Let $P_{i,j}(t)$ be the conditional probability that the chain is in state j at time t, given that the chain is in state i at time zero. These conditional probabilities satisfy Kolmogorov's forward equation:
$$\frac{dP(t)}{dt} = P(t)\, Q,$$
where P(t) is the matrix with elements $P_{i,j}(t)$ and the transition rate matrix Q is defined to be:

$$q_{i,i} = -\lambda_i; \qquad q_{i,j} = \lambda_i P_{i,j}, \quad i \neq j$$
The elements in each row of the transition rate matrix sum to zero. Kolmogorov's forward equation can be expressed in scalar form:

$$\frac{dP_{i,j}(t)}{dt} = \sum_{k=0}^{N-1} P_{i,k}(t)\, q_{k,j}; \quad i, j = 0, 1, \ldots, N - 1$$
The above equation applies to continuous time Markov chains with a countably infinite number of states only under certain special conditions.
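For the finite state case, the solution of the forward equation with initial condition P(0) = I is the matrix exponential, $P(t) = e^{Qt}$. The sketch below, using scipy, checks the forward equation numerically with a finite difference; the numerical values in Q are merely illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative transition rate matrix; each row sums to zero.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.4, -1.0,  0.6],
              [ 2.1,  0.9, -3.0]])

t, dt = 0.5, 1e-6
P_t = expm(Q * t)    # P(t) = exp(Q t) solves dP/dt = P(t) Q with P(0) = I

# Finite-difference check of the forward equation at time t:
deriv = (expm(Q * (t + dt)) - P_t) / dt
print(np.max(np.abs(deriv - P_t @ Q)))   # residual is of order dt
```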
Steady state probabilities of this Markov chain satisfy:
$$\mathbf{p}\, Q = \mathbf{0},$$
where p is a row vector in which each element is the steady-state probability that the chain is in the corresponding state. Since the rows of Q sum to zero, this system alone is singular; the steady state probabilities are pinned down by the additional requirement that the elements of p sum to unity.
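One way to compute p, sketched below with numpy, replaces one scalar equation in p Q = 0 with the normalization condition; the numerical values in Q are again merely illustrative.

```python
import numpy as np

# Illustrative transition rate matrix; each row sums to zero.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.4, -1.0,  0.6],
              [ 2.1,  0.9, -3.0]])

# p Q = 0 is a singular system. Replace one equation with the
# normalization p_0 + p_1 + ... = 1 to pin down a unique solution.
M = Q.T.copy()
M[-1, :] = 1.0
b = np.zeros(len(Q))
b[-1] = 1.0
p = np.linalg.solve(M, b)
print(p)        # steady state probabilities
print(p @ Q)    # residual; should be numerically zero
```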
3.0 Discrete Time Approximation
A discrete time Markov chain is specified by a state transition matrix A, where $a_{i,j}$ is the probability that the chain will transition in a time step from state i to state j, given that the chain is in state i at the start of the time step. Steady state probabilities for a discrete time Markov chain satisfy:
$$\mathbf{p}\, A = \mathbf{p}$$
Note how this equation contrasts with the corresponding equation relating steady state probabilities to the transition rate matrix of a continuous time Markov chain. The two are consistent: p A = p can be rewritten as p(A - I) = 0, and the discrete time approximation developed below has A approximately equal to I + hQ for a small time step h.
Let the time step h be small enough that the probability of the continuous time Markov chain undergoing two or more transitions in a single time step is negligible. In other words, the following probability, calculated from a Poisson distribution, is close to unity for all states i:
$$P(\text{0 or 1 transitions in time } h \mid \text{chain in state } i \text{ at time } 0) = (1 + \lambda_i h)\, e^{-\lambda_i h}$$
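In a program, one might evaluate this probability for candidate time steps and shrink h until the worst case over the states is as close to unity as desired. A small sketch, with made-up rates:

```python
import numpy as np

def prob_at_most_one_transition(lam, h):
    """P(0 or 1 transitions in time h), per state, from a Poisson distribution."""
    return (1.0 + lam * h) * np.exp(-lam * h)

lam = np.array([2.0, 1.0, 3.0])   # made-up rates of leaving each state
for h in (0.1, 0.01, 0.001):
    worst = prob_at_most_one_transition(lam, h).min()
    print(f"h = {h}: worst-case probability = {worst:.6f}")
```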
The probability that the chain remains in a given state for a time step is the probability that no transitions occur during that time step, given the state of the chain at the start of the time step. This probability is also found from a Poisson distribution:
$$a_{i,i} = e^{-\lambda_i h} = e^{q_{i,i} h}; \quad i = 0, 1, \ldots, N - 1$$
The probability that the chain transitions to state j, given the chain is in state i at the start of the time step, is the product of:
- The probability that a transition occurs during that time step, and
- The conditional probability that the next transition will be into state j, given the chain is in state i at the start of the time step.
The following equation specifies this probability:
$$a_{i,j} = (1 - a_{i,i})\, P_{i,j} = (1 - a_{i,i})\, \frac{q_{i,j}}{-q_{i,i}}; \quad i \neq j$$
These equations allow one to write a computer program to synthesize a realization of a continuous time, finite state Markov chain, given its parameters. Such a program is based on a discrete time approximation; a sketch follows.
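This is a minimal sketch in Python with numpy. It constructs A from a transition rate matrix Q with the two equations above and then steps the discrete time chain; the numerical values in Q are merely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def discretize(Q, h):
    """State transition matrix A of the discrete time approximation."""
    n = Q.shape[0]
    A = np.empty((n, n))
    for i in range(n):
        A[i, i] = np.exp(Q[i, i] * h)   # a_ii = exp(q_ii h) = exp(-lambda_i h)
        for j in range(n):
            if j != i:
                A[i, j] = (1.0 - A[i, i]) * Q[i, j] / (-Q[i, i])
    return A

def synthesize(A, n_steps, state=0):
    """Synthesize a realization of the discrete time chain."""
    path = [state]
    for _ in range(n_steps):
        state = rng.choice(A.shape[0], p=A[state])
        path.append(state)
    return path

# Illustrative transition rate matrix; each row sums to zero.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.4, -1.0,  0.6],
              [ 2.1,  0.9, -3.0]])
A = discretize(Q, h=0.01)
print(A)                  # rows of A sum to one
print(synthesize(A, 20))  # a short realization, one state per time step
```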
4.0 An Example
Consider a three-state, continuous time Markov chain. Figure 1 shows the rates of transitions between the various states, with $\lambda_{i,j}$ denoting the rate of transitions from state i to state j. The transition rate matrix is:

$$Q = \begin{bmatrix} -(\lambda_{0,1} + \lambda_{0,2}) & \lambda_{0,1} & \lambda_{0,2} \\ \lambda_{1,0} & -(\lambda_{1,0} + \lambda_{1,2}) & \lambda_{1,2} \\ \lambda_{2,0} & \lambda_{2,1} & -(\lambda_{2,0} + \lambda_{2,1}) \end{bmatrix}$$
To discretize time, choose a small time step h such that, for all states i, the following probabilities are approximately unity:
$$P(\text{0 or 1 transitions in time } h \mid \text{chain in state } 0 \text{ at time } 0) = [1 + (\lambda_{0,1} + \lambda_{0,2}) h]\, e^{-(\lambda_{0,1} + \lambda_{0,2}) h}$$

$$P(\text{0 or 1 transitions in time } h \mid \text{chain in state } 1 \text{ at time } 0) = [1 + (\lambda_{1,0} + \lambda_{1,2}) h]\, e^{-(\lambda_{1,0} + \lambda_{1,2}) h}$$

$$P(\text{0 or 1 transitions in time } h \mid \text{chain in state } 2 \text{ at time } 0) = [1 + (\lambda_{2,0} + \lambda_{2,1}) h]\, e^{-(\lambda_{2,0} + \lambda_{2,1}) h}$$
The state transition matrix A for the discrete time Markov chain then follows from the equations in Section 3.0. The first row, for example, is:

$$a_{0,0} = e^{-(\lambda_{0,1} + \lambda_{0,2}) h}, \quad a_{0,1} = (1 - a_{0,0})\, \frac{\lambda_{0,1}}{\lambda_{0,1} + \lambda_{0,2}}, \quad a_{0,2} = (1 - a_{0,0})\, \frac{\lambda_{0,2}}{\lambda_{0,1} + \lambda_{0,2}}$$

The other rows are constructed analogously.
I have not tested the above with concrete values for a continuous time Markov chain.
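A sketch with hypothetical concrete values (my own invention; the actual rates in Figure 1 are not repeated here) would look something like the following, which also checks that the steady state probabilities of the discretized chain approximately match those of the continuous time chain.

```python
import numpy as np

# Hypothetical rates for the three-state chain; these are invented
# values for illustration, not taken from Figure 1.
rates = {(0, 1): 1.0, (0, 2): 2.0,
         (1, 0): 0.5, (1, 2): 1.5,
         (2, 0): 2.5, (2, 1): 0.5}
Q = np.zeros((3, 3))
for (i, j), lam_ij in rates.items():
    Q[i, j] = lam_ij
np.fill_diagonal(Q, -Q.sum(axis=1))   # each row of Q sums to zero

# Check that h is small enough: the worst case is the fastest exit rate.
h = 0.001
worst_rate = (-np.diag(Q)).max()
print((1 + worst_rate * h) * np.exp(-worst_rate * h))  # close to unity

# Discretize with the equations of Section 3.0.
A = np.zeros((3, 3))
for i in range(3):
    A[i, i] = np.exp(Q[i, i] * h)
    for j in range(3):
        if j != i:
            A[i, j] = (1.0 - A[i, i]) * Q[i, j] / (-Q[i, i])

# Steady state probabilities of both chains should approximately agree.
M = Q.T.copy(); M[-1, :] = 1.0
p_cont = np.linalg.solve(M, np.array([0.0, 0.0, 1.0]))
M2 = A.T - np.eye(3); M2[-1, :] = 1.0
p_disc = np.linalg.solve(M2, np.array([0.0, 0.0, 1.0]))
print(p_cont)
print(p_disc)
```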
Reference
- S. M. Ross (1970). Applied Probability Models with Optimization Applications. San Francisco: Holden-Day.