Markov Chains
Part 4: Summary
Definitions:
- A Markov (or transition) matrix is any matrix in which all entries are nonnegative and all columns sum to 1. Thus, an n x n Markov matrix gives the probability of movement from any one of n states to any of the n states (including remaining in the same state).
- A vector whose coordinates are nonnegative and sum to 1 is called a probability vector. Thus, an n-dimensional probability vector gives the chances of occurrence of each one of n events.
- If p_0 is a probability vector and A is a transition matrix, then the sequence p_0, p_1, p_2, p_3, ..., where p_k = A^k p_0 for k = 1, 2, 3, ..., is called a Markov chain. If Ap = p for some probability vector p, then p is called a steady-state vector.
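As a concrete illustration, the sketch below checks both definitions numerically and then generates the first few vectors of a Markov chain. It assumes NumPy, and the 2 x 2 matrix values are hypothetical, not one of the module's examples.

```python
import numpy as np

# Hypothetical 2 x 2 transition matrix: entries are nonnegative
# and each column sums to 1, so A is a Markov matrix.
A = np.array([[0.9, 0.5],
              [0.1, 0.5]])
print((A >= 0).all() and np.allclose(A.sum(axis=0), 1.0))  # True

# p_0 is a probability vector: nonnegative coordinates that sum to 1.
p = np.array([1.0, 0.0])

# The Markov chain p_0, p_1, p_2, ..., where p_k = A^k p_0:
for k in range(1, 9):
    p = A @ p          # one step of the chain
    print(k, p)
```

For this particular A the printed vectors settle near [0.833, 0.167], which previews the steady-state computations summarized next.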
We saw two ways that we can try to find a steady-state vector (both are sketched numerically after this list):
- We can compute the limiting value of a matrix expression. In your worksheet, describe that process and how to interpret the results.
- We can solve a matrix equation. Write the equation in your worksheet. What is the side condition that makes the solution unique?
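The sketch below, again assuming NumPy and reusing the hypothetical matrix A from above, shows one way each computation can be carried out; treat it as a numerical check on your worksheet answers rather than as the module's own method.

```python
import numpy as np

A = np.array([[0.9, 0.5],
              [0.1, 0.5]])   # hypothetical matrix from the sketch above
n = A.shape[0]

# Approach 1: limiting value of a matrix expression.  For large k, the
# columns of A^k approach the steady-state vector (when the chain converges).
print(np.linalg.matrix_power(A, 50))

# Approach 2: solve the matrix equation Ap = p, i.e. (A - I)p = 0, together
# with the side condition that the entries of p sum to 1, which makes the
# solution unique.
M = np.vstack([A - np.eye(n), np.ones((1, n))])
b = np.append(np.zeros(n), 1.0)
p, *_ = np.linalg.lstsq(M, b, rcond=None)
print(p)   # approximately [0.8333, 0.1667]
```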
We studied three examples in this module. In two of the examples, the Markov chain converged to the steady-state vector. In the third example, we saw a Markov chain that did not converge; however, the Markov matrix did have a steady-state vector.
- What characteristic of the third transition matrix made this Markov chain behave differently from the first two? (A matrix producing this kind of behavior is sketched below.)
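For comparison, here is a hypothetical matrix of the non-converging type (not necessarily the module's third example): its zero entries are arranged so that the chain cycles between states rather than settling down.

```python
import numpy as np

# A Markov matrix that swaps the two states at every step.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

p = np.array([1.0, 0.0])
for k in range(1, 7):
    p = A @ p
    print(k, p)   # oscillates between [0, 1] and [1, 0]; no convergence

# Yet A still has a steady-state vector: Ap = p for p = [0.5, 0.5].
print(A @ np.array([0.5, 0.5]))
```

The strictly positive matrix in the earlier sketches allows no such cycling, which is why its Markov chain converges.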
modules at math.duke.edu