Markov Chains
Part 3: Positive Markov Matrices
1. Given any transition matrix A, you may be tempted to conclude that, as k approaches infinity, A^k will approach a steady state. To see that this is not true, enter the matrix A and the initial vector p_0 defined in the worksheet, and compute enough terms of the chain p_1, p_2, p_3, ... to see a pattern. Explain why this chain will not approach a steady state. (A sketch of this computation appears after this list.)
2. Interpret the matrix A in the context of the television viewers from Part 1. What does the matrix say about the behavior of the viewers?
3. Does the Markov matrix in step 1 have a steady-state vector? [Hint: Use the method you learned in Part 1, Step 5 to try to find one.] If it does, what is it? (A second sketch after this list shows one way to check.)
4. Interpret your answer to step 3 in terms of television viewers and their behavior.
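For step 1, a minimal numerical sketch is given below. The worksheet's matrix A and vector p_0 are not reproduced in this excerpt, so the 2-by-2 matrix and starting vector here are hypothetical stand-ins: a transition matrix containing zero entries whose powers never settle down.

```python
import numpy as np

# Hypothetical stand-in for the worksheet's A: a transition matrix
# (columns sum to 1) that contains zero entries.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
p = np.array([0.8, 0.2])   # hypothetical initial probability vector p_0

# Compute p_1, p_2, p_3, ... and watch for a pattern.
for k in range(1, 9):
    p = A @ p
    print(f"p_{k} = {p}")
```

With this stand-in the output alternates between two vectors forever, so A^k p_0 has no limit; the worksheet asks you to describe and explain whatever pattern its own A produces.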
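For step 3, one generic way to look for a steady-state vector is sketched below: solve (A - I)p = 0 together with the requirement that the entries of p sum to 1. This is offered only as a check, not as the specific method from Part 1, Step 5 (which is not reproduced here), and it again uses the hypothetical stand-in matrix from the previous sketch.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # same hypothetical stand-in as above

n = A.shape[0]
# A steady-state vector p satisfies (A - I) p = 0 with sum(p) = 1.
# Append a row of ones to (A - I) and a 1 to the right-hand side so the
# least-squares solve enforces both conditions at once.
M = np.vstack([A - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
p_steady, *_ = np.linalg.lstsq(M, b, rcond=None)

print("candidate steady-state vector:", p_steady)
print("satisfies A p = p?", np.allclose(A @ p_steady, p_steady))
```

Note that a steady-state vector can exist even when the chain in step 1 fails to approach it, which is worth keeping in mind when you interpret your answer in step 4.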
Comments
When we study eigenvalues and eigenvectors, we will learn some of the mathematical reasons for the results that you saw in this lab. In the process we will see how to do these computations theoretically (that is, without approximating limits by calculator or computer). We will also be able to see some of the mathematical justification for the following theorem, which tells why the Markov chains in Parts 1 and 2 approached a steady state and the one in this part did not.
Theorem
If A is a positive transition matrix (no zero entries), and p_0 is any initial probability vector, then A^k p_0 approaches the steady-state vector as k goes to infinity.
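A small numerical illustration of the theorem is sketched below, using a hypothetical positive transition matrix (not one from the lab): for a positive column-stochastic A, repeated multiplication drives any initial probability vector toward the same steady-state vector.

```python
import numpy as np

# Hypothetical positive transition matrix: every entry nonzero, columns sum to 1.
A = np.array([[0.7, 0.2],
              [0.3, 0.8]])

# Two different initial probability vectors.
for p0 in (np.array([1.0, 0.0]), np.array([0.25, 0.75])):
    p = p0
    for _ in range(50):        # approximate the limit of A^k p_0
        p = A @ p
    print(f"starting from {p0}: A^k p_0 is approximately {p}")
```

Both starting vectors end up at essentially the same vector, approximately (0.4, 0.6) for this particular A, which is the behavior the theorem guarantees for every positive transition matrix.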