

Suppose there are two regional news shows in the local television viewing area, and we have conducted a survey of viewers to determine which channel the viewers have been watching. The first survey revealed that 40% of the viewers watched station X and 60% watched station Y. Subsequent surveys revealed that each week 15% of the X viewers switched to station Y and 5% of the Y viewers switched to station X.
We will use transition matrices and Markov chains to make a prediction about the future television market from this information. Our conclusions at the end of the example may be more meaningful if you take time now to make a guess as to what proportions of the population will be watching each station in the long run. We assume that the 15% and 5% switching trends will continue indefinitely.
Let p_{k} be a two-dimensional column vector whose entries give the proportion of people who watch station X and the proportion who watch station Y, in that order, during week k. Thus, p_{0} = [0.4, 0.6]^{T}. From the assumptions above, we see that the proportion of people watching station X in week k = 1 will be 85% of the X viewers in week k = 0 (which is 85% of 40%, or 34%) plus 5% of the Y viewers in week k = 0 (which is 5% of 60%, or 3%), for a total of 37%:
0.85(0.4) + 0.05(0.6) = 0.37.
Similarly,
0.15(0.4) + 0.95(0.6) = 0.63,
is the proportion of people watching station Y in week k = 1. So, we have computed p_{1} = [0.37, 0.63]^{T}. Similarly, we could compute the sequence of vectors p_{2}, p_{3}, ..., but this process would be quite tedious if we continued in the manner above. Andrei Markov (1856–1922) described a much more efficient way of handling such a problem.
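The week-by-week arithmetic above can be sketched in a few lines of Python (a minimal sketch; the variable names x and y are ours, standing for the proportions watching each station):

```python
# Proportions of viewers watching station X and station Y in week 0,
# from the initial survey.
x, y = 0.4, 0.6

# One weekly transition: 85% of X viewers stay and 5% of Y viewers
# switch to X; 15% of X viewers switch to Y and 95% of Y viewers stay.
x, y = 0.85 * x + 0.05 * y, 0.15 * x + 0.95 * y

print(round(x, 4), round(y, 4))  # week 1 proportions
```

Running this reproduces the 37% and 63% figures computed above; repeating the update line gives p_{2}, p_{3}, and so on, which is exactly the tedium the matrix formulation below removes.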
First, we think of the movement of viewers as being described by the following array, which gives the weekly proportion of viewers who change from one station to another:
                From:
               X       Y
  To:   X    0.85    0.05
        Y    0.15    0.95
The matrix A of entries in the table is called the transition matrix (or Markov matrix) for this problem.
Looking carefully at the definition of matrix multiplication, we see that p_{1} = Ap_{0}. Indeed, for k = 0, 1, 2, ..., we have p_{k+1} = Ap_{k}. Each p_{k} is called a probability vector, and the sequence p_{0}, p_{1}, p_{2}, p_{3}, ... is called a Markov chain.
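The recurrence p_{k+1} = Ap_{k} is easy to iterate once A is stored as a matrix. A minimal sketch using NumPy (an implementation choice of ours, not part of the text):

```python
import numpy as np

# Transition matrix A from the table: column j gives where the viewers
# of station j go each week, so each column sums to 1.
A = np.array([[0.85, 0.05],
              [0.15, 0.95]])

p = np.array([0.4, 0.6])  # p_0, the initial survey proportions

# Generate the first few probability vectors of the Markov chain.
for k in range(1, 4):
    p = A @ p
    print(f"p_{k} =", np.round(p, 4))
```

Each pass through the loop multiplies by A once, so computing p_{k} costs k matrix-vector products instead of the component-by-component arithmetic done by hand above; the printed vectors begin with p_{1} = [0.37, 0.63]^{T}, matching the earlier computation.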

