Suppose there are two regional news shows in the local television viewing area, and we have conducted a survey of viewers to determine which channel the viewers have been watching. The first survey revealed that 40% of the viewers watched station X and 60% watched station Y. Subsequent surveys revealed that each week 15% of the X viewers switched to station Y and 5% of the Y viewers switched to station X.
We will use transition matrices and Markov chains to make a prediction about the future television market from this information. Our conclusions at the end of the example may be more meaningful if you take time now to make a guess as to what proportions of the population will be watching each station in the long run. We assume that the 15%-5% switching trends will continue indefinitely.
Let pk be a two-dimensional column vector whose entries give the proportion of people who watch station X and the proportion who watch station Y, in that order, during week k. Thus, p0 = [0.4,0.6]T. From the assumptions above, we see that the proportion of people watching station X in week k = 1 will be 85% of the X viewers in week k = 0 (which is 85% of 40%, or 34%) plus 5% of the Y viewers in week k = 0 (which is 5% of 60%, or 3%) for a total of 37%:
0.85(0.4) + 0.05(0.6) = 0.37.
Similarly,
0.15(0.4) + 0.95(0.6) = 0.63,
is the proportion of people watching station Y in week k = 1. So, we have computed p1 = [0.37,0.63]T. Similarly, we could compute the sequence of vectors p2, p3, ... , but this process would be quite tedious if we continued in the manner above. Andrei Markov (1856-1922) described a much more efficient way of handling such a problem.
First, we think of the movement of viewers as being described by the following array, which gives the weekly proportion of viewers who change from one station to another:
                From
               X       Y
    To:   X   0.85    0.05
          Y   0.15    0.95
The matrix A of entries in the table is called the transition matrix (or Markov matrix) for this problem.
Looking carefully at the definition of matrix multiplication, we see that p1 = Ap0. Indeed, for k = 0, 1, 2, ..., we have pk+1 = Apk. Each pk is called a probability vector, and the sequence p0, p1, p2, p3, ... is called a Markov chain.
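The recurrence pk+1 = Apk makes it easy to follow the market far into the future by repeated matrix-vector multiplication. The sketch below iterates one year of weekly transitions (the 52-week horizon is our choice, not part of the problem statement) and suggests an answer to the long-run question posed earlier:

```python
import numpy as np

# Transition matrix from the table: column j gives where that station's
# viewers go each week, so each column sums to 1.
A = np.array([[0.85, 0.05],
              [0.15, 0.95]])

p = np.array([0.4, 0.6])     # p0: initial proportions watching X and Y

for k in range(52):          # apply p_{k+1} = A p_k for 52 weeks
    p = A @ p

print(p)                     # approaches the steady-state vector [0.25, 0.75]
```

The iterates settle toward [0.25, 0.75]: in the long run, one quarter of the viewers watch station X, regardless of the 40%-60% starting split.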