Suppose A and B are events in a sample space. The knowledge that an outcome is in event A may change your estimate of the likelihood that the outcome is in event B. For example, suppose the experiment is rolling two fair dice, event A is "the sum of the dice is greater than 10," and event B is "at least one of the dice is a 5." Then knowing that the outcome is in event A increases the likelihood that the outcome is in event B. Once we know that the outcome is in A, we have a new experiment with the sample space {(5,6), (6,5), (6,6)}. Each of these outcomes is equally likely. So the probability that at least one die is a 5, given that the sum is greater than 10, is 2/3.
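The counting argument above can be checked by brute-force enumeration. The sketch below (in Python, not part of the original module) lists all 36 equally likely outcomes of two dice and computes the conditional probability directly:

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

# Event A: the sum of the dice is greater than 10.
A = [o for o in outcomes if sum(o) > 10]

# Within A, the outcomes also in event B: at least one die is a 5.
B_given_A = [o for o in A if 5 in o]

print(A)                        # [(5, 6), (6, 5), (6, 6)]
print(len(B_given_A) / len(A))  # 2/3
```

Only 3 of the 36 outcomes land in A, and 2 of those 3 contain a 5, which reproduces the probability 2/3.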
On the other hand, if the experiment is flipping a fair coin twice, knowing that the first flip is a head (event A) does not change the likelihood that the second flip is a tail (event B). Here the new sample space is {(H,H), (H,T)}. Again, both outcomes are equally likely, so the probability that the second flip is a tail is 1/2 -- the same probability we would assign without the knowledge about A.
Informally, two events A and B are said to be independent if knowing that an outcome is in event A does not change the likelihood that the outcome is in event B and vice versa.
The probability that a person chosen at random will be both taller than 5 ft. 7 in. and a male may at first appear to be 1/4. After all, half the population is taller than 5 ft. 7 in. and half of that group is male, so 1/2 of 1/2 is 1/4. But since males on average are taller than females, it is wrong to assume that 1/2 of the 5 ft. 7 in. and taller group is male. Knowing whether the person chosen is a male influences the likelihood (probability) that the person is 5 ft. 7 in. or taller.
Suppose that, for an experiment, event A has probability 1/4 and event B has probability 1/3. If events A and B are independent, then in a large number of instances of this experiment, approximately 1/4 of the outcomes will be in event A. Of this 1/4, approximately 1/3 will also be in event B. So the probability of the event A and B is just the product of the two probabilities, (1/4)(1/3) = 1/12.
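To make this concrete, here is a small check (in Python, with a model chosen for illustration and not taken from the module): draw one of 4 equally likely card suits and roll one fair die, two physically independent actions with P(A) = 1/4 and P(B) = 1/3.

```python
from itertools import product
from fractions import Fraction

# A concrete model (an assumption for illustration): draw one card suit
# and roll one fair die. The two actions are physically independent.
suits = ["hearts", "spades", "clubs", "diamonds"]
rolls = range(1, 7)
sample_space = list(product(suits, rolls))  # 24 equally likely outcomes

A = [(s, r) for s, r in sample_space if s == "hearts"]  # P(A) = 6/24 = 1/4
B = [(s, r) for s, r in sample_space if r <= 2]         # P(B) = 8/24 = 1/3

both = [o for o in sample_space if o in A and o in B]
print(Fraction(len(both), len(sample_space)))  # 1/12
```

Counting confirms the product rule: only 2 of the 24 equally likely outcomes are in both events, and 2/24 = 1/12 = (1/4)(1/3).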
This observation is taken as the definition of the independence of two events.
Definition: Suppose we are considering the outcomes of a particular experiment. If A and B are events (subsets of the sample space of outcomes), we say A and B are independent if

P(A and B) = P(A) P(B).
Example: Suppose the experiment is flipping a fair coin three times. The sample space consists of eight equally likely outcomes:

(H,H,H) | (H,H,T) | (H,T,H) | (H,T,T) |
(T,H,H) | (T,H,T) | (T,T,H) | (T,T,T) |
Let A be the event that a heads appears on the first flip, and let B be the event that a heads comes up on the third flip. Each event contains 4 of the 8 equally likely outcomes, so P(A) = P(B) = 1/2, while the event A and B is {(H,H,H), (H,T,H)}, with probability 2/8 = 1/4. Since 1/4 = (1/2)(1/2) = P(A) P(B), the events A and B are independent.
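The product condition for these two events can be verified by enumerating the eight outcomes; a short sketch in Python (not part of the original module):

```python
from itertools import product
from fractions import Fraction

# The eight equally likely outcomes of three fair coin flips.
space = list(product("HT", repeat=3))

A = [o for o in space if o[0] == "H"]  # heads on the first flip
B = [o for o in space if o[2] == "H"]  # heads on the third flip
AandB = [o for o in space if o in A and o in B]

pA = Fraction(len(A), len(space))       # 1/2
pB = Fraction(len(B), len(space))       # 1/2
pAB = Fraction(len(AandB), len(space))  # 2/8 = 1/4

print(pAB == pA * pB)  # True: A and B are independent
```

The same enumeration would show that A and the event "at least two heads" are not independent, since knowing A raises the chance of two or more heads.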
Independence and the Gambler's Fallacy.
A lack of appreciation of the concept of independence lies at the heart of the mistaken belief that, after a run of bad luck, a gambler's luck is due to change.
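A quick simulation makes the point. The sketch below (in Python, an illustration added here rather than the module's own) flips a fair coin many times and looks at the flip immediately following each run of three tails. If heads were ever "due," it would appear more than half the time; it does not.

```python
import random

random.seed(1)  # fixed seed so the experiment is repeatable

# Simulate many independent flips of a fair coin.
flips = [random.choice("HT") for _ in range(200_000)]

# Collect the flip that immediately follows each run of three tails.
after_run = [flips[i + 3] for i in range(len(flips) - 3)
             if flips[i:i + 3] == ["T", "T", "T"]]

frac_heads = sum(1 for f in after_run if f == "H") / len(after_run)
print(frac_heads)  # close to 0.5 -- the next flip is still 50/50
```

Because each flip is independent of the ones before it, the fraction of heads after a losing streak stays near 1/2.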
modules at math.duke.edu | Copyright CCP and the author(s), 2003