Type I and Type II errors when you are testing a hypothesis
When you are testing a hypothesis, there is the possibility that your conclusion is incorrect. You could reject the null hypothesis when it's actually true (Type I error), or you could fail to reject the null hypothesis when the alternative is true (Type II error). You can think of this in terms of a trial: if the jury puts an innocent person in jail, that is a Type I error; if the jury lets a guilty person go free, that is a Type II error.
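To make the trial analogy concrete, here is a minimal Python sketch (the function name `classify` and its arguments are illustrative, not part of the original problem) that maps each combination of the true state and the verdict to the corresponding outcome:

```python
# Hypothetical sketch: classify each (truth, decision) pair as a
# correct outcome, a Type I error, or a Type II error.
def classify(null_is_true: bool, reject_null: bool) -> str:
    if null_is_true and reject_null:
        return "Type I error (innocent person convicted)"
    if not null_is_true and not reject_null:
        return "Type II error (guilty person goes free)"
    return "Correct decision"

# Enumerate all four truth/decision combinations.
for null_is_true in (True, False):
    for reject_null in (True, False):
        print(f"H0 true={null_is_true}, reject={reject_null} -> "
              f"{classify(null_is_true, reject_null)}")
```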
Questions:
1. Explain Type I and Type II errors.
2. Suppose someone tells you something and you need to decide whether to believe him or her. Are you more likely to be worried about committing a Type I error or a Type II error? Why?
Solution
Type I error and Type II error.
Type I error, also known as a “false positive”: the error of rejecting a null hypothesis when it is actually true.
In other words, this is the error of accepting an alternative hypothesis (the real hypothesis of interest) when the results can be attributed to chance.
Plainly speaking, it occurs when we observe a difference when in truth there is none (or, more specifically, no statistically significant difference).
Type II error, also known as a "false negative": the error of not rejecting a null hypothesis when the alternative hypothesis is the true state of nature.
In other words, this is the error of failing to accept an alternative hypothesis when you don't have adequate power. Plainly speaking, it occurs when we fail to observe a difference when in truth there is one.
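As a rough illustration of both definitions, the following simulation sketch (assuming NumPy and SciPy are available; the sample size, effect size, and number of trials are arbitrary choices, not from the text) runs many two-sample t-tests and counts Type I errors when the null is true and Type II errors when it is false:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 2000

# Case 1: the null hypothesis is true (both groups share the same mean).
# Any rejection here is a Type I error (false positive).
type_i = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type_i += 1

# Case 2: the null hypothesis is false (means differ by 0.5 SD).
# Any failure to reject here is a Type II error (false negative).
type_ii = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.5, 1.0, n)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        type_ii += 1

print(f"Type I rate  ~ {type_i / trials:.3f} (should be near {alpha})")
print(f"Type II rate ~ {type_ii / trials:.3f} (depends on the test's power)")
```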
By one common convention, if the probability value is below 0.05, then the null hypothesis is rejected.
Another convention, although slightly less common, is to reject the null hypothesis if the probability value is below 0.01.
The threshold for rejecting the null hypothesis is called the α (alpha) level or simply α. It is also called the significance level.
As discussed in the section on significance testing, it is better to interpret the probability value as an indication of the weight of evidence against the null hypothesis than as part of a decision rule for making a reject or do-not-reject decision.
Therefore, keep in mind that rejecting the null hypothesis is not an all-or-nothing decision.
The Type I error rate is affected by the α level: the lower the α level, the lower the Type I error rate.
It might seem that α is the probability of a Type I error. However, this is not correct.
Instead, α is the probability of a Type I error given that the null hypothesis is true. If the null hypothesis is false, then it is impossible to make a Type I error.
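One way to see this conditional interpretation is to simulate p-values under a true null hypothesis and then apply several α thresholds to the same p-values: in the long run, the rejection rate tracks α. Below is a sketch under the same illustrative t-test setup as above (parameters are again arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, trials = 30, 5000

# p-values from two-sample t-tests when the null hypothesis is true.
pvals = np.array([
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
    for _ in range(trials)
])

# Given a true null, the long-run rejection rate should sit near alpha.
for alpha in (0.10, 0.05, 0.01):
    print(f"alpha = {alpha:.2f}: rejection rate = {np.mean(pvals < alpha):.3f}")
```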
Unlike a Type I error, a Type II error is not really an error. When a statistical test is not significant, it means that the data do not provide strong evidence that the null hypothesis is false.
Lack of significance does not support the conclusion that the null hypothesis is true.
Therefore, a researcher should not make the mistake of incorrectly concluding that the null hypothesis is true when a statistical test was not significant.
Instead, the researcher should consider the test inconclusive.
Contrast this with a Type I error in which the researcher erroneously concludes that the null hypothesis is false when, in fact, it is true.
Type I errors can be controlled.
Although the errors cannot be completely eliminated, we can minimize one type of error.
The value of alpha, which is related to the level of significance we selected, has a direct bearing on Type I errors.
Typically, when we try to decrease the probability of one type of error, the probability of the other type increases.
We could decrease the value of alpha from 0.05 to 0.01, corresponding to a 99% level of confidence.
However, if everything else remains the same, then the probability of a Type II error will nearly always increase.
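This trade-off can be checked by simulation. The sketch below (same illustrative t-test setup as above, with an assumed true difference of 0.5 standard deviations) applies both α = 0.05 and α = 0.01 to identical data and compares the resulting Type II error rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, trials = 30, 5000

# p-values when the null is actually false (true difference of 0.5 SD).
pvals = np.array([
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue
    for _ in range(trials)
])

# A stricter alpha means more failures to reject this false null.
for alpha in (0.05, 0.01):
    type_ii_rate = np.mean(pvals >= alpha)
    print(f"alpha = {alpha:.2f}: Type II rate ~ {type_ii_rate:.3f}")
```

With the stricter threshold, fewer false positives occur, but more real differences go undetected, which is exactly the trade-off described above.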

