Type-I Error and Type-II Error

The null hypothesis H_o is accepted or rejected on the basis of the value of the test statistic, which is a function of the sample. The test statistic may land in the acceptance region or the rejection region. If the calculated value of the test statistic, say Z, is small (insignificant), i.e., Z is close to zero, or in other words Z lies between -Z_{\alpha/2} and Z_{\alpha/2} for a two-sided alternative \left( H_1: \theta \ne \theta_o \right), the hypothesis is accepted. If the calculated value of the test statistic Z is large (significant), H_o is rejected and H_1 is accepted. Under this rejection or acceptance plan, there is the possibility of committing one of two errors, which are called Type-I and Type-II errors.
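As a minimal sketch of this decision rule, the code below carries out a two-sided Z test for a population mean with known standard deviation. The sample values, the hypothesized mean \theta_o = 50, \sigma = 10 and \alpha = 0.05 are assumptions chosen only for illustration, not values taken from the text.

```python
# Minimal sketch of the two-sided Z-test decision rule described above.
# The data, mu_0, sigma and alpha below are illustrative assumptions.
from math import sqrt
from statistics import NormalDist

sample = [52.1, 48.3, 55.0, 49.7, 51.2, 53.8, 47.9, 50.6]   # assumed data
mu_0   = 50.0    # hypothesised value theta_o under H_o
sigma  = 10.0    # assumed known population standard deviation
alpha  = 0.05    # chosen size of the Type-I error

n      = len(sample)
x_bar  = sum(sample) / n
z      = (x_bar - mu_0) / (sigma / sqrt(n))       # test statistic
z_crit = NormalDist().inv_cdf(1 - alpha / 2)      # Z_{alpha/2}

if -z_crit < z < z_crit:
    print(f"Z = {z:.3f} lies in the acceptance region: do not reject H_o")
else:
    print(f"Z = {z:.3f} lies in the rejection region: reject H_o")
```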

Type-I Error:

The null hypothesis H_o may be true, but it may be rejected. This is an error and is called a Type-I error. When H_o is true, the test statistic, say Z, can take any value between -\infty and +\infty, and the rejection region is part of that range, so Z may fall in it even when H_o holds. For a two-sided H_1 (like \theta \ne \theta_o), the hypothesis is rejected when Z is less than -Z_{\alpha/2} or greater than Z_{\alpha/2}. When H_o is true, Z falls in the rejection region with probability equal to the size of that region, \alpha. Thus it is possible that H_o is rejected while H_o is true; this is the Type-I error. The probability is \left( 1 - \alpha \right) that H_o is accepted when H_o is true, which is a correct decision. We can say that a Type-I error has been committed when:

  1. an intelligent student is not promoted to the next class.
  2. a good player is not allowed to play the match.
  3. an innocent person is punished.
  4. a driver is punished for no fault of his own.
  5. a good worker is not paid his salary on time.

These are examples from practical life, quoted to make the idea clear to students. In statistical terms, how often this error occurs can be checked directly, as the simulation sketch below illustrates.
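The sketch repeatedly samples from a population for which H_o is actually true and counts how often a two-sided Z test rejects it; the population mean 50, \sigma = 10, n = 30 and \alpha = 0.05 are assumed purely for illustration.

```python
# Simulation sketch: when H_o is true, a two-sided Z test of size alpha
# rejects in roughly a fraction alpha of repeated samples (Type-I errors).
# mu_0 = 50, sigma = 10, n = 30 and alpha = 0.05 are assumed for illustration.
import random
from math import sqrt
from statistics import NormalDist

random.seed(1)
mu_0, sigma, n, alpha = 50.0, 10.0, 30, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

rejections = 0
trials = 20_000
for _ in range(trials):
    sample = [random.gauss(mu_0, sigma) for _ in range(n)]   # H_o is true here
    z = (sum(sample) / n - mu_0) / (sigma / sqrt(n))
    if abs(z) > z_crit:
        rejections += 1          # H_o rejected although it is true

print(f"Observed Type-I error rate: {rejections / trials:.3f} (nominal alpha = {alpha})")
```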

Alpha (\alpha):

The probability of making a Type-I error is denoted by \alpha (alpha). When a null hypothesis is rejected, we may be wrong or we may be right in rejecting it; we do not know whether H_o is true or false. Whatever our decision is, it is supported only by a probability. A true hypothesis has some probability of being rejected, and this probability, denoted by \alpha, is also called the size of the Type-I error.
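In symbols, for the two-sided Z test described above, this size is

\alpha = P\left( \text{Type-I error} \right) = P\left( \text{reject } H_o \mid H_o \text{ true} \right) = P\left( \left| Z \right| > Z_{\alpha/2} \mid H_o \text{ true} \right).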

Type-II Error:

The null hypothesis H_o may be false, but it may be accepted. This is an error and is called a Type-II error. The value of the test statistic may fall in the acceptance region even though H_o is in fact false. Suppose the hypothesis being tested is H_o: \theta = \theta_o, H_o is false, and the true value of \theta is \theta_1 (or \theta_{\text{true}}). If the difference between \theta_o and \theta_1 is very large, the chance that the (wrong) value \theta_o will be accepted is very small: the true sampling distribution of the statistic lies far away from the sampling distribution under H_o, so the test statistic will hardly ever fall in the acceptance region of H_o. But when the true distribution of the test statistic overlaps the acceptance region of H_o, H_o may be accepted even though it is false. If the difference between \theta_o and \theta_1 is small, there is a high chance of accepting H_o, and this action is an error of Type-II.
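This dependence on the distance between \theta_o and \theta_1 can be written explicitly. For the two-sided Z test of a mean with known standard deviation \sigma and sample size n, if the true mean is \theta_1 and \delta = \left( \theta_1 - \theta_o \right)/\left( \sigma /\sqrt n \right) denotes the standardized distance, then

\beta = P\left( - Z_{\alpha/2} < Z < Z_{\alpha/2} \mid \theta = \theta_1 \right) = \Phi\left( Z_{\alpha/2} - \delta \right) - \Phi\left( - Z_{\alpha/2} - \delta \right),

where \Phi is the standard normal distribution function. As \delta grows, \beta shrinks toward zero; as \delta approaches zero, \beta approaches 1 - \alpha.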

Beta (\beta):

The probability of making a Type-II error is denoted by \beta. A Type-II error is committed when H_o is accepted while H_1 is true. The value of \beta can be calculated only when we happen to know the true value of the population parameter being tested.
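As a concrete sketch of such a calculation, the code below evaluates \beta for the two-sided Z test of a mean using the formula given above; the values \theta_o = 50, \theta_1 = 53, \sigma = 10, n = 30 and \alpha = 0.05 are illustrative assumptions, not values taken from the text.

```python
# Sketch: probability of a Type-II error (beta) for a two-sided Z test,
# computable only because a specific true mean theta_1 is assumed.
# theta_o = 50, theta_1 = 53, sigma = 10, n = 30, alpha = 0.05 are illustrative.
from math import sqrt
from statistics import NormalDist

theta_o, theta_1 = 50.0, 53.0
sigma, n, alpha  = 10.0, 30, 0.05

std_norm = NormalDist()
z_crit   = std_norm.inv_cdf(1 - alpha / 2)
delta    = (theta_1 - theta_o) / (sigma / sqrt(n))    # standardised distance

# beta = P(-z_crit < Z < z_crit) when Z ~ N(delta, 1), i.e. when H_o is false
beta  = std_norm.cdf(z_crit - delta) - std_norm.cdf(-z_crit - delta)
power = 1 - beta

print(f"beta  (P(Type-II error)) = {beta:.3f}")
print(f"power (1 - beta)         = {power:.3f}")
```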