
A Testing Procedure

After casting our questions into hypotheses, we need a formal way to test H0 versus HA. We must decide either to accept H0 and, hence, reject HA, or to reject H0 and, hence, accept HA. We must decide one way or the other; there is no fence sitting here.

Alas, there are two types of errors we can make:

1. Type I Error: We reject H0 when H0 is true.

2. Type II Error: We accept H0 when HA is true.

For example, recall the first example above: at a pharmaceutical company, a new drug has been developed which should reduce cholesterol much more than the company's current drug on the market. Is this true? The hypotheses are: H0: the new drug has the same effect on cholesterol as the current drug; and HA: the new drug reduces cholesterol more than the current drug. In the words of this problem, the errors are:
1. Type I Error: We declare the new drug is more effective than the current drug on the market, when really it is not more effective.

2. Type II Error: We declare the new drug is not more effective, when it really is more effective.
We need information to decide which hypothesis is true. So we take a random sample from each population and base our decision on these samples. We will use the samples to form a decision rule for choosing between H0 and HA. Because we must decide, we may make either a Type I or a Type II error. Usually a Type I error is regarded as the more serious error. For instance, in the two population problem suppose the first population represents the standard while the second population represents the new. In rejecting H0 we are claiming the new is better than the standard. Hence, a Type I error here means we are claiming the new is better when it really isn't. In real life, this often means shelling out dollars (buying the new, retooling the assembly line, installing a new expensive teaching method) for something that is not better. Of course, a Type II error is also serious, because you have missed something that really is better.
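To make the two error probabilities concrete, here is a minimal simulation sketch in Python. It is not from the text and is purely illustrative: it assumes two normal populations, uses the difference in sample means as a crude decision rule with an arbitrary cutoff, and estimates how often each error occurs. The sample size, cutoff, and size of the new drug's true effect are all made-up assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, cutoff, trials = 30, 0.5, 10_000   # hypothetical sample size and cutoff

# Type I error: H0 is true (both drugs equally effective), but we reject H0.
type1 = 0
for _ in range(trials):
    standard = rng.normal(0.0, 1.0, n)   # cholesterol reduction, current drug
    new = rng.normal(0.0, 1.0, n)        # same distribution: H0 is true
    if new.mean() - standard.mean() > cutoff:
        type1 += 1

# Type II error: HA is true (new drug really is better), but we accept H0.
type2 = 0
for _ in range(trials):
    standard = rng.normal(0.0, 1.0, n)
    new = rng.normal(0.7, 1.0, n)        # new drug reduces cholesterol more: HA true
    if new.mean() - standard.mean() <= cutoff:
        type2 += 1

print("estimated P(Type I error) :", type1 / trials)
print("estimated P(Type II error):", type2 / trials)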

Getting back to our decision rule: we have two samples and we must make a decision in the face of uncertainty. So we choose a test statistic, say T, and a decision rule, say, "We reject H0 and accept HA if T is too large." How large is too large? We pick a probability of Type I error, say $\alpha$, usually .05 or smaller, and then determine how large is too large. It's easy. How about an example which leads into our first test statistic?
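As a sketch of how $\alpha$ pins down "too large" (this is a generic illustration, not the test developed in the next section): simulate the distribution of T when H0 is true, and take the cutoff that only a fraction $\alpha$ of the null values exceed. Here T is assumed to be a difference in sample means and the populations are assumed normal, purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
alpha, n, sims = 0.05, 30, 10_000   # hypothetical alpha and sample size

# Null distribution of T: both samples drawn from the same population,
# so any difference in means is due to chance alone.
null_T = np.array([
    rng.normal(size=n).mean() - rng.normal(size=n).mean()
    for _ in range(sims)
])

# "Too large" = the value exceeded by only a fraction alpha of the null T's.
critical_value = np.quantile(null_T, 1 - alpha)
print("reject H0 whenever T >", round(critical_value, 3))

# Applying the rule to (hypothetical) observed samples:
sample1 = rng.normal(0.0, 1.0, n)
sample2 = rng.normal(0.4, 1.0, n)
T = sample2.mean() - sample1.mean()
print("observed T =", round(T, 3),
      "-> reject H0" if T > critical_value else "-> accept H0")

By construction, a true H0 gets rejected only about 5% of the time under this rule, which is exactly what choosing $\alpha = .05$ means.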

