
After casting our questions as hypotheses, we need a formal way to test
*H*_{0} versus *H*_{A}. We must decide either to accept *H*_{0}
and, hence, reject *H*_{A}, or to reject *H*_{0} and, hence, accept
*H*_{A}. We must decide one way or the other; there is no fence-sitting
here.

Alas, there are two types of errors we can make:

1. **Type I Error**: We reject *H*_{0} when *H*_{0} is true.

2. **Type II Error**: We accept *H*_{0} when *H*_{A} is true.

For example, recall the first example above: *At a pharmaceutical company,
a new drug has been developed which should reduce cholesterol much more
than their current drug on the market. Is this true?* The hypotheses are
*H*_{0}: the new drug has the same effect on cholesterol as the current
drug, versus *H*_{A}: the new drug reduces cholesterol more than the
current drug. In words, the errors for this problem are:

1. **Type I Error**: We declare the new drug more effective than the
current drug on the market, when really it is not more effective.

2. **Type II Error**: We declare the new drug not more effective, when
it really is more effective.

We need information to decide which hypothesis is true, so we take a random
sample from each population and base our decision on these samples. We
will use the samples to form a decision rule that selects between
*H*_{0} and *H*_{A}. Because we must decide, we may make either a
Type I or a Type II error. Usually the Type I error is regarded as the
more serious error. For instance, in the two-population problem suppose
the first population represents the *standard* while the second population
represents the *new*. In rejecting *H*_{0} we are claiming the *new*
is better than the *standard*. Hence, a Type I error here means we are
claiming the *new* is better when it really isn't. In real life, this
often means shelling out dollars (buying the new drug, retooling the
assembly line, installing an expensive new teaching method) for something
that is not better. Of course, a Type II error is serious too, because
*you have missed something which is better*.
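The meaning of a Type I error can be made concrete with a small simulation (a sketch, not part of the original text): generate both samples from the *same* population, so that *H*_{0} is true, and count how often a one-sided two-sample test rejects anyway. The sample sizes, the 1.645 cutoff (the one-sided normal critical value for a 0.05 error probability), and the use of a z-type statistic are all assumptions made for illustration.

```python
import math
import random
import statistics

def z_statistic(x, y):
    """Two-sample z-type statistic: large T suggests the second mean is bigger."""
    n, m = len(x), len(y)
    se = math.sqrt(statistics.variance(x) / n + statistics.variance(y) / m)
    return (statistics.mean(y) - statistics.mean(x)) / se

rng = random.Random(42)
cutoff = 1.645            # one-sided normal critical value for a 0.05 Type I error
reps, rejections = 2000, 0
for _ in range(reps):
    # H0 is TRUE here: both samples come from the same N(0, 1) population,
    # so every rejection below is a Type I error.
    x = [rng.gauss(0, 1) for _ in range(30)]
    y = [rng.gauss(0, 1) for _ in range(30)]
    if z_statistic(x, y) > cutoff:
        rejections += 1

type1_rate = rejections / reps
print(type1_rate)         # typically near 0.05
```

The estimated rejection proportion hovers around the chosen error probability of 0.05, which is exactly what "probability of a Type I error" means in repeated sampling.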

Getting back to our **decision rule**: we have two samples and we
must make a decision in the face of uncertainty. So we choose a **test
statistic**, say *T*, and a **decision rule**, say, "We reject
*H*_{0} and accept *H*_{A} if *T* is too large." How large is
too large? We pick a probability of Type I error, say α, usually
0.05 or smaller, and then determine how large is too large.
**IT'S EASY**.
Now, how about an example which leads into our first test statistic?
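The rule "reject *H*_{0} if *T* is too large" can be sketched with a permutation-style calibration, which is one common way to turn a chosen α into a cutoff (the two-sample setup, the choice of a difference-in-means statistic, and the cholesterol-reduction numbers below are all made up for illustration):

```python
import random
import statistics

def permutation_test(sample1, sample2, alpha=0.05, n_perm=5000, seed=1):
    """One-sided test: reject H0 when T = mean(sample2) - mean(sample1)
    is too large.  'Too large' means larger than the (1 - alpha) quantile
    of T computed over random shuffles of the pooled data (where H0 holds)."""
    rng = random.Random(seed)
    t_obs = statistics.mean(sample2) - statistics.mean(sample1)
    pooled = list(sample1) + list(sample2)
    n1 = len(sample1)
    null_ts = []
    for _ in range(n_perm):
        rng.shuffle(pooled)   # under H0 the group labels are arbitrary
        null_ts.append(statistics.mean(pooled[n1:]) - statistics.mean(pooled[:n1]))
    null_ts.sort()
    critical = null_ts[int((1 - alpha) * n_perm)]
    return t_obs, critical, t_obs > critical

# Hypothetical cholesterol reductions for the drug example (invented numbers).
standard = [8, 10, 7, 9, 11, 8, 10, 9]
new = [14, 12, 15, 13, 16, 12, 14, 15]
t_obs, critical, reject = permutation_test(standard, new)
```

Here α fixes the cutoff: with α = 0.05, "too large" means larger than 95% of the values of *T* seen when the group labels carry no information, so the rule commits a Type I error only about 5% of the time.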


*2001-01-01*