When a sharp null hypothesis such as H0: mean = 50 is posed for the mean of
a population, it is almost certainly false. The population mean may be
near 50, even very close, but it is rarely exactly 50.
So if we test H0 with a large enough sample, the power of the test will
be high enough that H0 will be rejected. We know this conclusion will
occur even before we collect the data.
For this reason statisticians should know the power of their tests.
A sample size that is "too small" is a problem, but a sample size that is
"too large" can be a problem as well. From a practical point of view we
should think carefully about "practical significance" vs "statistical
significance".
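To see why a large sample guarantees rejection, we can compute the approximate power of a two-sided test of H0: mean = 0 when the true mean is .2 and the standard deviation is 1, as in the task below. This is a sketch using the normal (z-test) approximation with alpha = .05; the `power` function and the specific n values are illustrative, not part of the Minitab task.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(delta, n, sigma=1.0, z_crit=1.96):
    """Approximate power of a two-sided z-test of H0: mean = 0
    when the true mean is delta (normal approximation, alpha = .05)."""
    shift = delta * math.sqrt(n) / sigma
    return norm_cdf(shift - z_crit) + norm_cdf(-shift - z_crit)

# Power climbs toward 1 as n grows, so rejection becomes certain.
for n in (10, 50, 100, 500, 1000):
    print(n, round(power(0.2, n), 3))
```

With n = 10 the power is roughly .10, while by n = 1000 it is essentially 1: a true mean of .2 instead of 0 will be detected with near certainty.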

The following task in MTB illustrates this phenomenon.
Set up a large column of data for use in testing the hypothesis
H0: mean = 0. The data have mean .2, so H0 is false but very close to
being true. At first use only a portion of the data, then use
increasingly more of it and watch the conclusion change via the p-value.

###################################
random 1000 c1 ;
normal .2 1 .
copy c1 c2 ;
use 1:10 .             # the first 10 obs are in c2

onet c2 ;
test 0 .               # note the p-value of the test

###############
Use copy-and-paste to run these commands,
then repeat the last 4 commands, changing the sample size to
20, then 30, and so on, using larger and larger sample sizes.
Make a table recording the p-value as N grows.
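For readers without Minitab, the same experiment can be run in Python. This is a sketch, assuming numpy and scipy are available; the seed and the list of sample sizes are arbitrary choices, and `ttest_1samp` plays the role of ONET with TEST 0.

```python
import numpy as np
from scipy import stats

# 1000 observations from N(.2, 1), as in `random 1000 c1; normal .2 1.`
rng = np.random.default_rng(1)
data = rng.normal(loc=0.2, scale=1.0, size=1000)

# One-sample t-test of H0: mean = 0 on the first N observations,
# for larger and larger N -- the table the task asks you to build.
print(f"{'N':>5} {'p-value':>10}")
for n in (10, 20, 30, 50, 100, 200, 500, 1000):
    result = stats.ttest_1samp(data[:n], popmean=0.0)
    print(f"{n:>5} {result.pvalue:>10.4f}")
```

As N grows, the p-value drifts toward 0 and H0 is eventually rejected, even though the true mean .2 may be of no practical importance.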