
Relationships Between Variables, Part 3: Measures of Relationships

In this section, we discuss measures of relationships between two variables X and Y. It is easiest to start with no relationship. What do we mean by no relationship? Suppose we had a lot of data on (X,Y) and obtained a scatterplot of Y versus X. If the plot were a random scatter, we would conclude that the variables X and Y are not related. What if they are related? Look at the six plots in Figure 1.10. In the first, we would probably conclude that X and Y are not related. Plot 2 we would characterize as a probable linear relationship, certainly exhibiting random error. Plot 3 is similar to Plot 2, although the pattern is not quite as tight. Plot 4 shows some negative drift. Plots 5 and 6 show the strongest relationships (tightest patterns) among the plots: Plot 5 shows a very strong circular relationship, while Plot 6 shows a very strong quadratic pattern. It seems that a measure of a relationship should depend on the type of relationship. In this section we will, for the most part, be concerned only with linear relationships, and we will consider measures of such relationships. It should not be surprising that these measures will indicate no (linear) relationship for the two strongest relationships in the plots.


  
Figure 1.10: Scatter plots

Consider Plot 2 again. We want to measure the linear relationship exhibited in this plot. Two simple lines will help a lot. On the x-axis, locate the sample mean of the X's ($\bar{X} = 0.6176199$) and draw a vertical line through this point. On the y-axis, locate the sample mean of the Y's ($\bar{Y} = 0.6032577$) and draw a horizontal line through this point. Figure 1.11 shows these lines.


  
Figure 1.11: Plot 2 with sample means

The lines intersect at $(\bar{X},\bar{Y})$ (locate it). This is our new center. Next, label the quadrants I, II, III, and IV, beginning at the upper-right quadrant and continuing counterclockwise. The coordinates of (X,Y) relative to the new center are $(X - \bar{X}, Y-\bar{Y})$. The signs of the coordinates are (+,+), (-,+), (-,-), and (+,-) as we go around quadrants I, II, III, and IV, respectively. From here it is easy to come up with many measures of linear relationship. A simple one is to count the number of points with coordinates of the same sign (those in quadrants I and III) and subtract the number of points with coordinates of different signs (those in quadrants II and IV). High values of this measure indicate a positive linear relationship, while low values indicate a negative linear relationship.
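
To make the sign count concrete, here is a minimal Python sketch (the function name quadrant_count is ours, for illustration only):

    def quadrant_count(xs, ys):
        # Center the data at (xbar, ybar); count like-sign points
        # (quadrants I and III) minus unlike-sign points (II and IV).
        n = len(xs)
        xbar = sum(xs) / n
        ybar = sum(ys) / n
        score = 0
        for x, y in zip(xs, ys):
            prod = (x - xbar) * (y - ybar)
            if prod > 0:
                score += 1    # quadrant I or III: like signs
            elif prod < 0:
                score -= 1    # quadrant II or IV: unlike signs
        return score

Points falling exactly on one of the two mean lines contribute nothing to the count, which seems reasonable for a sign-based measure.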

Instead of merely counting like and unlike signs, we consider a measure that uses the products of these new coordinates. Thus we have n products, one for each point $(X_i,Y_i)$ in the plot. Consider as a measure their average:

\begin{displaymath}s_{XY} = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})
\end{displaymath}

which is called the sample covariance. Positive values of this measure indicate a positive linear relationship, while negative values indicate a negative linear relationship. Is this measure robust? No; you are catching on.
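
As a minimal sketch of the definition (the function name sample_cov is ours), note the divisor n, not n-1:

    def sample_cov(xs, ys):
        # Average of the products of the centered coordinates.
        n = len(xs)
        xbar = sum(xs) / n
        ybar = sum(ys) / n
        return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / n

Be careful when comparing against library routines; many (numpy.cov, for instance) divide by n-1 by default.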

For a given data set, we can always make this measure larger (or smaller) by changing the units. Suppose we have a positive linear relationship and X is measured in feet. If we change the X's to inches, then $s_{XY}$ increases by a factor of 12. If we change the X's to millimeters, then $s_{XY}$ increases by a factor of 304.8. Thus we need to standardize our measure. In this chapter (we revisit this problem in Chapter 11), we insist on a unitless measure whose absolute value cannot exceed 1. This is called the sample correlation coefficient, and it is simply $s_{XY}$ divided by the product of the standard deviations of the X's and the Y's (except that we divide by n and not n-1); i.e.,

\begin{displaymath}r=\frac{s_{XY}}{\sqrt{\frac{\sum_{i=1}^{n}(X_i-\bar{X})^2 \, \sum_{i=1}^{n}(Y_i-\bar{Y})^2}{n^2}}}
\end{displaymath}

This is our measure of linear relationship. As we said, for all data sets, $-1 \leq r \leq 1$. The extreme values are interesting:
r = 1 means a perfect positive linear relationship (all points fall exactly on a line with positive slope);
r = -1 means a perfect negative linear relationship (all points fall exactly on a line with negative slope).
Values of r close to zero indicate little or no linear relationship.
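
Putting the pieces together, a minimal Python sketch of r (the function name correlation is ours; the divisor n appears in both the numerator and the denominator, so it cancels):

    from math import sqrt

    def correlation(xs, ys):
        # r = s_XY / (s_X * s_Y), with divisor n used throughout.
        n = len(xs)
        xbar = sum(xs) / n
        ybar = sum(ys) / n
        sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / n
        sx = sqrt(sum((x - xbar) ** 2 for x in xs) / n)
        sy = sqrt(sum((y - ybar) ** 2 for y in ys) / n)
        return sxy / (sx * sy)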

The values of r for each of the plots in Figure 1.10 are indicated in Figure 1.12.

  
Figure 1.12: Scatter plots with values of r

As we thought, the strongest relationships score 0 with our measure because they are both nonlinear. The best linear pattern is Plot 2, although Plot 3 is close. The negative drift in Plot 4 registers r = -.43, and the first plot shows little linearity, as we initially thought.

We can do a bit more with the sample correlation coefficient. It is associated with the LS fit. It can be shown that $r = \big(\frac{s_X}{s_Y}\big)\hat{b}$, where $\hat{b}$ is the LS estimate of the slope. So r contains information on the fit.
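
One way to see this: the LS slope satisfies $\hat{b} = s_{XY}/s_X^2$ (the divisor n cancels in the ratio), so substituting into the definition of r gives

\begin{displaymath}r = \frac{s_{XY}}{s_X s_Y} = \frac{\hat{b}\,s_X^2}{s_X s_Y} = \Big(\frac{s_X}{s_Y}\Big)\hat{b}.
\end{displaymath}

Exercise 2 at the end of this section can be worked directly from this identity.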

We can be more precise. Consider the variation (or noise) in the Y data. A measure of this variation is the sample variance $s_Y^2$ of the Y's. When we fit the linear model Y = a + bX + e, we should be able to account for some of this variability (X should be of help in predicting Y). In fact, $r^2 \cdot 100\%$ is the percentage of variation accounted for in the LS fit of Y versus X. We call this the coefficient of determination, and we often use a capital $R^2$ to denote it. Consider the values of $R^2$ for Plots 1-6. $R^2 = .007$ for Plot 1; hence we have accounted for .7% of the variation in Y. $R^2 = .66$ for Plot 2; hence we have accounted for 66% of the variation in Y. $R^2 = .59$ for Plot 3; hence we have accounted for 59% of the variation in Y. $R^2 = .18$ for Plot 4; hence we have accounted for 18% of the variation in Y. Of course, for the last two plots, $R^2 = 0$. The value of $R^2$ can be obtained using the regression module.
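
The phrase "variation accounted for" can be made precise. If $\hat{Y}$ denotes the LS fitted values, it can be shown that for the LS fit of Y versus X

\begin{displaymath}R^2 = 1 - \frac{\sum_{i=1}^{n}(Y_i-\hat{Y}_i)^2}{\sum_{i=1}^{n}(Y_i-\bar{Y})^2} = r^2 ;
\end{displaymath}

that is, $R^2$ is 1 minus the fraction of the total variation that remains in the residuals.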

The measures r and $R^2$ are not robust. We will consider alternative measures of r later, but for now we offer an alternative to $R^2$, labeled $R_W^2$. This is the measure that corresponds to the robust Wilcoxon fit. It is not as sensitive as $R^2$ to outliers. We show this for the baseball height and weight data. Recall that we changed the original data by inserting an outlier. The plots in Figure 1.13 show the original and changed data along with their $R^2$'s and $R_W^2$'s.


  
Figure 1.13: Coefficients of determination for LS and Wilcoxon fits

For the LS fit, notice that due to one outlier, the percentage of variation accounted for dropped from 50% to 19%. The measure corresponding to the robust Wilcoxon fit only changed from .44 to .39.
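
Although we cannot reproduce the baseball data here, the sensitivity of r (and hence $R^2$) to a single outlier is easy to demonstrate. A minimal sketch, using the correlation function defined earlier and made-up data of our own:

    # Nearly linear data: r is about .999, so R^2 is about .998.
    xs = [1, 2, 3, 4, 5, 6, 7, 8]
    ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1]
    r1 = correlation(xs, ys)

    # Replace one Y with an outlier: r drops to roughly .45,
    # so R^2 drops to roughly .20.
    ys[7] = 2.0
    r2 = correlation(xs, ys)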


Exercise 2.10.1  
1.
Scatterplot the following data and guess the correlation coefficient. Then compute it (ans: .161; a worked check appears after this exercise set).
      x  y
      1  2
      2  4
      3  4
      6  3
2.
Reconsider exercise #1 of Exercise 1.6. The data are given below. Scatterplot the data and guess the correlation coefficient. Recall that the LS estimate of the slope was 2.405, and suppose the sample standard deviations of x and y are 3.58 and 10.42, respectively. Compute the correlation coefficient. (Ans: .825)
        x    y
        16   32
        15   26
        20   40
        13   27
        15   30
        17   38
        16   34
        21   43
        22   64
        23   45
        24   46
        18   39
3.
In the last problem, what percent of variation of y is accounted for by x?
4.
Which correlation seems appropriate for the following plot: -.678, .956, .892, .483, .045?
     
      C3      -                              *
              -              *                     *
              -                                     *
           400+
              -                          *           *
              -                                      **
              -                               2    *                *
              -                 * *            *
           300+                          *           *
              -             *      *    *        2         *
              -             *
              -                         **
              -             *
           200+                  *
              -       *
              -             *
              -
                ------+---------+---------+---------+---------+---------+C1      
                    -40       -20         0        20        40        60
5.
Same as the last problem, for the following plot: .999, 0.0, .002, -.999, .500, .764
         -1600+       *
              -         **
      C3      -              **
              -               ** *
              -                   **
         -2400+                      22*
              -                          2*
              -                             *2*
              -                               ** *
              -
         -3200+                                      *
              -                                          *
              -                                              *
              -                                                *
              -                                                    *
         -4000+
              -
                ----+---------+---------+---------+---------+---------+--C2      
                   30        40        50        60        70        80
6.
Same as the last problem, for the following plot: 0.0, .999, .500, -.18, .008, -.500
           -1600+             *
                -       *          *
        C3      -                         *                  *
                -             *            *       *
                -                                  *   *
           -2400+             *      *    **                          *
                -             *                  *   *
                -                 *             2      *
                -                   *                  **
                -
           -3200+                          *
                -                                     *
                -                                    *
                -                              *
                -              *
           -4000+
                  ------+---------+---------+---------+---------+---------+C1      
                      -40       -20         0        20        40        60
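
As promised in part 1 above, here is a worked check of that answer. For the data (1,2), (2,4), (3,4), (6,3) we have $\bar{x} = 3$ and $\bar{y} = 3.25$, so

\begin{displaymath}\sum(x-\bar{x})(y-\bar{y}) = 2.5 - 0.75 + 0 - 0.75 = 1, \qquad
\sum(x-\bar{x})^2 = 14, \qquad \sum(y-\bar{y})^2 = 2.75,
\end{displaymath}

and hence

\begin{displaymath}r = \frac{1}{\sqrt{(14)(2.75)}} = \frac{1}{\sqrt{38.5}} \approx .161.
\end{displaymath}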

