Measures of dispersion

It is sometimes very important to know how much the random variable deviates from the expected value on average in the population. One measure that offers information about that is the variance and the corresponding standard deviation. The variance of X is defined as

Var(X) = E[(X - E[X])²]

The positive square root of the variance is the standard deviation and represents the mean deviation from the expected value in the population. The most important properties of the variance are:

The variance of a constant is zero, since a constant has no variability.

If a and b are constants, then Var(aX + b) = Var(aX) = a²Var(X).

Alternatively, we have that Var(X) = E[X²] - (E[X])², where

E[X²] = Σ x²f(x)

with the sum taken over all values x that X can take.
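These properties can be checked numerically. Below is a minimal Python sketch using an assumed three-point distribution (the values and probabilities are not from the text):

```python
# Check Var(aX + b) = a^2 * Var(X) for a small discrete distribution.
# The values and probabilities below are assumed for illustration only.
xs = [1, 2, 3]           # possible values of X
ps = [0.2, 0.5, 0.3]     # f(x) for each value

def expectation(vals, probs):
    # E[g(X)], where vals holds g(x) for each outcome
    return sum(v * p for v, p in zip(vals, probs))

def variance(vals, probs):
    mu = expectation(vals, probs)
    return expectation([(v - mu) ** 2 for v in vals], probs)

a, b = 2.0, 5.0
var_x   = variance(xs, ps)                       # Var(X)
var_axb = variance([a * x + b for x in xs], ps)  # Var(aX + b)

print(var_x, var_axb)    # the second is a^2 times the first; b drops out
```

Shifting by b does not appear in the result at all, which is the numerical counterpart of "the variance of a constant is zero".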

Example 1.15

Calculate the variance of X using the following probability distribution:











Table 1.6 Probability distribution for X

In order to find the variance of X it is easiest to use the alternative formula Var(X) = E[X²] - (E[X])² given above. We start by calculating E[X²] and E[X].
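Since the entries of Table 1.6 are not reproduced here, the sketch below uses an assumed distribution; the steps are the ones described in the text: compute E[X²] and E[X], then apply the shortcut formula.

```python
# Variance via Var(X) = E[X^2] - (E[X])^2.
# The pmf below is assumed for illustration; it is not Table 1.6.
dist = {0: 0.25, 1: 0.50, 2: 0.25}   # x -> f(x)

e_x   = sum(x * p for x, p in dist.items())      # E[X]
e_x2  = sum(x * x * p for x, p in dist.items())  # E[X^2]
var_x = e_x2 - e_x ** 2                          # the shortcut formula

print(e_x, e_x2, var_x)
```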

Measures of linear relationship

A very important measure for a linear relationship between two random variables is the covariance. The covariance between X and Y is defined as

Cov[X, Y] = E[(X - E[X])(Y - E[Y])] = E[XY] - E[X]E[Y]

The covariance is a measure of how much two random variables vary together. When two variables tend to vary in the same direction, that is, when the two variables tend to be above or below their expected values at the same time, the covariance is positive. If they tend to vary in opposite directions, that is, when one tends to be above its expected value when the other is below its expected value, the covariance is negative. If the covariance is zero, we say that there is no linear relation between the two random variables.

Important properties of the covariance

The covariance measure is level dependent and has a range from minus infinity to plus infinity. That makes it very hard to compare covariances between different pairs of variables. For that reason it is often more convenient to standardize the covariance so that it becomes unit free and works within a much narrower range. One such standardization gives us the correlation between the two random variables.

The correlation between X and Y is defined as

Corr[X, Y] = Cov[X, Y] / (σX σY)    (1.13)

where σX and σY denote the standard deviations of X and Y.

The correlation coefficient is a measure of the strength of the linear relationship and ranges from -1 to 1.

Example 1.16

Calculate the covariance and correlation for X and Y using the information from the joint probability mass function given in Table 1.7.

        Y = 1   Y = 2   Y = 3
X = 1    0       0.1     0
X = 2    0.3     0.2     0.1
X = 3    0       0.3     0

Table 1.7 The joint probability mass function for X and Y

We will start with the covariance. Hence we have to find E[XY], E[X] and E[Y]. We have

E[X] = 1 × 0.1 + 2 × 0.6 + 3 × 0.3 = 2.2
E[Y] = 1 × 0.3 + 2 × 0.6 + 3 × 0.1 = 1.8

E[XY] = 1×1×0 + 1×2×0.1 + 1×3×0 + 2×1×0.3 + 2×2×0.2 + 2×3×0.1 + 3×1×0 + 3×2×0.3 + 3×3×0 = 4

This gives Cov[X, Y] = 4 - 2.2 × 1.8 = 0.04 > 0.

We will now calculate the correlation coefficient. For that we need V[X] and V[Y].

E[X²] = 1² × 0.1 + 2² × 0.6 + 3² × 0.3 = 5.2
E[Y²] = 1² × 0.3 + 2² × 0.6 + 3² × 0.1 = 3.6
V[X] = E[X²] - (E[X])² = 5.2 - 2.2² = 0.36
V[Y] = E[Y²] - (E[Y])² = 3.6 - 1.8² = 0.36

Using these calculations we may finally calculate the correlation using (1.13):

Corr[X, Y] = 0.04 / (√0.36 × √0.36) = 0.04 / 0.36 ≈ 0.11
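The whole worked example can be reproduced in a few lines of Python from the joint probability mass function (the probabilities below are the ones used in the calculations above):

```python
import math

# Joint pmf f(x, y) for the worked example (Table 1.7).
joint = {
    (1, 1): 0.0, (1, 2): 0.1, (1, 3): 0.0,
    (2, 1): 0.3, (2, 2): 0.2, (2, 3): 0.1,
    (3, 1): 0.0, (3, 2): 0.3, (3, 3): 0.0,
}

e_x  = sum(x * p for (x, y), p in joint.items())       # E[X]  = 2.2
e_y  = sum(y * p for (x, y), p in joint.items())       # E[Y]  = 1.8
e_xy = sum(x * y * p for (x, y), p in joint.items())   # E[XY] = 4
e_x2 = sum(x * x * p for (x, y), p in joint.items())   # E[X^2] = 5.2
e_y2 = sum(y * y * p for (x, y), p in joint.items())   # E[Y^2] = 3.6

cov = e_xy - e_x * e_y                         # Cov[X, Y] = 0.04
var_x = e_x2 - e_x ** 2                        # V[X] = 0.36
var_y = e_y2 - e_y ** 2                        # V[Y] = 0.36
corr = cov / math.sqrt(var_x * var_y)          # about 0.11

print(round(cov, 2), round(corr, 3))
```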

Skewness and kurtosis

The last concepts that will be discussed in this chapter are related to the shape and the form of a probability distribution. The skewness of a distribution is defined in the following way:

S = E[(X - μ)³] / σ³

where μ = E[X] and σ is the standard deviation of X.

A distribution can be skewed to the left or to the right. If it is not skewed we say that the distribution is symmetric. Figure 1.1 gives two examples of continuous distribution functions.

Figure 1.1 Skewness of two continuous distributions

Kurtosis is a measure of whether the data are peaked or flat relative to a normal distribution. Formally it is defined in the following way:

K = E[(X - μ)⁴] / σ⁴

When a symmetric distribution follows the standard normal it has a kurtosis equal to 3. A distribution that is long tailed compared with the standard normal distribution has a kurtosis greater than 3, and if it is short tailed compared to the standard normal distribution it has a kurtosis less than 3. It should be observed that many statistical programs standardize the kurtosis and present it as K - 3, which means that a standard normal distribution receives a kurtosis of 0.
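As a sketch, both moment ratios can be computed directly from a probability mass function; the symmetric distribution below is assumed for illustration, so its skewness is 0 and its kurtosis falls below 3 (shorter tails than the normal):

```python
# Skewness E[(X-mu)^3]/sigma^3 and kurtosis E[(X-mu)^4]/sigma^4
# for a discrete pmf (assumed values, for illustration only).
dist = {0: 0.1, 1: 0.4, 2: 0.4, 3: 0.1}   # x -> f(x)

def moment(k, about=0.0):
    # E[(X - about)^k] over the pmf
    return sum((x - about) ** k * p for x, p in dist.items())

mu    = moment(1)                    # mean
sigma = moment(2, mu) ** 0.5         # standard deviation
skew  = moment(3, mu) / sigma ** 3   # 0 here: the pmf is symmetric
kurt  = moment(4, mu) / sigma ** 4   # < 3 here: flatter than the normal

print(skew, kurt, kurt - 3)   # many packages report the excess kurtosis kurt - 3
```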
