# Consequences

The consequences of having an autocorrelated error term are very similar to those that arise with a heteroskedastic error term. In short:

1. The estimated slope coefficients are unbiased and consistent.

2. With positive autocorrelation the standard errors are biased and too small.

3. With negative autocorrelation the standard errors are biased and too large.

Since the expected value of the error term is zero regardless of any autocorrelation, the estimated slope coefficients are still unbiased. That is, the properties of unbiasedness and consistency do not require uncorrelated error terms. You can confirm this by revisiting chapter 3, where we derived the sample estimators and discussed their properties.
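This unbiasedness claim is easy to check numerically. Below is a minimal simulation sketch (the parameter values are arbitrary illustrations, not from the text): the error term follows a first-order autoregressive scheme, yet the average OLS slope across many samples stays at the true value.

```python
import numpy as np

# Illustrative model: y_t = beta0 + beta1*x_t + u_t, with AR(1) errors
# u_t = rho*u_{t-1} + e_t. OLS slope should remain unbiased despite rho != 0.
rng = np.random.default_rng(42)
beta0, beta1, rho = 1.0, 2.0, 0.7
n, reps = 200, 2000

slopes = np.empty(reps)
for i in range(reps):
    x = rng.normal(size=n)
    e = rng.normal(size=n)
    u = np.empty(n)
    u[0] = e[0]
    for t in range(1, n):
        u[t] = rho * u[t - 1] + e[t]
    y = beta0 + beta1 * x + u
    xd = x - x.mean()                      # deviations from the mean
    slopes[i] = xd @ y / (xd @ xd)         # OLS slope estimate

# The average estimate stays near the true slope beta1 = 2.0
print(slopes.mean())
```

The autocorrelation inflates the sampling variance of the slope, but it does not shift its center away from the true parameter.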

The efficiency property of the OLS estimator does, however, depend on the assumption of no autocorrelation. To see this, it is useful to recall what the variance of the slope estimator looks like in the simple regression case. Assume the following setup:

$$Y_t = \beta_0 + \beta_1 X_t + u_t$$

$$u_t = \rho u_{t-1} + e_t, \qquad |\rho| < 1$$

where $e_t$ is an uncorrelated error term with mean zero and constant variance. With this setup we observe that the error term, $u_t$, is autoregressive of order one. The covariance between two adjacent error terms is therefore given by:

$$\operatorname{Cov}(u_t, u_{t-1}) = \rho \sigma_u^2$$

When generalizing this expression to an arbitrary distance between two error terms, it is possible to show that it equals

$$\operatorname{Cov}(u_t, u_{t-j}) = \rho^j \sigma_u^2$$

With this setup, together with the knowledge from chapter 3 of what the variance of the OLS estimator looks like, we can examine the variance under the assumption of autocorrelation. The variance of the slope coefficient can be expressed in the following way:

$$\operatorname{Var}(b_1) = \frac{\sigma_u^2}{\sum x_t^2}\left(1 + 2\rho\frac{\sum x_t x_{t+1}}{\sum x_t^2} + 2\rho^2\frac{\sum x_t x_{t+2}}{\sum x_t^2} + \cdots\right) \qquad (10.10)$$

where $x_t = X_t - \bar{X}$ denotes the deviation from the mean.
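The covariance expressions used above follow by direct substitution under the first-order autoregressive scheme. A short derivation, using that $e_t$ is uncorrelated with past values of $u$ and that $\operatorname{Var}(u_t) = \sigma_u^2$ is constant when $|\rho| < 1$:

```latex
\operatorname{Cov}(u_t, u_{t-1})
  = E\!\left[(\rho u_{t-1} + e_t)\, u_{t-1}\right]
  = \rho E\!\left[u_{t-1}^2\right] + E\!\left[e_t u_{t-1}\right]
  = \rho \sigma_u^2 ,
```

and repeating the substitution $j$ times gives $\operatorname{Cov}(u_t, u_{t-j}) = \rho^j \sigma_u^2$.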

If the autocorrelation coefficient were zero (i.e., $\rho = 0$), the infinite series within the parentheses in (10.10) would equal one. However, if we ignore the autocorrelation when it is present, we disregard this term, which biases the variance of the slope coefficient. To get a picture of how large the parenthesis is, it is useful to rewrite it into something more compact. In order to do that, we need to impose some assumptions on the behavior of $X$. We assume that the variance of $X$ is constant and given by $\sigma_X^2$, and that $X$ follows a first-order autoregressive scheme, just like the error term of the model. This implies that

$$\operatorname{Cov}(X_t, X_{t-j}) = r^j \sigma_X^2$$

with $r$ being the correlation coefficient for $X_t$ and $X_{t-1}$. If we apply these assumptions to (10.10) we receive

$$\operatorname{Var}(b_1) = \frac{\sigma_u^2}{\sum x_t^2} \times \frac{1 + \rho r}{1 - \rho r} \qquad (10.11)$$

In order to arrive at the compact expression given by (10.11) you have to know how to deal with geometric series. If you do not, do not worry! The important thing here is to see how the sign of the autocorrelation coefficient and the correlation between $X_t$ and $X_{t-1}$ affect the size of the variance and induce a bias when the autocorrelation is ignored. We know that both $\rho$ and $r$ take values between -1 and 1, since they represent correlation coefficients. With this setup we can analyze the size of the adjustment factor due to autocorrelation. Let us investigate two basic and common cases:
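For readers who do want the geometric-series step: the series in the parentheses of (10.10) becomes geometric once $\sum x_t x_{t+j} / \sum x_t^2$ is replaced by $r^j$, and for $|\rho r| < 1$ it sums to

```latex
1 + 2\rho r + 2(\rho r)^2 + 2(\rho r)^3 + \cdots
  = 1 + 2\sum_{j=1}^{\infty} (\rho r)^j
  = 1 + \frac{2\rho r}{1 - \rho r}
  = \frac{1 + \rho r}{1 - \rho r} ,
```

which is exactly the adjustment factor appearing in (10.11).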

1) $\rho > 0$ and $r > 0$ (positive autocorrelation)

When this is true, the adjustment factor becomes

$$\frac{1 + \rho r}{1 - \rho r} > 1$$

which means that the usual estimates of the variance will be too small, and coefficients may appear more significant than they really are. With a fixed value of $r$, the adjustment factor increases with the size of the autocorrelation coefficient, which increases the bias. If the value of $r$ were zero, as it would be in cross-sectional data for instance, the adjustment factor would equal one and the bias of the variance would be zero, independent of the size of the autocorrelation coefficient. Most macroeconomic time series have an $r$ value that is different from zero, however, and hence that case would in general not appear.
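This downward bias can be made visible in a simulation. The sketch below (with arbitrary illustrative parameters) draws both $X$ and $u$ from first-order autoregressive schemes with $\rho > 0$ and $r > 0$, then compares the actual sampling variance of the slope with the average of the conventional OLS variance estimate $s^2 / \sum x_t^2$, which ignores the adjustment factor:

```python
import numpy as np

# Case 1 illustration: rho > 0 and r > 0. The conventional variance estimate
# s^2 / sum(x_t^2) should understate the true sampling variance of the slope
# by roughly the adjustment factor (1 + rho*r) / (1 - rho*r).
rng = np.random.default_rng(7)
rho, r = 0.7, 0.7          # autocorrelation in u and in X
n, reps = 200, 2000

slopes = np.empty(reps)
naive_var = np.empty(reps)
for i in range(reps):
    ex, eu = rng.normal(size=n), rng.normal(size=n)
    x, u = np.empty(n), np.empty(n)
    x[0], u[0] = ex[0], eu[0]
    for t in range(1, n):
        x[t] = r * x[t - 1] + ex[t]      # AR(1) regressor
        u[t] = rho * u[t - 1] + eu[t]    # AR(1) error term
    y = 1.0 + 2.0 * x + u
    xd = x - x.mean()
    b1 = xd @ y / (xd @ xd)
    resid = (y - y.mean()) - b1 * xd     # OLS residuals
    s2 = resid @ resid / (n - 2)
    slopes[i] = b1
    naive_var[i] = s2 / (xd @ xd)        # conventional variance estimate

# Ratio of the true sampling variance to the average naive estimate:
# well above 1, in the neighborhood of (1 + rho*r)/(1 - rho*r)
print(slopes.var() / naive_var.mean())
```

The conventional formula understates the variance by a factor of roughly three at these parameter values, so t-ratios computed from it would be far too optimistic.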

2) $\rho < 0$ and $r > 0$ (negative autocorrelation)

With negative autocorrelation, the adjustment factor becomes

$$0 < \frac{1 + \rho r}{1 - \rho r} < 1$$

which means that the usual estimates will be too large, and coefficients will appear less significant than they really are. With a fixed value of $r$, an increasing value of the autocorrelation coefficient in absolute terms makes the adjustment factor smaller, which increases the bias.
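To put numbers on both cases, the adjustment factor from (10.11) can simply be evaluated for a few illustrative $(\rho, r)$ pairs (the values below are arbitrary examples, not from the text):

```python
# Adjustment factor (1 + rho*r)/(1 - rho*r) from (10.11): greater than one
# under positive autocorrelation, between zero and one under negative.
def adjustment_factor(rho, r):
    return (1 + rho * r) / (1 - rho * r)

for rho in (0.3, 0.6, 0.9, -0.3, -0.6, -0.9):
    print(f"rho = {rho:+.1f}, r = 0.5  ->  factor = {adjustment_factor(rho, 0.5):.2f}")
```

At $r = 0.5$, the factor grows from about 1.35 at $\rho = 0.3$ to about 2.64 at $\rho = 0.9$, and shrinks toward 0.38 as $\rho$ approaches $-0.9$, mirroring the two cases discussed above.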

Hence, when we have autocorrelation amongst our error terms, we get biased estimates of the standard errors of the coefficients. Furthermore, the coefficient of determination and the usual estimator of the error variance of the model will be biased as well. Autocorrelation is therefore a serious problem that needs to be addressed.