The three most common statistical test procedures for identifying a problem of heteroskedasticity are the Goldfeld-Quandt test, the Breusch-Pagan test, and White's test. Below we briefly describe the logic of each test and how it is implemented.
The Goldfeld-Quandt test (GQ) works under the assumption that the error variance is equal for all observations, which is to say that the error term is homoskedastic. When this is true, the variance of one part of the sample must be the same as the variance of another part of the sample, independent of how the sample is sorted. If this is not the case, we must conclude that the data at hand are heteroskedastic. The following basic steps complete the GQ-test:
Sort the sample according to a variable that you believe drives the size of the variance. If the variable X1 is related to the size of the variance, sort the data set in increasing order of X1, divide the sample into three groups of equal size, and omit the middle group. If the sample size is very small (i.e., each group would contain fewer than 100 observations), it is enough to divide the sample into two groups without omitting any observations.
Run the model for each subsample and calculate the Residual Sum of Squares (RSS) for each group:
Form the hypothesis that is to be tested:
Use the two Residual Sums of Squares to calculate the variance of the two subsamples and form the test function:
As a rule of thumb, one should always put the larger variance in the numerator. Choose a significance level and find the corresponding critical value to compare the test value with. If the test value is larger than the critical value, reject the null hypothesis.
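The steps above can be sketched in code. The following NumPy sketch is illustrative only: the function name `goldfeld_quandt` and the simulated data are our own constructions, not part of the text, and it uses a plain least-squares fit on each tail of the sorted sample.

```python
import numpy as np

def goldfeld_quandt(y, X, sort_col, drop_frac=1/3):
    """Goldfeld-Quandt test: sort by the suspect variable, drop the
    middle fraction, fit OLS on each tail, and compare the residual
    variances (larger variance in the numerator, per the rule of thumb)."""
    order = np.argsort(X[:, sort_col])
    y, X = y[order], X[order]
    n, k = X.shape
    m = int(n * (1 - drop_frac) / 2)          # observations per tail group

    def sigma2(ys, Xs):
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        resid = ys - Xs @ beta
        return resid @ resid / (len(ys) - k)  # RSS / degrees of freedom

    s_low = sigma2(y[:m], X[:m])
    s_high = sigma2(y[-m:], X[-m:])
    return max(s_low, s_high) / min(s_low, s_high)

# Simulated example: the error standard deviation grows with x,
# so the test value should land well above 1.
rng = np.random.default_rng(42)
n = 300
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = 1 + 2 * x + rng.normal(0, 0.2 + 0.5 * x)  # heteroskedastic errors
F = goldfeld_quandt(y, X, sort_col=1)
print(round(F, 2))
```

Compare the returned ratio with an F critical value whose degrees of freedom come from the two subsample regressions.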
We are going to investigate model (9.6) used in Example 9.1 to see whether we can identify any heteroskedasticity using the GQ-test. In the graphical analysis we found an indication of heteroskedasticity related to the number of years of schooling. We therefore sort the data set in increasing order of years of schooling and delete the middle 33 percent of the observations. We then estimate a regression for each of the two remaining sub-samples, and using the results from these regressions we calculate the corresponding variance for each:
Using the estimated variances for the two subsamples we can calculate the test value:
Choosing a significance level of 5 percent, we found a critical value equal to 1.16. Most statistical tables do not offer critical values for the degrees of freedom we have in this example, and approximating the critical value with the numbers valid for infinite degrees of freedom in both numerator and denominator would not be meaningful. We therefore used Excel to calculate the critical value for the degrees of freedom in our case.
Since our test value is larger than the critical value, we conclude that our model suffers from heteroskedasticity and that years of schooling is at least partly responsible. A general problem with this test is that it tends to reject the null hypothesis too often. That is, it is very sensitive to very small differences, especially when the degrees of freedom are at the levels of this example, since they produce very small critical values.
As a second step, you should also test the second variable, years of work experience, which could be part of the problem as well. However, we will not go through that here, and leave it to the reader.
The Breusch-Pagan test (BP) is also a popular test procedure presented in most econometric textbooks. The BP-test is slightly more general than the GQ-test, since it allows more than one variable at a time to be tested. The starting point is a set of explanatory variables that we believe drive the size of the variance of the error term. We will call them X1, X2, ..., Xh, and we claim that the following could be a plausible specification for our error variance:
The variables included in (9.13) could be just a subset of the explanatory variables of the model, or all of them. In Example 9.1 we could not be conclusive about whether one or both of our variables were driving the size of the variance. In a case like that, it is advisable to include both variables in the specification of the variance given by (9.13). The functional form is not expressed explicitly in (9.13), but we are going to use a linear specification, just as for the main model.
The hypothesis of this test is:

H0: A1 = A2 = ... = Ah = 0
H1: Aj ≠ 0 for at least one j, j = 1, 2, ..., h    (9.14)
In order to test the hypothesis we have to go through the following basic steps:
1. Run the regression for the model you believe suffers from heteroskedasticity using OLS.
2. Save the residuals and square them (ei2). Use the squared residuals to run the following auxiliary regression:
Equation (9.15) is a representation of (9.13) with a linear specification.
Even though it looks as if we could use the classical F-test approach to test the joint hypothesis, this turns out not to be possible, since the dependent variable is a construction based on another model. Instead, the following test statistic can be used to test the null hypothesis:
where n is the number of observations used in the regression of (9.15) and R2 is the coefficient of determination obtained from (9.15). It turns out that the product of these two terms is Chi-square distributed with h degrees of freedom, where h is the number of restrictions, which in this case corresponds to the number of variables included in (9.15). The test value should therefore be compared with a critical value from the Chi-square table at a suitable significance level.
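The n·R2 computation can be sketched directly. In the following NumPy sketch the function name `breusch_pagan_lm` and the simulated two-variable data are illustrative assumptions of ours, not from the text:

```python
import numpy as np

def breusch_pagan_lm(resid, Z):
    """Breusch-Pagan LM statistic: regress the squared residuals on the
    candidate variance drivers Z (first column a constant) and form n*R^2.
    Compare against a Chi-square critical value with h degrees of freedom,
    where h is the number of columns of Z excluding the constant."""
    e2 = resid ** 2
    beta, *_ = np.linalg.lstsq(Z, e2, rcond=None)
    fitted = Z @ beta
    r2 = 1 - np.sum((e2 - fitted) ** 2) / np.sum((e2 - e2.mean()) ** 2)
    return len(resid) * r2

# Simulated example with two explanatory variables, where the error
# variance is driven by x1 only.
rng = np.random.default_rng(1)
n = 500
x1, x2 = rng.uniform(1, 5, n), rng.uniform(1, 5, n)
X = np.column_stack([np.ones(n), x1, x2])
y = 2 + x1 + 0.5 * x2 + rng.normal(0, 0.3 * x1, n)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # step 1: OLS on the model
resid = y - X @ beta                           # step 2: save residuals
lm = breusch_pagan_lm(resid, X)                # h = 2 here
print(round(lm, 2))
```

With h = 2, the statistic is compared with the Chi-square critical value 5.99 at the 5 percent level.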
In this example we will use the same data set and the same model as in Example 9.2, but this time the test will involve both variables included in the model. We choose not to include the squared terms, even though they could in principle be included. Following the basic procedure of the BP-test, we specify and estimate the variance function, with standard errors given within parentheses:
Using this information we are able to calculate the test value:
Choosing a significance level of 5 percent, the Chi-square table with 2 degrees of freedom gives a critical value of 5.99. Hence, the test value is smaller than the critical value and we are unable to reject the null hypothesis. This is a conflicting result compared with the GQ-test. Since the GQ-test is very sensitive to small differences, we believe the result of this test is more reliable. However, the BP-test requires large data sets to be valid, and it is sensitive to violations of the normality assumption. Since we have more than 1,000 observations, we believe our sample is sufficiently large, but to be sure we will move on to yet another common test, White's test.
White's test is very similar to the BP-test, but it does not assume any prior knowledge of the form of the heteroskedasticity; instead, it examines whether the error variance is affected by any of the regressors, their squares, or their cross products. It is therefore also a large-sample test, but it does not depend on any normality assumption. Hence, this third test is more robust than the two procedures described above, and it is sometimes called White's General Heteroskedasticity test (WGH). The basic steps of the procedure are as follows for a model with two explanatory variables, where (9.17) represents the main model and (9.18) the variance function that contains all the variables of the main model together with their squares and cross products:
Estimate the parameters of equation (9.17) and save the residuals. Square the residuals and run the auxiliary regression model given by (9.18).
Using the results from the auxiliary regression you can calculate the test value using (9.16). If the test value is larger than the critical value chosen, you reject the null hypothesis of homoskedasticity.
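The WGH steps can be sketched in the same style. This NumPy sketch builds the auxiliary regressor matrix of levels, squares, and cross products for a two-variable model; the function name `white_lm` and the simulated data are our own illustrative assumptions:

```python
import numpy as np

def white_lm(y, X):
    """White's test: fit the main model by OLS, then regress the squared
    residuals on the regressors, their squares, and their cross products.
    Returns n*R^2 and its degrees of freedom (number of auxiliary
    regressors excluding the constant)."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e2 = (y - X @ beta) ** 2                      # squared residuals
    V = X[:, 1:]                                  # non-constant regressors
    cross = [V[:, i] * V[:, j]
             for i in range(V.shape[1]) for j in range(i + 1, V.shape[1])]
    Z = np.column_stack([np.ones(n), V, V ** 2] + cross)
    gamma, *_ = np.linalg.lstsq(Z, e2, rcond=None)
    r2 = 1 - np.sum((e2 - Z @ gamma) ** 2) / np.sum((e2 - e2.mean()) ** 2)
    return n * r2, Z.shape[1] - 1

# Homoskedastic two-variable example: with two regressors the auxiliary
# regression has 5 terms, so the statistic is Chi-square with 5 df.
rng = np.random.default_rng(7)
n = 500
x1, x2 = rng.uniform(1, 5, n), rng.uniform(1, 5, n)
X = np.column_stack([np.ones(n), x1, x2])
y = 2 + x1 + 0.5 * x2 + rng.normal(0, 1, n)
stat, df = white_lm(y, X)
print(round(stat, 2), df)
```

The returned statistic is compared with the Chi-square critical value for the reported degrees of freedom at the chosen significance level.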
We repeat the test executed in Example 9.3, applying the WGH-test instead. Observe that the only difference lies in the specification of the variance function. Following the basic steps given above, we obtained the following results, with standard errors reported within parentheses:
Observe that the coefficients and their standard errors are different from zero, even though some of them appear to be zero because they are reported with only three decimal places. Their t-values are clearly different from zero.
Using these results we can calculate the test value:
The critical value from the Chi-square table, with 5 degrees of freedom and a significance level of 5 percent, equals 11.07, which is larger than the test value. Hence this test confirms the conclusion from the previous test: we are unable to reject the null hypothesis of homoskedasticity. That is, we have no statistical evidence pointing in the direction of heteroskedasticity.