# Stationarity and Unit Root Tests

## Introduction

Many financial time series, such as exchange rates or levels of stock prices, appear to be non-stationary. New statistical issues arise when analyzing non-stationary data. Unit root tests are used to detect the presence and form of non-stationarity.

This chapter reviews the main concepts of non-stationarity of time series and describes some tests for stationarity. More information about such tests can be found in Hamilton (1994), Fuller (1996), Enders (2004), Harris (1995), and Verbeek (2008).

There are two principal methods of detecting nonstationarity:

• Visual inspection of the time series graph and its correlogram;

• Formal statistical tests of unit roots.

A nonstationary time series is called integrated if it can be transformed into a stationary process by first differencing once or a few times. The order of integration is the minimum number of times the series needs to be differenced to yield a stationary series. A time series integrated of order one is denoted I(1). A stationary time series is said to be integrated of order zero, I(0).
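As a minimal illustration (on simulated data, since the chapter works abstractly), a random walk is I(1): its level is non-stationary, but one round of first differencing recovers the underlying stationary shocks:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(1000)   # stationary iid shocks u_t

y = np.cumsum(eps)   # random walk Y_t = Y_{t-1} + u_t: an I(1) series
dy = np.diff(y)      # first difference: Delta Y_t = u_t, an I(0) series

# The sample variance of the level typically grows with the window length,
# while that of the differenced series stays roughly constant near Var(u_t).
print(np.var(y[:200]), np.var(y))
print(np.var(dy[:200]), np.var(dy))
```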

## Unit root tests

Let us consider a time series $Y_t$ of the form

$$Y_t = \rho Y_{t-1} + u_t.$$

Unit root tests are based on testing the null hypothesis $H_0\colon \rho = 1$ against the alternative $H_1\colon \rho < 1$. They are called unit root tests because under the null hypothesis the characteristic polynomial has a root equal to unity. Stationarity tests, on the other hand, take the null hypothesis that $Y_t$ is trend stationary.

### Dickey-Fuller test

One commonly used test for unit roots is the Dickey-Fuller test. In its simplest form it considers an AR(1) process

$$Y_t = \rho Y_{t-1} + u_t,$$

where $u_t$ is an IID sequence of random variables. We want to test

$$H_0\colon \rho = 1 \quad\text{against}\quad H_1\colon \rho < 1.$$

Under the null hypothesis $Y_t$ is non-stationary (a random walk without drift). Under the alternative hypothesis, $Y_t$ is a stationary AR(1) process.

Due to the non-stationarity of $Y_t$ under the null, the standard t-statistic does not follow a t distribution, not even asymptotically. To test the null hypothesis, it is possible to use the statistic

$$DF = \frac{\hat\rho - 1}{SE(\hat\rho)}.$$

Critical values, however, have to be taken from the appropriate distribution, which under the null hypothesis of non-stationarity is nonstandard. Asymptotic critical values of DF based on computer simulations are given in Fuller (1996).
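Such simulated critical values can be approximated directly; the sketch below (sample size, replication count and seed are arbitrary choices of mine) generates driftless random walks under the null, estimates $\rho$ by least squares, and tabulates percentiles of $(\hat\rho - 1)/SE(\hat\rho)$:

```python
import numpy as np

def df_tstat(y):
    """t-statistic for H0: rho = 1 in Y_t = rho*Y_{t-1} + u_t (no constant)."""
    ylag, ycur = y[:-1], y[1:]
    rho = (ylag @ ycur) / (ylag @ ylag)      # OLS slope through the origin
    resid = ycur - rho * ylag
    s2 = (resid @ resid) / (len(ycur) - 1)   # residual variance estimate
    se = np.sqrt(s2 / (ylag @ ylag))         # standard error of rho-hat
    return (rho - 1.0) / se

rng = np.random.default_rng(42)
T, reps = 250, 5000
stats = np.array([df_tstat(np.cumsum(rng.standard_normal(T)))
                  for _ in range(reps)])

# These percentiles approximate the tabulated Dickey-Fuller critical values
# for the no-constant case (about -2.58, -1.95, -1.62 at 1%, 5%, 10%).
print(np.percentile(stats, [1, 5, 10]))
```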

The above test is based on the assumption that the error terms are iid and there is no drift (intercept term) in the model. The limiting distribution will be wrong if these assumptions are false.

A more general form of the Dickey-Fuller test employs other variants of the time series process. Consider the following three models for the data generating process of $Y_t$:

$$Y_t = \rho Y_{t-1} + u_t, \tag{4.2.2}$$

$$Y_t = \alpha + \rho Y_{t-1} + u_t, \tag{4.2.3}$$

$$Y_t = \alpha + \beta t + \rho Y_{t-1} + u_t, \tag{4.2.4}$$

with $u_t$ being an iid process.

Dickey and Fuller (1979) derive limiting distributions for the least squares t-statistic for the null hypothesis that $\rho = 1$, and for F-statistics (Wald statistics) for null hypotheses asserting combinations of the linear restrictions $\rho = 1$, $\alpha = 0$ and $\beta = 0$, where the estimated models are (4.2.2) to (4.2.4) but in each case (4.2.2) with $\rho = 1$ is the true data generating process.
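A sketch of estimating the three specifications by least squares on one simulated series and computing the t-statistic for $\rho = 1$ in each (the function name, column layout and seed are my own choices):

```python
import numpy as np

def tstat_rho(y, constant=False, trend=False):
    """Least squares t-statistic for H0: rho = 1 in models (4.2.2)-(4.2.4)."""
    ycur, ylag = y[1:], y[:-1]
    cols = [ylag]
    if constant:
        cols.append(np.ones_like(ylag))
    if trend:
        cols.append(np.arange(1, len(y), dtype=float))  # linear time trend
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, ycur, rcond=None)
    resid = ycur - X @ beta
    s2 = resid @ resid / (len(ycur) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return (beta[0] - 1.0) / np.sqrt(cov[0, 0])  # rho-hat is the first coefficient

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(500))  # true DGP: (4.2.2) with rho = 1

for const, tr, label in [(False, False, "(4.2.2)"),
                         (True, False, "(4.2.3)"),
                         (True, True, "(4.2.4)")]:
    print(label, round(tstat_rho(y, const, tr), 3))
```

Each statistic must be compared with the Dickey-Fuller critical values for its own specification, since the limiting distribution differs across the three models.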

### Augmented Dickey-Fuller test

Dickey and Fuller (1981) show that the limiting distributions and critical values obtained under the assumption of an iid $u_t$ process remain valid when $u_t$ is autoregressive, provided the augmented Dickey-Fuller (ADF) regression is run. Assume the data are generated according to (4.2.2) with $\rho = 1$ and that

$$u_t = \phi_1 u_{t-1} + \dots + \phi_p u_{t-p} + \varepsilon_t, \tag{4.2.5}$$

where $\varepsilon_t$ are iid. Consider the regression

$$\Delta Y_t = \pi Y_{t-1} + \sum_{j=1}^{p} \gamma_j \Delta Y_{t-j} + \varepsilon_t$$

and test $H_0\colon \pi = 0$ versus $H_1\colon \pi < 0$. Given the equation for $u_t$ in (4.2.5), we can write

$$\Delta Y_t = \phi_1 u_{t-1} + \dots + \phi_p u_{t-p} + \varepsilon_t.$$

Since under $\rho = 1$ we have $u_t = Y_t - Y_{t-1} = \Delta Y_t$, this equation can be rewritten as

$$\Delta Y_t = \phi_1 \Delta Y_{t-1} + \dots + \phi_p \Delta Y_{t-p} + \varepsilon_t,$$

which is the ADF regression with $\pi = 0$.

Said and Dickey (1984) provide a generalization of this result to ARMA(p, q) error terms.

**Procedure.** Before using the ADF test we have to decide how many lags of $\Delta Y$ to include in the regression. This can be done by sequentially adding lags and testing for serial correlation using Lagrange multiplier tests until white noise residuals are achieved.
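The lag-selection loop can be sketched as follows. As a simplification I use a one-lag Breusch-Godfrey-style LM statistic ($n R^2$ from an auxiliary regression, compared with the 5% $\chi^2_1$ value 3.84); the DGP, seed and lag cap are illustrative assumptions:

```python
import numpy as np

def adf_regression(y, k):
    """OLS of Delta Y_t on Y_{t-1} and k lagged differences (no deterministics).
    Returns the t-statistic on pi, the residuals, and the regressor matrix."""
    dy = np.diff(y)
    n = len(dy)
    Y = dy[k:]
    cols = [y[k:n]] + [dy[k - j: n - j] for j in range(1, k + 1)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    s2 = resid @ resid / (len(Y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0]), resid, X

def lm_serial_corr(resid, X):
    """LM statistic (one residual lag): n * R^2 from regressing the residual
    on the original regressors plus its own first lag (uncentered R^2)."""
    e = resid[1:]
    Z = np.column_stack([X[1:], resid[:-1]])
    g, *_ = np.linalg.lstsq(Z, e, rcond=None)
    u = e - Z @ g
    r2 = 1 - (u @ u) / (e @ e)
    return len(e) * r2

rng = np.random.default_rng(7)
u = np.zeros(501)
for t in range(1, 501):            # AR(1) errors, so lagged differences matter
    u[t] = 0.5 * u[t - 1] + rng.standard_normal()
y = np.cumsum(u)                   # unit root with autocorrelated shocks

# choose the smallest k (cap of 8 is arbitrary) whose residuals show
# no first-order serial correlation at the 5% level
for k in range(0, 9):
    t_pi, resid, X = adf_regression(y, k)
    if lm_serial_corr(resid, X) < 3.84:
        print("chosen lags:", k, "ADF t-statistic:", round(t_pi, 3))
        break
```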

Use an F-test of the null hypothesis $(\beta, \rho) = (0, 1)$ against the alternative $(\beta, \rho) \neq (0, 1)$. If the null is rejected we know that

$$\beta \neq 0,\ \rho = 1; \qquad \beta = 0,\ \rho \neq 1; \qquad \text{or} \qquad \beta \neq 0,\ \rho \neq 1,$$

and the next step is to test $\rho = 1$ using the t-statistic obtained from estimating the augmented version of (4.2.4), with critical values taken from the standard normal tables. Critical values from the standard normal are appropriate when $\beta$ is non-zero, so that if this null hypothesis is not rejected we can rule out the second and third cases (if $\beta$ is zero the critical values are non-standard, but will be smaller than the standard normal ones). Thus, if $\rho = 1$ is accepted we conclude that $\beta \neq 0$ and $\rho = 1$, so that the series has a unit root and a linear trend.

If we reject the null then the first alternative can be dismissed. This leaves the following two alternatives:

$$\beta = 0,\ \rho \neq 1 \qquad \text{or} \qquad \beta \neq 0,\ \rho \neq 1.$$

In either case $\rho \neq 1$: there is no unit root and conventional test procedures can be used. Thus we may carry out a t-test of the null that $\beta = 0$.

If we cannot reject $(\beta, \rho) = (0, 1)$, we know that the series has a unit root with no trend but with possible drift. To support the conclusion that $\rho = 1$ we may test it directly, given that $\beta$ is assumed to be zero.

If we wish to establish whether the series has non-zero drift, further tests are required. Note that we know $(\beta, \rho) = (0, 1)$, and so we might carry out the F-test of

$$H_0\colon (\alpha, \beta, \rho) = (0, 0, 1) \quad\text{against}\quad H_1\colon (\alpha, \beta, \rho) \neq (0, 0, 1).$$

If we cannot reject the null hypothesis, the series is a random walk without drift. If we reject it, the series is a random walk with drift.

We may wish to support these findings by estimating (4.2.3), that is, setting $\beta$ to zero as suggested by the previous tests. If $\beta$ is actually zero, then tests on $\alpha$ and $\rho$ should have greater power once this restriction is imposed.
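The F-type comparisons in this procedure reduce to restricted-versus-unrestricted sums of squared residuals. A sketch of the test of $(\beta, \rho) = (0, 1)$ in model (4.2.4) (the statistic Dickey and Fuller call $\Phi_3$, with a 5% critical value of roughly 6.25 in large samples), applied to an illustrative random walk and a trend-stationary series of my own construction:

```python
import numpy as np

def ols(X, y):
    """Least squares fit; returns coefficients and sum of squared residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid @ resid

def phi3_fstat(y):
    """F statistic for H0: (beta, rho) = (0, 1) in model (4.2.4),
    computed as ((SSR_r - SSR_u)/q) / (SSR_u/(n - k)) with q=2, k=3."""
    ycur, ylag = y[1:], y[:-1]
    n = len(ycur)
    const = np.ones(n)
    trend = np.arange(1, n + 1, dtype=float)
    # unrestricted: Y_t = alpha + beta*t + rho*Y_{t-1} + u_t
    _, ssr_u = ols(np.column_stack([const, trend, ylag]), ycur)
    # restricted (beta = 0, rho = 1): Delta Y_t = alpha + u_t
    _, ssr_r = ols(const.reshape(-1, 1), ycur - ylag)
    return ((ssr_r - ssr_u) / 2) / (ssr_u / (n - 3))

rng = np.random.default_rng(3)
rw = np.cumsum(rng.standard_normal(400))                        # null true
trend_stat = 0.05 * np.arange(400) + rng.standard_normal(400)   # null false

print(phi3_fstat(rw), phi3_fstat(trend_stat))
```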

### Phillips and Perron tests

The statistics proposed by Phillips and Perron (1988) (Z statistics) arise from their consideration of the limiting distributions of the various Dickey-Fuller statistics when the assumption that $u_t$ is an iid process is relaxed. The test regression in the Phillips-Perron test is

$$\Delta Y_t = \beta' D_t + \pi Y_{t-1} + u_t,$$

where $D_t$ is a vector of deterministic terms (constant, trend, etc.) and $u_t$ is a stationary process (which may also be heteroskedastic). The PP tests correct for any serial correlation and heteroskedasticity in the errors $u_t$ of the test regression by directly modifying the test statistics. These modified statistics, denoted $Z_t$ and $Z_\pi$, are given by

$$Z_t = \left(\frac{\hat\sigma^2}{\hat\lambda^2}\right)^{1/2} t_{\pi=0} - \frac{1}{2}\left(\frac{\hat\lambda^2 - \hat\sigma^2}{\hat\lambda^2}\right)\left(\frac{T \cdot SE(\hat\pi)}{\hat\sigma^2}\right),$$

$$Z_\pi = T\hat\pi - \frac{T^2 \cdot SE(\hat\pi)}{2\hat\sigma^2}\left(\hat\lambda^2 - \hat\sigma^2\right).$$

The terms $\hat\sigma^2$ and $\hat\lambda^2$ are consistent estimates of the variance parameters

$$\sigma^2 = \lim_{T\to\infty} T^{-1}\sum_{t=1}^T E[u_t^2], \qquad \lambda^2 = \lim_{T\to\infty} \sum_{t=1}^T E\left[T^{-1} S_T^2\right], \qquad S_T = \sum_{t=1}^T u_t.$$

The sample variance of the least squares residuals $\hat u_t$ is a consistent estimate of $\sigma^2$, and a Newey-West long-run variance estimate computed from $\hat u_t$ is a consistent estimate of $\lambda^2$.
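A sketch of $Z_t$ for the constant-only case, with $\hat\lambda^2$ estimated by a Newey-West (Bartlett-weight) long-run variance; the truncation lag, sample size and seed are illustrative choices of mine:

```python
import numpy as np

def newey_west_lrv(u, q):
    """Newey-West estimate of the long-run variance lambda^2 of u_t,
    using Bartlett weights and q autocovariance lags."""
    T = len(u)
    lrv = u @ u / T
    for j in range(1, q + 1):
        gamma_j = u[j:] @ u[:-j] / T
        lrv += 2 * (1 - j / (q + 1)) * gamma_j
    return lrv

def pp_zt(y, q=4):
    """Phillips-Perron Z_t statistic for the test regression
    Delta Y_t = alpha + pi*Y_{t-1} + u_t (constant only)."""
    dy, ylag = np.diff(y), y[:-1]
    T = len(dy)
    X = np.column_stack([np.ones(T), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    u = dy - X @ beta
    s2 = u @ u / (T - 2)                       # sigma^2-hat (residual variance)
    se_pi = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    t_pi = beta[1] / se_pi
    lam2 = newey_west_lrv(u, q)                # lambda^2-hat
    return (np.sqrt(s2 / lam2) * t_pi
            - 0.5 * ((lam2 - s2) / lam2) * T * se_pi / s2)

rng = np.random.default_rng(11)
y = np.cumsum(rng.standard_normal(500))
# with iid errors lambda^2 is close to sigma^2, so the correction term is
# small and Z_t is close to the ordinary ADF t-statistic
print(pp_zt(y))
```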

In the Dickey-Fuller specification we can use the critical values given by Dickey and Fuller for the various statistics if $u_t$ is iid, and we should use Phillips-Perron's counterparts if it is not.

An indication as to whether the Z statistic should be used in addition to (or instead of) the ADF tests might be obtained in the diagnostic statistics from the DF and ADF regressions. If normality, autocorrelation or heterogeneity statistics are significant, one might adopt the Phillips-Perron approach. Furthermore, power may be adversely affected by misspecifying the lag length in the augmented Dickey-Fuller regression, although it is unclear how far this problem is mitigated by choosing the number of lags using data-based criteria, and the Z-tests have the advantage that this choice does not have to be made. Against this, one should avoid the use of the Z test if the presence of negative moving average components is somehow suspected in the disturbances.

Under the null hypothesis that $\pi = 0$, the PP $Z_t$ and $Z_\pi$ statistics have the same asymptotic distributions as the ADF t-statistic and normalized bias statistic. One advantage of the PP tests over the ADF tests is that the PP tests are robust to general forms of heteroskedasticity in the error term $u_t$. Another advantage is that the user does not have to specify a lag length for the test regression.