Example: ARMA in EViews
We start our example with the simulation of an ARMA process and then take a look at its estimation. In order to illustrate the statements in Table 3.1, let us simulate AR(3), MA(2) and ARMA(3, 2) processes and compute their autocorrelation and partial autocorrelation functions.
In particular, we simulate the processes specified in formulae (3.3.2).
To start with, we generate a series of uncorrelated normally distributed residuals (remember, the nrnd function generates standard normally distributed random numbers).
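In EViews this takes a single series command (the series name u is our choice):

```
' generate a workfile-long series of iid N(0,1) residuals
series u = nrnd
```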
Also, we have to generate initial values for the series. Since the highest order of the processes is 3, let us generate the first three values. This can be done by setting the sample to only the first three observations and assigning zero values to all three series.
smpl @first @first+2
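With the sample restricted to the first three observations, the initialization might look as follows (the series names y1, y2 and y3 are our choice):

```
' set the first three observations of each series to zero
series y1 = 0
series y2 = 0
series y3 = 0
```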
Now, we set the sample to the rest of the observations and generate the series according to formulae (3.3.2).
smpl @first+3 @last
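Since the coefficients of (3.3.2) are not reproduced here, the following sketch uses illustrative values; replace them with the coefficients from (3.3.2). EViews evaluates such assignments observation by observation, so the recursive references to lagged values work as intended:

```
' AR(3): y1 depends on its own three lags
series y1 = 0.5*y1(-1) - 0.3*y1(-2) + 0.2*y1(-3) + u
' MA(2): y2 depends on the current and two lagged residuals
series y2 = u + 0.6*u(-1) + 0.3*u(-2)
' ARMA(3, 2): y3 combines both structures
series y3 = 0.5*y3(-1) - 0.3*y3(-2) + 0.2*y3(-3) + u + 0.6*u(-1) + 0.3*u(-2)
```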
Now, we are ready to build and inspect their correlograms. Recall that, in order to build a correlogram, one should click on the icon of the time series being investigated and choose the View/Correlogram... option. The correlograms of the three time series are given in Figures 3.1-3.3.
As we expected, the autocorrelation function of the first series (AR(3)) damps out slowly towards zero, while its partial autocorrelation function has spikes at the first three lags. The autocorrelation function of the second series (MA(2)) has spikes at the first two lags and disappears afterwards (becomes insignificant), while the partial autocorrelation function decays towards zero in an oscillating manner. Both the autocorrelation and partial autocorrelation functions of the third series (ARMA(3, 2)) decay slowly towards zero without any clear spikes.
Figure 3.1: Correlogram of an AR(3) process
Figure 3.2: Correlogram of a MA(2) process
Estimation Estimation of ARMA processes is performed in EViews in the same way as OLS estimation of a linear regression. The only difference is in specifying the autoregressive and moving average terms of the model. If the series has autoregressive components, we should include the terms ar(1), ar(2), etc., as regressors up to the required order. For example, to estimate the first series, type
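a specification along the following lines (assuming the simulated AR(3) series was saved as y1):

```
y1 c ar(1) ar(2) ar(3)
```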
in the estimation equation box. EViews produces the output given in Figure 3.4. All coefficients are significant, as expected, and are very close to the true values.
Figure 3.3: Correlogram of an ARMA(3, 2) process
Figure 3.4: Estimation output of ARMA process
Inference and tests can be performed in the same way as for the OLS regression.
If one needs to estimate a model containing moving average components, the terms ma(1), ma(2), etc., should be included in the model specification. For example, to estimate the second time series, we write
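a specification of this form (assuming the simulated MA(2) series was saved as y2):

```
y2 c ma(1) ma(2)
```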
Autoregressive and moving average terms can be combined to estimate an ARMA model. Thus, the specification of the third series looks like
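the following (with y3 denoting the simulated ARMA(3, 2) series):

```
y3 c ar(1) ar(2) ar(3) ma(1) ma(2)
```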
After having estimated an ARMA model, one can check whether the estimated coefficients satisfy the stationarity and invertibility conditions. This can be done through the View/ARMA Structure... option of the Equation object. For the third series we obtain
Figure 3.5: Table of the roots of the estimated ARMA process
It says that our ARMA series is both stationary and invertible.
If we had not known the order of the ARMA series, we would need to apply one of the information criteria to select the most appropriate order of the series. The following program illustrates how this can be done using the Akaike criterion.
First we need to define the maximal orders for autoregressive and moving average parts and store them into variables pmax and qmax. Also we need to declare a matrix object where the values of the Akaike statistic will be written for each specification of the ARMA process.
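A possible set of declarations, using illustrative maximal orders of 3 (in an EViews program, pmax and qmax are naturally held in control variables):

```
' maximal AR and MA orders to search over
!pmax = 3
!qmax = 3
' matrix to hold the Akaike statistic for each (p, q) specification
matrix(!pmax+1, !qmax+1) aic
```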
Next, we define nested loops which will run through all possible ARMA specifications with orders within the maximal values.
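The loop skeleton might look as follows; the steps described below fill in its body:

```
for !p = 0 to !pmax
  for !q = 0 to !qmax
    ' ... build the specification, estimate, and record the AIC ...
  next
next
```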
As the number of lags included in the model increases, we add a new AR term to the model. For this purpose we create a new string variable %order containing the model specification.
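Inside the loops, the AR part of the specification could be assembled like this (a sketch; the exact code may differ):

```
' append one ar() term per autoregressive lag
for !i = 1 to !p
  %order = %order + " ar(" + @str(!i) + ")"
next
```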
We perform the same procedure for the MA term specification.
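Analogously to the AR terms:

```
' append one ma() term per moving average lag
for !j = 1 to !q
  %order = %order + " ma(" + @str(!j) + ")"
next
```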
Once the model specification is determined and written in the variable %order, we can use a substitution to estimate the corresponding model.
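Using curly-brace substitution and, for instance, the series y3 (the equation name eq is our choice):

```
' estimate the ARMA(p, q) specification held in %order
equation eq.ls y3 c {%order}
' empty %order for the next pass through the loops
%order = ""
```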
The last command nullifies the variable %order for use in the next step of the loops. Now we can write the value of the Akaike criterion for the current specification in the table.
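The statistic can be read off the estimated equation via its @aic data member:

```
' row !p+1, column !q+1 corresponds to the ARMA(p, q) model
aic(!p+1, !q+1) = eq.@aic
```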
After the program has run, the values of the Akaike criterion are stored in the matrix aic. Now we can choose the specification of the ARMA model which produces the smallest AIC value.