# ARMA processes

A zero-mean white noise process $\{\varepsilon_t\}$ can be used to construct new processes. We first describe two commonly used examples, and afterwards their generalization, the autoregressive-moving average (ARMA) model.

## Autoregressive process

A simple way to model dependence between consecutive observations is

$$Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \varepsilon_t,$$

where $\varepsilon_t$ is white noise. Such a process is called a first-order autoregressive process, or AR(1) process. It is stationary if $|\alpha_1| < 1$.

Since $E[\varepsilon_t] = 0$, it follows that under the stationarity condition the mean of the process is $E[Y_t] = \alpha_0/(1-\alpha_1)$ and the variance is $\operatorname{var}[Y_t] = \sigma_\varepsilon^2/(1-\alpha_1^2)$, where $\sigma_\varepsilon^2 = \operatorname{var}[\varepsilon_t]$. An AR(1) process has autocorrelations $\rho_s = \alpha_1^s$ for $s \geq 1$.
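These moment formulas can be checked by simulation. The sketch below simulates a long AR(1) path and compares the sample mean, variance, and first two autocorrelations against their theoretical values; the parameter values $\alpha_0 = 1$, $\alpha_1 = 0.7$, $\sigma_\varepsilon = 1$ are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Simulate Y_t = a0 + a1*Y_{t-1} + e_t with illustrative (assumed) parameters
rng = np.random.default_rng(0)
a0, a1, sigma = 1.0, 0.7, 1.0
T = 200_000
y = np.empty(T)
y[0] = a0 / (1 - a1)                     # start at the stationary mean
for t in range(1, T):
    y[t] = a0 + a1 * y[t - 1] + sigma * rng.standard_normal()

def acf(x, s):
    """Sample autocorrelation at lag s."""
    x = x - x.mean()
    return (x[s:] * x[:-s]).mean() / x.var()

# Sample moments should be close to the theoretical values:
#   mean      a0/(1-a1)        = 3.33...
#   variance  sigma^2/(1-a1^2) = 1.96...
#   rho_1 = 0.7,  rho_2 = 0.7^2 = 0.49
mean_hat, var_hat = y.mean(), y.var()
```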

A more general representation of the autoregressive process is

$$Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \alpha_2 Y_{t-2} + \cdots + \alpha_p Y_{t-p} + \varepsilon_t,$$

called an autoregressive process of order $p$, or in short, AR($p$).

## Moving average process

Consider the process $Y_t$ defined by

$$Y_t = \alpha_0 + \varepsilon_t + \beta_1 \varepsilon_{t-1},$$

so $Y_t$ is a linear function of the present and immediately preceding innovations. This process is called a moving average process of order 1 and denoted by MA(1).

An MA(1) process is always stationary, with mean $\alpha_0$ and variance $(1 + \beta_1^2)\sigma_\varepsilon^2$. Its autocorrelations are $\rho_1 = \beta_1/(1+\beta_1^2)$ and $\rho_s = 0$ for $s > 1$.
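The cutoff in the autocorrelation function after lag 1 is easy to verify numerically. The sketch below simulates an MA(1) series with an illustrative coefficient $\beta_1 = 0.5$ (an assumption, not from the text) and checks that $\rho_1 = \beta_1/(1+\beta_1^2) = 0.4$ while higher-lag autocorrelations vanish.

```python
import numpy as np

# Simulate Y_t = e_t + beta*e_{t-1} (taking a0 = 0) with an assumed beta
rng = np.random.default_rng(1)
beta, T = 0.5, 200_000
e = rng.standard_normal(T + 1)
y = e[1:] + beta * e[:-1]

def acf(x, s):
    """Sample autocorrelation at lag s."""
    x = x - x.mean()
    return (x[s:] * x[:-s]).mean() / x.var()

rho1_theory = beta / (1 + beta**2)       # = 0.4 for beta = 0.5
# acf(y, 1) is close to 0.4; acf(y, s) for s > 1 is close to zero
```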

Comparing the two time series, we see that a shock $\varepsilon_t$ in the MA(1) process affects $Y_t$ in two periods only (there are only two nonzero autocorrelation coefficients, $\rho_0$ and $\rho_1$), while a shock in the AR(1) process affects all future observations with a decreasing effect.

The MA(1) process may be inverted to give $\varepsilon_t$ as an infinite series in $Y_t, Y_{t-1}, \ldots$:

$$\varepsilon_t = Y_t - \beta_1 Y_{t-1} + \beta_1^2 Y_{t-2} - \cdots.$$

Thus an MA(1) time series can be represented as an AR($\infty$) process. It is possible to invert an MA(1) process into a stationary AR process only if $|\beta_1| < 1$. This condition is known as the invertibility condition.
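Because the AR($\infty$) weights $(-\beta_1)^k$ die out geometrically when $|\beta_1| < 1$, a truncated version of the infinite series already recovers the innovation almost exactly. A minimal sketch, with $\beta_1 = 0.5$ and a truncation at 40 lags chosen purely for illustration:

```python
import numpy as np

# Simulate an MA(1) series, then recover an innovation from the truncated
# AR(inf) expansion e_t = Y_t - beta*Y_{t-1} + beta^2*Y_{t-2} - ...
rng = np.random.default_rng(2)
beta, T, K = 0.5, 500, 40                # K = number of lags kept (assumed)
e = rng.standard_normal(T)
y = e.copy()
y[1:] += beta * e[:-1]                   # Y_t = e_t + beta*e_{t-1}

t = T - 1                                # recover the last innovation
e_hat = sum((-beta) ** k * y[t - k] for k in range(K))
# e_hat agrees with e[t] up to a truncation error of order beta^K
```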

A more general representation of a moving average process is

$$Y_t = \alpha_0 + \varepsilon_t + \beta_1 \varepsilon_{t-1} + \cdots + \beta_q \varepsilon_{t-q},$$

called a moving average process of order $q$, or in short, MA($q$).

## ARMA process

It is possible to combine the autoregressive and moving average specifications into the ARMA(p, q) model

$$Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \cdots + \alpha_p Y_{t-p} + \varepsilon_t + \beta_1 \varepsilon_{t-1} + \cdots + \beta_q \varepsilon_{t-q}. \tag{3.3.1}$$

An ARMA(p, q) time series can be represented in a shorter form using the notion of the lag operator.

The lag operator $L$ is defined by $LY_t = Y_{t-1}$: it is the operator which gives the previous value of the series. It can also be used to represent lags of second or higher order: $L^2 Y_t = L(LY_t) = Y_{t-2}$, and in general $L^k Y_t = Y_{t-k}$. In this notation, the ARMA(p, q) process is

$$A(L)\,Y_t = \alpha_0 + B(L)\,\varepsilon_t,$$

where

$$A(L) = 1 - \alpha_1 L - \cdots - \alpha_p L^p, \qquad B(L) = 1 + \beta_1 L + \cdots + \beta_q L^q.$$

Stationarity requires the roots of $A(L)$ to lie outside the unit circle, and invertibility places the same condition on the roots of $B(L)$.

Table 3.1: Correlation patterns

| Process | Autocorrelation function | Partial autocorrelation function |
| --- | --- | --- |
| AR($p$) | Decays towards zero | Zero after lag $p$ |
| MA($q$) | Zero after lag $q$ | Decays towards zero |
| ARMA($p$, $q$) | Decays towards zero | Decays towards zero |

## Estimation of ARMA processes

ARMA(p, q) models are generally estimated using the technique of maximum likelihood.

An often ignored aspect of the maximum likelihood estimation of ARMA(p, q) models is the treatment of initial values. These initial values are the first $p$ values of $Y_t$ and $q$ values of $\varepsilon_t$ in (3.3.1). The exact likelihood utilizes the stationary distribution of the initial values in the construction of the likelihood. The conditional likelihood treats the $p$ initial values of $Y_t$ as fixed and often sets the $q$ initial values of $\varepsilon_t$ to zero. The exact maximum likelihood estimates (MLEs) maximize the exact log-likelihood, and the conditional MLEs maximize the conditional log-likelihood. The exact and conditional MLEs are asymptotically equivalent but can differ substantially in small samples, especially for models that are close to being non-stationary or non-invertible.

For pure AR models, the conditional MLEs are equivalent to the least squares estimates obtained by regressing $Y_t$ on a constant and its first $p$ lags.
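This equivalence makes conditional estimation of a pure AR model very simple in practice. The sketch below simulates an AR(1) with assumed parameters $\alpha_0 = 1$, $\alpha_1 = 0.6$, treats $Y_0$ as fixed, and recovers the coefficients by ordinary least squares:

```python
import numpy as np

# Conditional estimation of an AR(1): regress Y_t on (1, Y_{t-1}),
# conditioning on the first observation. Parameters are assumed.
rng = np.random.default_rng(3)
a0, a1, T = 1.0, 0.6, 100_000
y = np.empty(T)
y[0] = a0 / (1 - a1)
for t in range(1, T):
    y[t] = a0 + a1 * y[t - 1] + rng.standard_normal()

X = np.column_stack([np.ones(T - 1), y[:-1]])    # regressors: constant, Y_{t-1}
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
a0_hat, a1_hat = coef
# a0_hat and a1_hat are close to the true values 1.0 and 0.6
```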

## Model selection criteria

Before an ARMA(p, q) model may be estimated for a time series $Y_t$, the AR and MA orders $p$ and $q$ must be determined, for example by visually inspecting the autocorrelation and partial autocorrelation functions for $Y_t$. If the autocorrelation function decays smoothly and the partial autocorrelations are zero after one lag, then a first-order autoregressive model is appropriate. Alternatively, if the autocorrelations are zero after one lag and the partial autocorrelations decay slowly towards zero, a first-order moving average process would seem appropriate.

Alternatively, statistical model selection criteria may be used. The idea is to fit all ARMA(p, q) models over a range of orders $p$ and $q$ and choose the values of $p$ and $q$ which minimize a model selection criterion, such as

$$\mathrm{AIC}(p, q) = \ln \hat{\sigma}^2(p, q) + \frac{2(p+q)}{T}, \qquad \mathrm{BIC}(p, q) = \ln \hat{\sigma}^2(p, q) + \frac{(p+q)\ln T}{T},$$

where $\hat{\sigma}^2(p, q)$ is the MLE of $\operatorname{var}[\varepsilon_t] = \sigma^2$ without a degrees of freedom correction from the ARMA(p, q) model.
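For pure AR models, where each candidate fit is a simple least squares regression, the selection loop is a few lines. The sketch below simulates an AR(2) with assumed coefficients $0.5$ and $0.3$ and picks the AR order by minimizing an AIC of the form $\ln \hat{\sigma}^2(p) + 2p/n$, computed on the effective sample obtained by conditioning on the first $p$ observations:

```python
import numpy as np

# Simulate an AR(2) (assumed true order and coefficients)
rng = np.random.default_rng(4)
T = 5_000
y = np.empty(T)
y[:2] = 0.0
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.standard_normal()

def aic(p):
    """AIC for an AR(p) fit by conditional least squares."""
    n = T - p                                    # condition on first p values
    X = np.column_stack(
        [np.ones(n)] + [y[p - k: T - k] for k in range(1, p + 1)]
    )
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    sigma2 = np.mean((y[p:] - X @ coef) ** 2)    # MLE of var[e_t], no df correction
    return np.log(sigma2) + 2 * p / n

best_p = min(range(1, 6), key=aic)
# best_p is typically the true order, 2, though AIC can mildly overfit
```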