# ARMA processes

A zero-mean white noise process $\{\varepsilon_t\}$ can be used to construct new processes. We describe two commonly used examples first and afterwards their generalization, the autoregressive-moving average (ARMA) model.

## Autoregressive process

A simple way to model dependence between consecutive observations is

$$Y_t = \alpha_0 + \alpha Y_{t-1} + \varepsilon_t,$$

where $\varepsilon_t$ is white noise. Such a process is called a first-order autoregressive, or AR(1), process. It is stationary if the coefficient satisfies $|\alpha| < 1$. Since $E[\varepsilon_t] = 0$, it follows that under the stationarity condition the mean of the process is

$$E[Y_t] = \frac{\alpha_0}{1 - \alpha}$$

and the variance is

$$\operatorname{var}[Y_t] = \frac{\sigma_\varepsilon^2}{1 - \alpha^2},$$

where $\sigma_\varepsilon^2 = \operatorname{var}[\varepsilon_t]$. An AR(1) process has autocorrelations

$$\rho_s = \alpha^s \quad \text{for } s \geq 1.$$

A more general representation of the autoregressive process is

$$Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \cdots + \alpha_p Y_{t-p} + \varepsilon_t$$

and is called an autoregressive process of order $p$, or in short, AR(p).

## Moving average process

Consider the process $Y_t$ defined by

$$Y_t = \alpha_0 + \varepsilon_t + \beta \varepsilon_{t-1},$$

so $Y_t$ is a linear function of the present and immediately preceding innovations. This process is called a moving average process of order 1 and denoted by MA(1). An MA(1) process is always stationary, with mean $\alpha_0$ and variance $(1 + \beta^2)\sigma_\varepsilon^2$. Its autocorrelations are

$$\rho_1 = \frac{\beta}{1 + \beta^2} \quad \text{and} \quad \rho_s = 0 \ \text{for } s > 1.$$
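As a quick check of these formulas, one can simulate both processes and compare sample autocorrelations with the theoretical values. The sketch below uses NumPy; the coefficient value 0.7 and the sample size are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_acf(y, max_lag):
    """Sample autocorrelations rho_1, ..., rho_max_lag of a series."""
    y = y - y.mean()
    denom = np.sum(y * y)
    return np.array([np.sum(y[s:] * y[:-s]) / denom for s in range(1, max_lag + 1)])


T = 200_000
eps = rng.standard_normal(T)
alpha, beta = 0.7, 0.7          # illustrative coefficients

# AR(1): Y_t = alpha * Y_{t-1} + eps_t  (alpha_0 = 0 for simplicity)
y_ar = np.zeros(T)
for t in range(1, T):
    y_ar[t] = alpha * y_ar[t - 1] + eps[t]

# MA(1): Y_t = eps_t + beta * eps_{t-1}
y_ma = eps.copy()
y_ma[1:] += beta * eps[:-1]

acf_ar = sample_acf(y_ar, 4)    # close to alpha**s: geometric decay
acf_ma = sample_acf(y_ma, 4)    # close to [beta/(1+beta**2), 0, 0, 0]: cutoff after lag 1
```

With a sample this long, `acf_ar` decays roughly geometrically as $\alpha^s$ while `acf_ma` is near zero beyond the first lag, matching the formulas above.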

Comparing the two time series, we see that a shock $\varepsilon_t$ in the MA(1) process affects $Y_t$ in two periods only (only $\rho_0$ and $\rho_1$ are nonzero), while a shock in the AR(1) process affects all future observations with a decreasing effect.

The MA(1) process may be inverted to give $\varepsilon_t$ as an infinite series in $Y_t, Y_{t-1}, \ldots$ (taking $\alpha_0 = 0$ for simplicity):

$$\varepsilon_t = Y_t - \beta Y_{t-1} + \beta^2 Y_{t-2} - \cdots$$

Thus, an MA(1) time series can be represented as an AR($\infty$) process. It is possible to invert an MA(1) process into a stationary AR process only if $|\beta| < 1$. This condition is known as the invertibility condition.

A more general representation of a moving average process is

$$Y_t = \alpha_0 + \varepsilon_t + \beta_1 \varepsilon_{t-1} + \cdots + \beta_q \varepsilon_{t-q}$$

and is called a moving average process of order $q$, or in short, MA(q).
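The inversion of an MA(1) can be illustrated numerically: truncating the AR($\infty$) series after enough terms recovers the innovation almost exactly when $|\beta| < 1$. This is a sketch with assumed values $\beta = 0.5$ and $\alpha_0 = 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.5                      # |beta| < 1: invertibility condition holds
T = 1000

eps = rng.standard_normal(T)
y = eps.copy()
y[1:] += beta * eps[:-1]        # MA(1) with alpha_0 = 0

# Truncated AR(infinity) representation:
# eps_t ~ y_t - beta*y_{t-1} + beta^2*y_{t-2} - ...  (K terms)
K = 30
t = T - 1                       # recover the most recent innovation
eps_hat = sum((-beta) ** k * y[t - k] for k in range(K))

# Because |beta| < 1, the weights (-beta)^k die out and eps_hat ~ eps[t];
# the truncation error is of order beta**K times a single innovation.
```

If instead $|\beta| \geq 1$, the weights $(-\beta)^k$ would not decay and the truncated sum would not converge to $\varepsilon_t$, which is exactly why invertibility requires $|\beta| < 1$.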

## ARMA process

It is possible to combine the autoregressive and moving average specifications into the ARMA(p, q) model

$$Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \cdots + \alpha_p Y_{t-p} + \varepsilon_t + \beta_1 \varepsilon_{t-1} + \cdots + \beta_q \varepsilon_{t-q}. \tag{3.3.1}$$

An ARMA(p, q) time series can be represented in a shorter form using the notion of the lag operator. The lag operator $L$ is defined as $L Y_t = Y_{t-1}$, the operator which gives the previous value of the series. This operator can also be used to represent lags of the second or higher orders in the following way:

$$L^k Y_t = Y_{t-k}.$$

In general, an ARMA(p, q) process is

$$A(L) Y_t = \alpha_0 + B(L) \varepsilon_t,$$

where

$$A(L) = 1 - \alpha_1 L - \cdots - \alpha_p L^p \quad \text{and} \quad B(L) = 1 + \beta_1 L + \cdots + \beta_q L^q.$$

Stationarity requires the roots of $A(L)$ to lie outside the unit circle, and invertibility places the same condition on the roots of $B(L)$.

**Table 3.1: Correlation patterns**

| Process | Autocorrelations | Partial autocorrelations |
|---|---|---|
| AR(p) | decay towards zero | zero after lag $p$ |
| MA(q) | zero after lag $q$ | decay towards zero |
| ARMA(p, q) | decay towards zero | decay towards zero |
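The stationarity and invertibility conditions can be checked numerically by computing the roots of the lag polynomials, e.g. with `numpy.roots`. The ARMA(2, 1) coefficients below are assumed for illustration.

```python
import numpy as np

# Assumed ARMA(2, 1) coefficients for illustration:
# A(L) = 1 - a1*L - a2*L^2,  B(L) = 1 + b1*L
a1, a2 = 0.5, 0.3
b1 = 0.4

# np.roots expects coefficients from the highest power down:
# A(z) = -a2*z^2 - a1*z + 1,  B(z) = b1*z + 1
ar_roots = np.roots([-a2, -a1, 1.0])
ma_roots = np.roots([b1, 1.0])

stationary = np.all(np.abs(ar_roots) > 1)   # roots of A(L) outside the unit circle
invertible = np.all(np.abs(ma_roots) > 1)   # roots of B(L) outside the unit circle
```

For these values both conditions hold; pushing $a_1 + a_2$ towards 1 moves a root of $A(L)$ towards the unit circle and the process towards non-stationarity.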

## Estimation of ARMA processes

ARMA(p, q) models are generally estimated using the technique of maximum likelihood.

An often ignored aspect of the maximum likelihood estimation of ARMA(p, q) models is the treatment of initial values. These initial values are the first $p$ values of $Y_t$ and the first $q$ values of $\varepsilon_t$ in (3.3.1). The exact likelihood utilizes the stationary distribution of the initial values in the construction of the likelihood. The conditional likelihood treats the $p$ initial values of $Y_t$ as fixed and often sets the $q$ initial values of $\varepsilon_t$ to zero. The exact maximum likelihood estimates (MLEs) maximize the exact log-likelihood, and the conditional MLEs maximize the conditional log-likelihood. The exact and conditional MLEs are asymptotically equivalent but can differ substantially in small samples, especially for models that are close to being non-stationary or non-invertible.

For pure AR models, the conditional MLEs are equivalent to the least squares estimates.
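This equivalence suggests a simple way to estimate a pure AR model: regress $Y_t$ on a constant and its lags, treating the initial observation as fixed. A minimal sketch for an AR(1), with assumed parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha0, alpha = 1.0, 0.6        # assumed true parameters
T = 100_000

# Simulate an AR(1): Y_t = alpha0 + alpha*Y_{t-1} + eps_t
y = np.zeros(T)
y[0] = alpha0 / (1 - alpha)     # start at the stationary mean
for t in range(1, T):
    y[t] = alpha0 + alpha * y[t - 1] + rng.standard_normal()

# Conditional MLE = least squares: regress Y_t on (1, Y_{t-1}),
# conditioning on the first observation
X = np.column_stack([np.ones(T - 1), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
alpha0_hat, alpha_hat = coef
```

With a long sample the least squares estimates are very close to the true values; in short samples they can differ noticeably from the exact MLEs, as noted above.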

**Model Selection Criteria** Before an ARMA(p, q) model may be estimated for a time series $Y_t$, the AR and MA orders $p$ and $q$ must be determined, for example by visually inspecting the autocorrelation and partial autocorrelation functions for $Y_t$. If the autocorrelation function decays smoothly and the partial autocorrelations are zero after one lag, then a first-order autoregressive model is appropriate. Alternatively, if the autocorrelations are zero after one lag and the partial autocorrelations decay slowly towards zero, a first-order moving average process would seem appropriate.

Alternatively, statistical model selection criteria may be used. The idea is to fit all ARMA(p, q) models with orders $p$ and $q$ up to chosen maxima and choose the values of $p$ and $q$ which minimize a model selection criterion:

$$\mathrm{AIC}(p, q) = \ln \tilde{\sigma}^2(p, q) + \frac{2(p + q)}{T}, \qquad \mathrm{BIC}(p, q) = \ln \tilde{\sigma}^2(p, q) + \frac{(p + q) \ln T}{T},$$

where $\tilde{\sigma}^2(p, q)$ is the MLE of $\operatorname{var}[\varepsilon_t] = \sigma^2$ without a degrees of freedom correction from the ARMA(p, q) model, and $T$ is the sample size.
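A minimal version of this procedure for pure AR models, using conditional least squares and an AIC-type criterion $\ln \tilde{\sigma}^2(p) + 2p/T$ (so $q = 0$ throughout). The simulated AR(2) and the maximum order are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000

# Simulate an AR(2) so the true order is p = 2, q = 0 (assumed coefficients)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.standard_normal()


def aic_ar(y, p, p_max):
    """AIC for a pure AR(p) fit by conditional least squares.

    sigma2 is the residual variance without a degrees of freedom
    correction, matching the criterion in the text (with q = 0).
    Using a common sample y[p_max:] makes the criteria comparable
    across orders.
    """
    yy = y[p_max:]
    n = len(yy)
    X = np.column_stack([np.ones(n)] + [y[p_max - k:-k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, yy, rcond=None)
    resid = yy - X @ coef
    sigma2 = np.mean(resid**2)
    return np.log(sigma2) + 2 * p / n


p_max = 5
aics = [aic_ar(y, p, p_max) for p in range(p_max + 1)]
p_star = int(np.argmin(aics))   # typically selects p = 2 for this data
```

Note that AIC tends to overfit slightly in large samples, while the heavier $\ln T$ penalty in BIC selects the true order consistently; both are cheap to compute over the whole grid of candidate orders.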