# Multivariate Time Series Analysis

## Introduction

Multivariate analysis investigates dependence and interactions among a set of variables in multivariate processes. One of the most powerful methods for analyzing multivariate time series is the vector autoregression (VAR) model, a natural extension of the univariate autoregressive model to the multivariate case.

In this chapter we cover concepts of VAR modelling, non-stationary multivariate time series and cointegration.

More detailed discussions can be found in Hamilton (1994), Harris (1995), Enders (2004), Tsay (2002), and Zivot and Wang (2006).

## Vector Autoregression Model

Let $Y_t = (Y_{1t}, Y_{2t}, \ldots, Y_{kt})'$ denote a $k \times 1$ vector of time series variables. The basic $p$-lag vector autoregressive model, VAR($p$), is

$$Y_t = c + \Pi_1 Y_{t-1} + \Pi_2 Y_{t-2} + \cdots + \Pi_p Y_{t-p} + u_t,$$

where $\Pi_i$ are $k \times k$ matrices of coefficients, $c$ is a $k \times 1$ vector of constants and $u_t$ is a $k \times 1$ unobservable zero-mean white noise vector process with covariance matrix $\Sigma$.

If we consider the special case of a two-dimensional vector $Y_t$, the VAR($p$) consists of two equations (also called a bivariate VAR):

$$Y_{1t} = c_1 + \sum_{i=1}^{p} \pi_{11}^{(i)} Y_{1,t-i} + \sum_{i=1}^{p} \pi_{12}^{(i)} Y_{2,t-i} + u_{1t},$$

$$Y_{2t} = c_2 + \sum_{i=1}^{p} \pi_{21}^{(i)} Y_{1,t-i} + \sum_{i=1}^{p} \pi_{22}^{(i)} Y_{2,t-i} + u_{2t},$$

with $\mathrm{cov}(u_{1t}, u_{2s}) = \sigma_{12}$ for $t = s$ and zero otherwise. As in the univariate case with AR processes, we can use the lag operator to represent a VAR($p$):

$$\Pi(L) Y_t = c + u_t,$$

where $\Pi(L) = I_k - \Pi_1 L - \cdots - \Pi_p L^p$.

If we impose stationarity on $Y_t$ in (6.1.2), the unconditional expected value is given by

$$\mu = \mathrm{E}(Y_t) = (I_k - \Pi_1 - \cdots - \Pi_p)^{-1} c.$$
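As a quick numerical check, a stationary bivariate VAR(1) can be simulated and its sample mean compared with the unconditional mean $(I_k - \Pi_1)^{-1} c$. All parameter values below are invented for illustration:

```python
import numpy as np

# Hypothetical bivariate VAR(1): Y_t = c + Pi @ Y_{t-1} + u_t.
# The eigenvalues of Pi lie inside the unit circle, so the process
# is stationary and has a well-defined unconditional mean.
rng = np.random.default_rng(0)
c = np.array([1.0, 2.0])
Pi = np.array([[0.5, 0.1],
               [0.2, 0.3]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])          # covariance of the noise u_t

T = 5000
Y = np.zeros((T, 2))
u = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
for t in range(1, T):
    Y[t] = c + Pi @ Y[t - 1] + u[t]

# Under stationarity the unconditional mean is (I - Pi)^{-1} c
mu = np.linalg.solve(np.eye(2) - Pi, c)
```

After a burn-in, the sample average of the simulated path settles near `mu`, which illustrates the formula above.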

Very often, other deterministic terms or stochastic exogenous variables may be included in the VAR specification. A more general form of the VAR($p$) model is

$$Y_t = c + \Pi_1 Y_{t-1} + \cdots + \Pi_p Y_{t-p} + \Gamma X_t + u_t,$$

where $X_t$ represents an $m \times 1$ vector of exogenous or deterministic variables, and $\Gamma$ is a matrix of parameters.

### Estimation of VARs and Inference on Coefficients

Since the VAR($p$) may be written as a system of equations with the same sets of explanatory variables, its coefficients can be efficiently and consistently estimated by estimating each of the component equations separately using the OLS method (see Hamilton (1994)). Under standard assumptions regarding the behavior of stationary and ergodic VAR models (see Hamilton (1994)), the estimators of the coefficients are asymptotically normally distributed.

Each element of the estimated coefficient matrices $\hat{\Pi}_j$ is asymptotically normally distributed, so asymptotically valid $t$-tests on individual coefficients may be constructed in the usual way (see Chapter 2). More general linear hypotheses can also be tested using the Wald statistic.
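A minimal sketch of this equation-by-equation OLS estimation, using only numpy; the `fit_var` helper and the simulated test data are our own illustration, not from the text:

```python
import numpy as np

# Equation-by-equation OLS estimation of a VAR(p).
# Row i of the returned coefficient matrix stacks (c_i, Pi_1[i,:], ..., Pi_p[i,:]).
def fit_var(Y, p):
    T, k = Y.shape
    # regressor matrix: constant plus p lags of all variables
    X = np.column_stack(
        [np.ones(T - p)] + [Y[p - j:T - j] for j in range(1, p + 1)]
    )
    Yp = Y[p:]
    B, *_ = np.linalg.lstsq(X, Yp, rcond=None)
    resid = Yp - X @ B
    # residual covariance with a degrees-of-freedom correction
    Sigma_hat = resid.T @ resid / (T - p - k * p - 1)
    return B.T, Sigma_hat

# sanity check on simulated VAR(1) data (true parameters are made up)
rng = np.random.default_rng(1)
c = np.array([0.5, -0.2])
Pi = np.array([[0.5, 0.1],
               [0.2, 0.3]])
T = 2000
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = c + Pi @ Y[t - 1] + rng.standard_normal(2)

B_hat, Sigma_hat = fit_var(Y, p=1)
c_hat, Pi_hat = B_hat[:, 0], B_hat[:, 1:]
```

With a few thousand observations the OLS estimates land close to the true `c` and `Pi`, consistent with the asymptotic theory cited above.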

**Lag Length Selection** A reasonable strategy for determining the lag length of the VAR model is to fit VAR($p$) models with orders $p = 0, \ldots, p_{\max}$ and choose the value of $p$ which minimizes some model selection criterion. Model selection criteria for a VAR($p$) can be based on the Akaike (AIC), Schwarz-Bayesian (BIC) and Hannan-Quinn (HQ) information criteria:

$$\mathrm{AIC}(p) = \ln\left|\tilde{\Sigma}(p)\right| + \frac{2}{T} p k^2,$$

$$\mathrm{BIC}(p) = \ln\left|\tilde{\Sigma}(p)\right| + \frac{\ln T}{T} p k^2,$$

$$\mathrm{HQ}(p) = \ln\left|\tilde{\Sigma}(p)\right| + \frac{2 \ln \ln T}{T} p k^2,$$

where $\tilde{\Sigma}(p) = T^{-1}\sum_{t=1}^{T} \hat{u}_t \hat{u}_t'$ is the residual covariance matrix from a VAR($p$) without a degrees-of-freedom correction.
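This selection strategy can be sketched in numpy, computing each criterion as $\ln|\tilde{\Sigma}(p)|$ plus a penalty proportional to $pk^2$; the `select_lag` helper and the simulation parameters are illustrative assumptions:

```python
import numpy as np

# Fit VAR(1)..VAR(pmax) on a common sample and pick the lag order
# minimizing each information criterion.
def select_lag(Y, pmax):
    T, k = Y.shape
    n = T - pmax                          # common estimation sample size
    crit = {"AIC": [], "BIC": [], "HQ": []}
    for p in range(1, pmax + 1):
        X = np.column_stack(
            [np.ones(n)] + [Y[pmax - j:T - j] for j in range(1, p + 1)]
        )
        Yp = Y[pmax:]
        B, *_ = np.linalg.lstsq(X, Yp, rcond=None)
        resid = Yp - X @ B
        Sigma_tilde = resid.T @ resid / n  # no dof correction
        ld = np.log(np.linalg.det(Sigma_tilde))
        crit["AIC"].append(ld + 2 * p * k**2 / n)
        crit["BIC"].append(ld + np.log(n) * p * k**2 / n)
        crit["HQ"].append(ld + 2 * np.log(np.log(n)) * p * k**2 / n)
    return {name: int(np.argmin(v)) + 1 for name, v in crit.items()}

# simulate a VAR(1) (made-up parameters) and select the lag order
rng = np.random.default_rng(2)
Pi = np.array([[0.5, 0.1],
               [0.2, 0.3]])
T = 2000
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = Pi @ Y[t - 1] + rng.standard_normal(2)

selected = select_lag(Y, pmax=6)
```

On this simulated VAR(1), the more heavily penalized BIC and HQ criteria recover the true order; AIC may occasionally overfit, in line with its lighter penalty.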

**Forecasting** We can use a VAR model to forecast time series in a similar way to forecasting from a univariate AR model. The one-period-ahead forecast based on information available at time $T$ is

$$Y_{T+1|T} = c + \Pi_1 Y_T + \cdots + \Pi_p Y_{T-p+1},$$

while the $h$-step forecast is

$$Y_{T+h|T} = c + \Pi_1 Y_{T+h-1|T} + \cdots + \Pi_p Y_{T+h-p|T},$$

where $Y_{T+j|T} = Y_{T+j}$ for $j \le 0$. The $h$-step forecast errors may be expressed as

$$Y_{T+h} - Y_{T+h|T} = \sum_{s=0}^{h-1} \Psi_s u_{T+h-s},$$

where the matrices $\Psi_s$ are determined by the recursive substitution

$$\Psi_s = \sum_{j=1}^{s} \Pi_j \Psi_{s-j},$$

with $\Psi_0 = I_k$ and $\Pi_j = 0$ for $j > p$. The forecasts are unbiased since all of the forecast errors have expectation zero, and the MSE matrix for $Y_{T+h}$ is

$$\Sigma(h) = \mathrm{MSE}\left(Y_{T+h} - Y_{T+h|T}\right) = \sum_{s=0}^{h-1} \Psi_s \Sigma \Psi_s'.$$

The $h$-step forecast in the case of estimated parameters is

$$\hat{Y}_{T+h|T} = \hat{c} + \hat{\Pi}_1 \hat{Y}_{T+h-1|T} + \cdots + \hat{\Pi}_p \hat{Y}_{T+h-p|T},$$

where $\hat{\Pi}_j$ are the estimated matrices of parameters. The $h$-step forecast error is now

$$Y_{T+h} - \hat{Y}_{T+h|T} = \sum_{s=0}^{h-1} \Psi_s u_{T+h-s} + \left(Y_{T+h|T} - \hat{Y}_{T+h|T}\right).$$

The estimate of the MSE matrix of the $h$-step forecast is then

$$\hat{\Sigma}(h) = \sum_{s=0}^{h-1} \hat{\Psi}_s \hat{\Sigma} \hat{\Psi}_s'.$$
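The recursive forecast and the MSE matrix built from the $\Psi_s$ recursion can be sketched as follows; this is a hand-rolled illustration for a VAR(1), and all parameter values are made up:

```python
import numpy as np

# Illustrative VAR(1) parameters (a list so the code generalizes to p lags)
c = np.array([1.0, 2.0])
Pi = [np.array([[0.5, 0.1],
                [0.2, 0.3]])]            # Pi_1, ..., Pi_p
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
p = len(Pi)

def forecast(history, h):
    # recursive h-step forecast: future values are replaced by forecasts
    y = list(history[-p:])
    for _ in range(h):
        y.append(c + sum(Pi[j] @ y[-1 - j] for j in range(p)))
    return np.array(y[p:])               # the h forecasted vectors

def mse_matrix(h):
    # Psi_s = sum_{j=1}^{s} Pi_j Psi_{s-j}, Psi_0 = I, Pi_j = 0 for j > p
    k = len(c)
    Psi = [np.eye(k)]
    for s in range(1, h):
        Psi.append(sum(Pi[j - 1] @ Psi[s - j]
                       for j in range(1, min(s, p) + 1)))
    # Sigma(h) = sum_{s=0}^{h-1} Psi_s Sigma Psi_s'
    return sum(P @ Sigma @ P.T for P in Psi)

y_T = np.array([2.0, 3.0])
path = forecast([y_T], h=3)
```

For a VAR(1) the recursion gives $\Psi_s = \Pi_1^s$, so the 2-step MSE reduces to $\Sigma + \Pi_1 \Sigma \Pi_1'$, which the code reproduces.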

### Granger Causality

One of the main uses of VAR models is forecasting. The structure of the VAR model provides information about a variable's or a group of variables' ability to forecast other variables. The following intuitive notion of a variable's forecasting ability is due to Granger (1969). If a variable, or group of variables, $Y_1$ is found to be helpful for predicting another variable, or group of variables, $Y_2$, then $Y_1$ is said to Granger-cause $Y_2$; otherwise it is said to fail to Granger-cause $Y_2$. Formally, $Y_1$ fails to Granger-cause $Y_2$ if for all $s > 0$ the MSE of a forecast of $Y_{2,t+s}$ based on $(Y_{2,t}, Y_{2,t-1}, \ldots)$ is the same as the MSE of a forecast of $Y_{2,t+s}$ based on $(Y_{2,t}, Y_{2,t-1}, \ldots)$ and $(Y_{1,t}, Y_{1,t-1}, \ldots)$. Note that the notion of Granger causality only implies forecasting ability.

In a bivariate VAR($p$) model for $Y_t = (Y_{1t}, Y_{2t})'$, $Y_2$ fails to Granger-cause $Y_1$ if all of the VAR coefficient matrices $\Pi_1, \ldots, \Pi_p$ are lower triangular. That is, all of the coefficients on lagged values of $Y_2$ are zero in the equation for $Y_1$. The $p$ linear coefficient restrictions implied by Granger non-causality may be tested using the Wald statistic. Notice that if $Y_2$ fails to Granger-cause $Y_1$ and $Y_1$ fails to Granger-cause $Y_2$, then the VAR coefficient matrices $\Pi_1, \ldots, \Pi_p$ are diagonal.
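A sketch of the Wald test for Granger non-causality in a bivariate VAR(1), on data simulated with a lower-triangular $\Pi_1$ so that $Y_2$ fails to Granger-cause $Y_1$. All numbers are illustrative; with $p$ lags one would test the $p$ restrictions jointly against a $\chi^2(p)$ distribution:

```python
import numpy as np

# Lower-triangular Pi: Y1 Granger-causes Y2, but not the reverse
rng = np.random.default_rng(7)
Pi = np.array([[0.5, 0.0],               # equation for Y1 excludes lagged Y2
               [0.3, 0.4]])
T = 2000
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = Pi @ Y[t - 1] + rng.standard_normal(2)

def wald_granger(Y, caused, causing):
    """Wald statistic for 'causing fails to Granger-cause caused', one lag.
    With one restriction this is the squared t-statistic, ~ chi^2(1)."""
    n = len(Y)
    X = np.column_stack([np.ones(n - 1), Y[:-1]])  # const, Y1_{t-1}, Y2_{t-1}
    y = Y[1:, caused]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    j = 1 + causing                                # restricted coefficient
    return beta[j] ** 2 / (s2 * XtX_inv[j, j])

W_12 = wald_granger(Y, caused=1, causing=0)   # does Y1 Granger-cause Y2?
W_21 = wald_granger(Y, caused=0, causing=1)   # does Y2 Granger-cause Y1?
```

Comparing each statistic with the $\chi^2(1)$ 5% critical value 3.84, the test rejects non-causality in the $Y_1 \to Y_2$ direction and fails to reject it in the reverse direction, matching the triangular structure of $\Pi_1$.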