# Time-series models, Autoregressive Integrated Moving Average (ARIMA)

Univariate time-series models form a class of models that attempt to explain a variable of interest solely with information contained in its own past and current values and/or past and current values of an error term. This class of models stands in sharp contrast with the regression analysis class of models (also called structural models), which explain changes in a variable of interest with information contained in past or current values of other variables (regressors). Therefore, the specification of time-series models chosen in practice is not based on a specific economic theory; it is primarily a-theoretical and rests on statistical tests rather than an underlying theory. Specifically, time-series models try to capture specific characteristics of the variable of interest without examining the underlying causes of those characteristics. The most important class of time-series models is the family of Autoregressive Integrated Moving Average (ARIMA) models, which were approached in a systematic manner for the first time by Box and Jenkins (1976).

An ARIMA model is useful when a regression model does not seem appropriate. This is the case when the theoretically relevant variables for explaining a variable of interest are not measurable or not available at a specific frequency. For example, suppose a researcher seeks to explain changes in daily shipping freight rates, and theory suggests that the explanatory variables should include industry-specific variables, such as the orderbook of vessels, that are only available at a monthly or quarterly frequency; an ARIMA model might then provide a reasonably good alternative model for explaining shipping freight rate changes. Furthermore, regression models often generate inferior forecasts for a variable of interest, which might be another reason a researcher wishes to apply an ARIMA model. For these reasons, time-series models, and specifically ARIMA models, are discussed in the next few sections of this chapter.

## Moving Average (MA) processes

The Moving Average (MA) process is probably the simplest specification a time-series model may have. Let $\varepsilon_t$ denote a white noise process with $E(\varepsilon_t) = 0$ and $\mathrm{Var}(\varepsilon_t) = \sigma^2$, i.e. a process whose mean is zero, whose variance is constant, and whose past values do not contain any information regarding subsequent own values of the variable. Then one can define a moving average process of qth order, denoted MA(q), as:

$$y_t = \mu + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \cdots + \theta_q \varepsilon_{t-q}$$

In this way, the moving average model gives the dependent variable as a simple linear combination of current and previous values of a white noise error-term.
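As a sketch of this idea, the snippet below simulates an MA(1) process in pure Python. The parameter values ($\mu = 0$, $\theta = 0.6$) are illustrative assumptions, not values from the text; the point is that the sample lag-1 autocorrelation approaches the theoretical value $\theta/(1+\theta^2)$, while autocorrelations beyond lag $q = 1$ are close to zero:

```python
# Simulate y_t = mu + e_t + theta * e_{t-1}, an MA(1) process.
# mu and theta are illustrative values chosen for this sketch.
import random

random.seed(42)

mu, theta = 0.0, 0.6
T = 10_000

e = [random.gauss(0.0, 1.0) for _ in range(T + 1)]          # white noise
y = [mu + e[t] + theta * e[t - 1] for t in range(1, T + 1)]

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# For an MA(1), the theoretical lag-1 autocorrelation is theta/(1+theta^2)
# (about 0.44 here) and all autocorrelations beyond lag 1 are zero.
print(round(autocorr(y, 1), 3), round(autocorr(y, 5), 3))
```

This cut-off in the autocorrelations after lag q is the signature an MA(q) process leaves in the data, and it is what the identification stage of the Box–Jenkins procedure looks for.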

## Autoregressive processes

An Autoregressive (AR) process attempts to explain current values of a variable of interest ($y_t$) with information contained in its own past values. Thus, an autoregressive process of pth order, denoted AR(p), is:

$$y_t = \mu + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t$$

In this way, the autoregressive process models the dependent variable of interest as a simple linear combination of previous values of the dependent variable plus a white noise error-term.
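As a complementary sketch (again with assumed illustrative values, $\mu = 0.5$ and $\phi = 0.8$), the snippet below simulates a stationary AR(1) process. Its sample mean should approach the unconditional mean $\mu/(1-\phi)$, and, unlike an MA process, its autocorrelations decay geometrically rather than cutting off:

```python
# Simulate y_t = mu + phi * y_{t-1} + e_t, a stationary AR(1) (|phi| < 1).
# mu and phi are illustrative values chosen for this sketch.
import random

random.seed(7)

mu, phi = 0.5, 0.8
T = 20_000

y = [mu / (1 - phi)]                  # start at the unconditional mean
for _ in range(T):
    y.append(mu + phi * y[-1] + random.gauss(0.0, 1.0))
y = y[1:]                             # drop the initial value

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# For a stationary AR(1): E[y_t] = mu/(1-phi) = 2.5 here, and the lag-k
# autocorrelation equals phi**k, so it decays geometrically in k.
print(round(sum(y) / T, 2), round(autocorr(y, 1), 3), round(autocorr(y, 3), 3))
```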

## ARMA processes and the Box–Jenkins approach

The combination of the AR(p) and MA(q) specifications yields an Autoregressive Moving Average process of orders p and q, denoted ARMA(p, q):

$$y_t = \mu + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$$
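To make the combination concrete, this sketch (with assumed illustrative coefficients $\phi = 0.7$, $\theta = 0.3$) simulates an ARMA(1, 1) process. Its autocorrelations do not cut off after lag 1 as a pure MA(1)'s would; beyond lag 1 they decay geometrically at rate $\phi$, as the AR part dictates:

```python
# Simulate y_t = mu + phi*y_{t-1} + e_t + theta*e_{t-1}, an ARMA(1,1).
# mu, phi and theta are illustrative values chosen for this sketch.
import random

random.seed(0)

mu, phi, theta = 0.0, 0.7, 0.3
T = 10_000

y, y_prev, e_prev = [], 0.0, 0.0
for _ in range(T):
    e = random.gauss(0.0, 1.0)
    y_t = mu + phi * y_prev + e + theta * e_prev
    y.append(y_t)
    y_prev, e_prev = y_t, e

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# Beyond lag 1 the autocorrelations shrink by a factor of roughly phi at
# each successive lag, instead of cutting off as in a pure MA process.
print(round(autocorr(y, 1), 3), round(autocorr(y, 2), 3), round(autocorr(y, 4), 3))
```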

The characteristics of an ARMA process will be a combination of those from the autoregressive (AR) and moving average (MA) parts. Box and Jenkins (1976) introduced a three-step procedure for estimating an ARMA model in practice: identification, estimation and diagnostic checking. Among other issues, the identification step involves selecting the pth and qth orders of the ARMA(p, q) model. In practice, researchers select the p and q orders of an ARMA model by minimising the values of information criteria, such as the Schwarz (1978) Bayesian Information Criterion (SBIC) and the Akaike (1974) Information Criterion (AIC), defined as:

$$\text{SBIC} = \ln(\hat{\sigma}^2) + \frac{k}{T}\ln(T), \qquad \text{AIC} = \ln(\hat{\sigma}^2) + \frac{2k}{T}$$

where $\hat{\sigma}^2$ is the residual sum of squares divided by the number of observations, $T$; $k$ is the number of parameters estimated ($k = p + q + 1$); and $T$ is the sample size. The SBIC applies a much stricter penalty term than the AIC for the inclusion of additional parameters to be estimated in the model. Finally, the "I" in the acronym ARIMA, used in some instances, refers to an integrated process. Specifically, an integrated process is one that has been transformed (usually by computing first differences) so as to become stationary. To avoid bias in the inferences drawn, researchers difference the variable of interest as many times as necessary and then build an ARMA model for the differenced variable. Most economic time series are integrated of order one, or I(1), so computing first differences once is enough to render them stationary.
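To illustrate how the two criteria trade off fit against parsimony, the sketch below evaluates AIC and SBIC over a small grid of candidate ARMA(p, q) orders using the formulas above. The residual variances are hypothetical numbers chosen for illustration, not estimates from real data; with them, the AIC's lighter penalty favours a larger model than the SBIC does:

```python
# Compare AIC and SBIC across candidate ARMA(p, q) orders.
# sigma2 holds hypothetical residual variances (RSS / T): richer models
# fit slightly better, i.e. have slightly smaller sigma2.
import math

T = 200                                 # sample size (assumed)

sigma2 = {
    (1, 0): 1.200,
    (1, 1): 1.100,
    (2, 1): 1.075,
    (2, 2): 1.070,
}

def aic(s2, k, T):
    return math.log(s2) + 2 * k / T

def sbic(s2, k, T):
    return math.log(s2) + k * math.log(T) / T

def k_of(p, q):
    return p + q + 1                    # AR + MA coefficients + intercept

best_aic = min(sigma2, key=lambda pq: aic(sigma2[pq], k_of(*pq), T))
best_sbic = min(sigma2, key=lambda pq: sbic(sigma2[pq], k_of(*pq), T))

# SBIC's per-parameter penalty ln(T)/T exceeds AIC's 2/T whenever
# T > e^2 (about 7.4), so SBIC tends to pick the more parsimonious model.
print("AIC picks", best_aic, "- SBIC picks", best_sbic)
```

With these illustrative numbers the AIC selects the ARMA(2, 1) specification while the SBIC settles on the more parsimonious ARMA(1, 1), showing the stricter penalty at work.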