Exponential Smoothing Models

In general, observed data can be decomposed into signal and noise. The signal is the pattern generated by the intrinsic dynamics of the data-generating system. Smoothing can be seen as a strategy to separate signal from noise as much as possible; a filter whose function is to estimate the signal from the data is thus called a smoother [35].

The simplest method of smoothing a TS is to average the current and the previous observations. In other words, the current observation is replaced with the average of the current and preceding observations.
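A minimal sketch of such an averaging smoother, assuming NumPy is available; the function name moving_average and the window length are illustrative choices, not from the source:

```python
import numpy as np

def moving_average(y, window):
    """Smooth a series by replacing each observation with the average
    of the current and the previous (window - 1) observations."""
    y = np.asarray(y, dtype=float)
    smoothed = np.empty(len(y) - window + 1)
    for t in range(window - 1, len(y)):
        smoothed[t - window + 1] = y[t - window + 1 : t + 1].mean()
    return smoothed
```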

In most cases, recent observations influence future observations much more than older ones do. An exponentially weighted smoother reacts faster to TS oscillations by applying geometrically decreasing weights to the previous observations. Introducing a parameter λ, the exponentially weighted smoother is

$$\tilde{y}_T = \lambda \sum_{t=0}^{T-1} (1-\lambda)^{t}\, y_{T-t},$$

where ỹ_T is the exponentially weighted average of the current and T − 1 previous observations, λ is the smoothing parameter whose complement (1 − λ) acts as the discount factor on older observations, and |λ| < 1.

Exponential smoothers are utilized in estimating the constant and linear trends of a TS. For a constant trend, the first-order (simple) exponential smoother is deployed, and for estimating a linear trend, the second-order exponential smoother is utilized.

Using the exponential smoothing model, the prediction of the τ-step-ahead observation for a TS with a constant trend is equal to the current value of the exponential smoother:

$$\hat{y}_{T+\tau} = \tilde{y}_T.$$

The prediction error is a function of λ, and larger values of λ can result in larger errors: a large λ makes the smoother react quickly to forecast errors, but it also makes the forecast model react faster to random fluctuations. The choice of the λ parameter is therefore crucial; it can be estimated using the sum of squared forecast errors, and the λ that produces the smallest sum of squared forecast errors is the best choice.
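The smoother and the SSE-based choice of λ can be sketched as follows; this is an illustrative implementation (the helper names exp_smooth and sse, the synthetic series, and the grid are assumptions, not from the source):

```python
import numpy as np

def exp_smooth(y, lam):
    """First-order exponential smoother:
    y_tilde[t] = lam * y[t] + (1 - lam) * y_tilde[t - 1]."""
    y_tilde = np.empty(len(y))
    y_tilde[0] = y[0]  # initialize with the first observation
    for t in range(1, len(y)):
        y_tilde[t] = lam * y[t] + (1 - lam) * y_tilde[t - 1]
    return y_tilde

def sse(y, lam):
    """Sum of squared one-step-ahead forecast errors; under a constant
    trend, the forecast of y[t] is the smoother value at t - 1."""
    y_tilde = exp_smooth(y, lam)
    return float(np.sum((y[1:] - y_tilde[:-1]) ** 2))

# choose lam by grid search over the sum of squared forecast errors
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))  # synthetic series for illustration
grid = np.linspace(0.05, 0.95, 19)
best = min(grid, key=lambda lam: sse(y, lam))
print(f"lambda minimizing the SSE: {best:.2f}")
```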

ARMA Models

Models deployed for TS come in many variations, corresponding to different underlying stochastic processes. There are two primary and broadly used linear TS models: the Autoregressive (AR) and Moving Average (MA) models. Combining these two gives the Autoregressive Moving Average (ARMA) and Autoregressive Integrated Moving Average (ARIMA) models [36].

An ARMA(p, q) model consists of two models: the AR(p) and the MA(q). An AR(p) model regresses the current observation on p past observations and a random error, in addition to a constant term [37]. Mathematically, the AR(p) model is represented by

$$y_t = c + \sum_{i=1}^{p} \varphi_i\, y_{t-i} + \varepsilon_t,$$

where c is a constant term, y_{t−i} is the observed value at time t − i, ε_t is the random error at time t, φ_i are the model parameters, and p is the order of the model.
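As an illustration (not from the source), an AR(p) process can be simulated and its parameters c and φ_i estimated, here using statsmodels' AutoReg; the AR(2) coefficients below are arbitrary:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# simulate an AR(2) process: y_t = 0.2 + 0.5*y_{t-1} - 0.3*y_{t-2} + e_t
rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.2 + 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

fit = AutoReg(y, lags=2).fit()
print(fit.params)  # estimates of [c, phi_1, phi_2]
```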

The AR(p) model regresses against past values of the TS. Alternatively, an MA(q) model uses past errors as the explanatory variables. The MA(q) model is expressed by

$$y_t = \mu + \varepsilon_t + \sum_{i=1}^{q} \theta_i\, \varepsilon_{t-i},$$

where μ is the average value of the TS, θ_i are the model parameters, and q is the order of the model.

The random errors are assumed to follow a Gaussian distribution. So, an MA model is a linear regression of the current observation of the TS against the random errors of one or more prior observations. Since the random errors in the MA model are not directly observable, fitting an MA model to a TS is more complicated.
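Because the errors ε_{t−i} are not observed, an MA model is typically fitted by iterative maximum likelihood; a sketch using statsmodels (the simulated MA(1) coefficient is illustrative):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# simulate an MA(1) process: y_t = mu + e_t + 0.6 * e_{t-1}
rng = np.random.default_rng(2)
e = rng.normal(size=501)
y = 1.0 + e[1:] + 0.6 * e[:-1]

# order=(0, 0, 1) selects a pure MA(1) model
fit = ARIMA(y, order=(0, 0, 1)).fit()
print(fit.params)  # estimates of [mu, theta_1, sigma^2]
```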

A combination of the AR and MA models leads to a general and useful type of TS model, known as the ARMA model. The ARMA(p, q) model can be stated as

$$y_t = c + \sum_{i=1}^{p} \varphi_i\, y_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j\, \varepsilon_{t-j},$$

where p is the order of the autoregressive part and q is the order of the moving average part.

It is essential that, to use ARMA models, the TS be stationary or be transformed into a stationary signal using a suitable transformation function.
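A sketch of this workflow with statsmodels on simulated data (the ARMA(1, 1) coefficients are illustrative): an augmented Dickey-Fuller test screens for stationarity before the model is fitted.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import adfuller

# simulate a stationary ARMA(1, 1): (1 - 0.5B) y_t = (1 + 0.4B) e_t
ar = np.array([1, -0.5])  # AR lag polynomial coefficients
ma = np.array([1, 0.4])   # MA lag polynomial coefficients
y = ArmaProcess(ar, ma).generate_sample(nsample=500)

# a small ADF p-value indicates the series is stationary
print(f"ADF p-value: {adfuller(y)[1]:.3f}")

fit = ARIMA(y, order=(1, 0, 1)).fit()  # ARMA(1, 1) = ARIMA(1, 0, 1)
print(fit.params)
```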

To determine the orders of an ARMA model for TS data, the autocorrelation function (ACF) and the partial autocorrelation function (PACF) are computed. These statistical measures express the relationship between the various observations of the TS. For modeling and forecasting purposes, it is often useful to plot the ACF and PACF against a sequence of time lags; these plots help in estimating the orders of the AR and MA models.

Note that the ACF and PACF of an ARMA(p, q) process both exhibit exponential decay as well as damped sinusoid patterns, which makes identifying the orders of the ARMA(p, q) model relatively more complicated. For such cases, additional sample functions such as the Extended Sample ACF, the Generalized Sample PACF, the Inverse ACF, and canonical correlations can be utilized.
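For instance, the ACF and PACF can be plotted with statsmodels; this is a sketch, and the simulated AR(2) series is only for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# simulate an AR(2) series, whose PACF should cut off after lag 2
rng = np.random.default_rng(3)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=30, ax=axes[0])    # ACF: decays for an AR process
plot_pacf(y, lags=30, ax=axes[1])   # PACF: cuts off after lag p for AR(p)
plt.tight_layout()
plt.show()
```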

The ARMA models can only be deployed for stationary TS data; for nonstationary TS, ARIMA models can be used. The nature of many TS data, such as trading or weather signals, is nonstationary: such TS contain trends and seasonal patterns. So, ARMA models are practically inadequate to describe the nonstationary TS data frequently encountered in practice. For this reason, the ARIMA model was proposed as a generalization of the ARMA model that covers the nonstationary case as well.

ARIMA Models

The ARIMA model and its different forms are based on the well-known Box-Jenkins principle, and so they are also widely known as the Box-Jenkins models [38].

In ARIMA models, a nonstationary TS is transformed to stationary by applying finite differencing to the data points. Mathematically, the ARIMA(p, d, q) model using lag polynomials is written as

$$\varphi(B)\,(1 - B)^{d}\, y_t = c + \theta(B)\, \varepsilon_t, \qquad \varphi(B) = 1 - \sum_{i=1}^{p} \varphi_i B^{i}, \quad \theta(B) = 1 + \sum_{j=1}^{q} \theta_j B^{j},$$

where B is the lag (backshift) operator, B y_t = y_{t−1}, and p, d, and q are the ARIMA model orders for the autoregressive, integrated, and moving average components, respectively; all are integers ≥ 0. The parameter d determines the order of differencing. Often, d = 1 is enough to make the TS stationary. When d = 0, the ARIMA model reduces to an ARMA(p, q) model.

An ARIMA(p, 0, 0) is the AR(p) model, and an ARIMA(0, 0, q) is the MA(q) model. ARIMA(0, 1, 0) is a special case known as the Random Walk model. In trading and price series, the Random Walk model is a well-known model for forecasting.
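As a sketch on simulated data, a random walk fitted as ARIMA(0, 1, 0) produces a flat forecast at the last observed value:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# a random walk: y_t = y_{t-1} + e_t, i.e. an ARIMA(0, 1, 0) process
rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=300))

fit = ARIMA(y, order=(0, 1, 0)).fit()  # d = 1 differences the series once
print(fit.forecast(steps=5))           # all five forecasts equal y[-1]
```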

 