# Methods Adopted in Price Forecasting

## Introduction

Energy demand forecasting is a well-established concept in electrical power systems, whereas electricity price forecasting has emerged as a research area only recently, with the growth of electricity markets. Accurate forecasts of future electricity prices have practical significance for efficient market operation. Different methods have been established over time, mainly categorized into time-series models and simulation-based models. Among these, time-series models are mostly used for day-ahead forecasting.

## Price Forecasting

The price information comes from the generators. Electricity demand depends strongly on peaks during evening hours, weekends, and working hours. The supply side is distinguished by the type or source of electricity generation, i.e., coal, nuclear power plants, and renewable energy resources (solar and wind energy). There are various types of forecasting: spot price forecasting, 24-hour-ahead price forecasting, monthly forecasting, and seasonal forecasting. These models are classified into six groups, as shown in Figure 14.1, with further categories under each of these forecasting methods [17]. The forecasting methods are then selected according to the requirements at hand [18,19].

## Forecasting Algorithms

Many traditional price forecasting methods are available, such as time-of-day methods, regression methods, stochastic time-series methods, state-space methods, expert-system methods, and modern methods (fuzzy logic, genetic algorithms, and neural network-based methods). Purely statistical approaches are not always reliable for price prediction; artificial intelligence and neural network-based methods are more suitable for this forecasting task [20]. Some of these methods are explained here.

FIGURE 14.1 Electricity price forecasting-based models.

### Linear Regression-Based Models

*14.2.3.1.1 Autoregressive Integrated Moving Average (ARIMA) Model*

ARIMA is a statistical price forecasting model used for time-sequence prediction. This model has a strong capability to forecast short-term electricity prices. In the ARIMA model, the upcoming value of a variable is a linear combination of previous values and previous errors. In general, the method is expressed as

$$\phi(B)\,(1 - B)^{d}\, p_t = \theta(B)\,\varepsilon_t$$

where $p_t$ is the price at time $t$, $\phi(B)$ and $\theta(B)$ are polynomials in the backshift operator $B$, $d$ is the order of differencing, and $\varepsilon_t$ is the error term. The ARIMA method is usually represented in the form ARIMA($p$, $d$, $q$), or ARIMA($P$, $D$, $Q$) when seasonal terms are included [21]. A practical difficulty with this model is that conventional model-recognition strategies struggle to distinguish the correct model from the class of possible models.
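As a minimal illustration of the autoregressive idea, the sketch below fits only the AR part of the model by least squares (differencing and moving-average terms are omitted, and the price series is synthetic, not market data):

```python
import numpy as np

def fit_ar(prices, p):
    """Least-squares fit of an AR(p) model:
    p_t = c + phi_1 * p_{t-1} + ... + phi_p * p_{t-p} + e_t."""
    n = len(prices)
    X = np.column_stack([prices[p - i - 1 : n - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])   # intercept column
    y = prices[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef                                  # [c, phi_1, ..., phi_p]

def forecast_ar(prices, coef):
    """One-step-ahead forecast from the most recent lags."""
    p = len(coef) - 1
    lags = prices[-1 : -p - 1 : -1]              # most recent value first
    return coef[0] + float(np.dot(coef[1:], lags))

# Synthetic hourly price series generated from a known AR(2) process
rng = np.random.default_rng(0)
prices = np.zeros(500)
for t in range(2, 500):
    prices[t] = 10 + 0.6 * prices[t - 1] + 0.3 * prices[t - 2] + rng.normal(0, 0.5)

coef = fit_ar(prices, p=2)       # estimates should be near the true [10, 0.6, 0.3]
next_price = forecast_ar(prices, coef)
```

A full ARIMA fit (with differencing and MA terms estimated by maximum likelihood) would normally use a library such as statsmodels rather than this hand-rolled regression.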

*14.2.3.1.2 GARCH Model*

GARCH is the abbreviation of the generalized autoregressive conditional heteroskedasticity model. While the ARIMA model aims at predicting and forecasting the market price itself, the GARCH model aims at capturing fluctuations in its variance (volatility). Assume a time sequence $y_t$ given by

$$y_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t$$

where $\mu$ is the offset and $z_t$ is a white-noise process. The GARCH($p$, $q$) model is then defined as

$$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2$$

where $p$ represents the number of $\sigma^2$ terms and $q$ represents the number of $\varepsilon^2$ terms [22].
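The conditional-variance recursion is short enough to sketch directly. The GARCH(1,1) case below uses illustrative parameter values and simulated errors, not market data:

```python
import numpy as np

def garch_variance(eps, omega, alpha, beta):
    """Conditional-variance recursion of a GARCH(1,1) model:
    sigma2_t = omega + alpha * eps_{t-1}**2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Simulate errors whose volatility clusters, then recover the variance path
rng = np.random.default_rng(1)
omega, alpha, beta = 0.1, 0.1, 0.8   # unconditional variance = 0.1 / (1 - 0.9) = 1.0
n = 1000
eps = np.empty(n)
s2_path = np.empty(n)
s2 = omega / (1.0 - alpha - beta)
for t in range(n):
    s2_path[t] = s2
    eps[t] = np.sqrt(s2) * rng.normal()
    s2 = omega + alpha * eps[t] ** 2 + beta * s2

sigma2 = garch_variance(eps, omega, alpha, beta)
```

In practice the parameters $\omega$, $\alpha$, $\beta$ are estimated by maximum likelihood; here they are fixed so the recursion itself is the focus.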

### Based on Nonlinear Heuristics

*14.2.3.2.1 Artificial Neural Network (ANN) Model*

One application of ANNs is time-series modelling, such as stock price prediction, future demand, and sales promotion. An ANN's architecture can be specified by three variables, namely the input neurons, the hidden layers, and the output neurons. The number of output neurons corresponds to the prediction horizon of the time-series forecasting problem to be addressed. This architecture is therefore well suited to time-series forecasting problems; it is shown in Figure 14.2, with the back-propagation technique shown in Figure 14.3.

Recurrent neural network (RNN) architecture will be explained in an upcoming section. ANNs have some drawbacks for forecasting; i.e., a large sample size is needed to produce reliable and consistent forecast performance.
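A minimal sketch of such a network follows: one hidden layer trained by back propagation to predict the next value of a synthetic series from its two previous values (the layer sizes, learning rate, and data are illustrative choices, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy series: predict x_t from its two previous values
series = np.sin(np.linspace(0, 20, 300))
X = np.column_stack([series[:-2], series[1:-1]])   # inputs: two lagged values
y = series[2:].reshape(-1, 1)                      # target: the next value

# One hidden layer of 8 tanh neurons, one linear output neuron
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(3000):
    H = np.tanh(X @ W1 + b1)            # forward pass: hidden activations
    pred = H @ W2 + b2                  # linear output neuron
    err = pred - y
    # Back propagation of the mean-squared-error gradient
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)  # tanh derivative
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

The training loss drops well below the variance of the target, illustrating that even a small feedforward network can learn a lag-based mapping, which is the essence of ANN-based time-series forecasting.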

FIGURE 14.2 ANN architecture.

### Deep Learning-Based Models

Among current machine learning techniques, deep learning can be considered one of the most promising tools so far, especially in the fields of image and text mining [23].

*14.2.3.3.1 Recurrent Neural Network (RNN)*

RNN is particularly significant in forecasting of time series. Each neuron in an RNN is capable of keeping preceding input information using an internal memory. Architecture of RNN is shown in Figure 14.4.

In this network, $x_0, x_1, \ldots, x_t$ are the input price values and $h_0, h_1, \ldots, h_t$ are the hidden states of the recurrent network. Circles represent the layers of the recurrent network. Usually, an RNN has three groups of parameters: input-to-hidden weights ($W$), hidden-to-hidden weights ($U$), and hidden-to-output weights ($V$). The property of weight sharing makes the network suitable for inputs of variable length. The hidden states are stated recursively as

$$h_t = \tanh(W x_t + U h_{t-1} + b)$$

One can minimize the cost function to obtain the correct weights. To apply back propagation (through time), one has to calculate the gradient of the RNN's cost with respect to these weights.
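The recursive hidden-state update can be sketched as a forward pass. The weights and the toy, pre-normalized input sequence below are illustrative, and weight sharing is visible in that the same $W$, $U$, $V$ are applied at every step:

```python
import numpy as np

def rnn_forward(xs, W, U, V, b, c):
    """Vanilla RNN: h_t = tanh(W x_t + U h_{t-1} + b), output y_t = V h_t + c.
    The same W, U, V are reused at every time step (weight sharing)."""
    h = np.zeros(U.shape[0])
    outputs = []
    for x in xs:
        h = np.tanh(W @ x + U @ h + b)   # hidden state carries past information
        outputs.append(V @ h + c)
    return np.array(outputs), h

rng = np.random.default_rng(3)
n_hidden, n_in, n_out = 4, 1, 1
W = rng.normal(0.0, 0.3, (n_hidden, n_in))
U = rng.normal(0.0, 0.3, (n_hidden, n_hidden))
V = rng.normal(0.0, 0.3, (n_out, n_hidden))
b = np.zeros(n_hidden); c = np.zeros(n_out)

# Toy normalized price inputs (scaling keeps tanh out of saturation)
xs = [np.array([p]) for p in [0.30, 0.33, 0.31, 0.30]]
ys, h_last = rnn_forward(xs, W, U, V, b, c)
```

Because the parameters do not depend on the sequence length, the same function handles sequences of any length, which is exactly the variable-dimension property noted above.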

*14.2.3.3.2 Long Short-Term Memory (LSTM)*

RNN with LSTM has paved the way as an efficient and accessible paradigm for many sequential learning problems [23]. The fundamental concept of the LSTM design is a memory cell that can maintain its state over time, with nonlinear gating units controlling the stream of information flowing into and out of the cell. The main idea behind the LSTM is to control the cell state using dissimilar gate forms, namely the input gate, the forget gate, and the output gate. Figure 14.5 shows the gate levels of the LSTM architecture.

FIGURE 14.5 LSTM architecture.

Each cell's state $c_{t-1}$ passes through the LSTM cell to produce the state of the subsequent step, $c_t$. The mathematical equations of the three gates are defined as follows:

$$i_t = \sigma(w_i p_t + u_i h_{t-1} + b_i)$$
$$f_t = \sigma(w_f p_t + u_f h_{t-1} + b_f)$$
$$o_t = \sigma(w_o p_t + u_o h_{t-1} + b_o)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(w_c p_t + u_c h_{t-1} + b_c)$$
$$h_t = o_t \odot \tanh(c_t)$$

where $i_t$ is the input gate that regulates how much of the input $p_t$ and the preceding hidden-layer state $h_{t-1}$ is permitted to pass into the memory cell; $f_t$ is the forget gate, which regulates how much past evidence is retained in the cell; $o_t$ is the output gate that governs how much evidence is transmitted from the present memory cell to the hidden-layer state; $c_t$ is the memory cell state; $w$ and $u$ are the weight matrices; and $b$ is the bias of the memory cell. The symbol $\odot$ signifies elementwise multiplication of the individual parameters. Every gate can be viewed as a layer of the neural network, and LSTM memory cells may also be arranged to form a network organized in several layers [23].
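The gate structure can be sketched as a single forward step. The dimensions and inputs below are illustrative, and for compactness the four gate blocks are stacked into one weight matrix:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(p_t, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b hold the stacked parameters of the
    input (i), forget (f), output (o) gates and the candidate update (g)."""
    z = W @ p_t + U @ h_prev + b          # all four pre-activations at once
    n = len(c_prev)
    i = sigmoid(z[0:n])                   # input gate
    f = sigmoid(z[n:2 * n])               # forget gate
    o = sigmoid(z[2 * n:3 * n])           # output gate
    g = np.tanh(z[3 * n:4 * n])           # candidate cell update
    c = f * c_prev + i * g                # elementwise (Hadamard) products
    h = o * np.tanh(c)                    # new hidden-layer state
    return h, c

rng = np.random.default_rng(4)
n_hidden, n_in = 3, 1
W = rng.normal(0.0, 0.3, (4 * n_hidden, n_in))
U = rng.normal(0.0, 0.3, (4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)

h = np.zeros(n_hidden); c = np.zeros(n_hidden)
for price in [0.30, 0.33, 0.31]:          # toy normalized price inputs
    h, c = lstm_step(np.array([price]), h, c, W, U, b)
```

Note how the forget gate scales the previous cell state while the input gate scales the new candidate; this additive cell update is what lets the LSTM retain information over long sequences.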

## Conclusion

Various price forecasting approaches in the deregulated scenario are analysed in this chapter. Time-sequence-based approaches are the most widely used for forecasting electricity prices because of their simplicity and ease of execution. Among these models, the deep learning models currently applied to price forecasting are the most efficient, providing high accuracy and precision.