# Materials and Methodology

In this section, we describe the dataset and the proposed methodology. A deep CNN model is used to classify EEG signals into three distinct classes: normal, pre-ictal and seizure. The work flow of the proposed seizure-detection model is shown in Figure 2.2. Performance is then evaluated for train/test ratios of 90/10, 80/20, 70/30 and 60/40. In addition, 10-fold cross-validation was applied to the 90/10 train/test split.
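The evaluation protocol described above can be sketched as follows. This is a minimal pure-NumPy illustration: the fragment count, labels and split helpers are placeholders, not the authors' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4097))   # placeholder: 300 fragments, 4097 samples each
y = rng.integers(0, 3, size=300)       # 0 = normal, 1 = pre-ictal, 2 = seizure

def holdout_split(n, test_frac, rng):
    """Shuffle indices, then split into train/test at the given ratio."""
    idx = rng.permutation(n)
    n_test = int(round(n * test_frac))
    return idx[n_test:], idx[:n_test]

# Train/test ratios 90/10, 80/20, 70/30 and 60/40
for frac in (0.1, 0.2, 0.3, 0.4):
    train_idx, test_idx = holdout_split(len(X), frac, rng)
    assert len(train_idx) + len(test_idx) == len(X)

def kfold_indices(n, k, rng):
    """Yield (train, test) index arrays; each fold serves once as the test set."""
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, test

# 10-fold cross-validation on the data
n_folds = sum(1 for _ in kfold_indices(len(X), 10, rng))
```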

## Dataset and Normalization

The dataset acquired by Andrzejak et al. [2] was used for the research in this paper. It was obtained from five distinct subjects and contains three classes: normal (set Z), pre-ictal (set N) and seizure (set S). Each subset consists of 100 single-channel EEG fragments; each fragment is 23.6 s long with a sampling frequency of 173.6 Hz [10]. The complete EEG data was recorded with a 128-channel amplifier system using an average common reference. The spectral bandwidth of the data ranges from 0.5 Hz to 85 Hz, matching that of the acquisition system.

In our study, the database mixes variables with large variance and small variance, so z-score normalization is applied as a preprocessing step during data preparation. The aim of normalization is to bring the numeric columns onto a common scale without distorting differences between their values: each value is shifted by the mean and scaled by the standard deviation, giving zero mean and unit variance. The z-score is calculated according to Equation 2.1, where μ is the mean, σ is the standard deviation and x is the input value:

$$ z = \frac{x - \mu}{\sigma} \tag{2.1} $$
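A minimal sketch of the z-score preprocessing step; the toy sample values are illustrative only.

```python
import numpy as np

def zscore(x):
    """Z-score normalization: shift by the mean, scale by the standard deviation."""
    return (x - x.mean()) / x.std()

segment = np.array([10.0, 12.0, 9.0, 15.0, 14.0])  # toy EEG samples
z = zscore(segment)
# z now has zero mean and unit standard deviation
```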

FIGURE 2.2 Flow chart of proposed model.

## Convolution Neural Network (CNN)

Deep learning works on the raw signal and extracts features automatically, directly from the input data. The advantage of DL methods is that they scale to large datasets better than conventional classification methods. The network keeps learning to find the best weights: feedback is periodically propagated back from the output to update the weights until the best combination is found [26]. The basic design of a CNN is very similar to LeNet-5. There are mainly three types of layer: the convolutional layer, the pooling layer and the fully connected layer [27, 28, 29].

*Convolutional layer*: This is the initial layer of the CNN model. The EEG signals are convolved with filters, also called kernels, and the resulting selected features are passed as output to the next layer. Each convolution extracts a particular feature from the input values [27, 30, 31].

*Pooling Layer:* The pooling operation is generally employed to reduce the dimensionality of the preceding convolutional layer's output; for this reason it is also known as a down-sampling layer [24]. Average, max and sum pooling are the main distinct types. Max pooling, which keeps the maximum element of each window of the feature map, is used most frequently [32].
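The down-sampling effect of max pooling can be illustrated with a toy 1-D example (pool size 2, non-overlapping windows); this is a sketch, not the authors' implementation.

```python
import numpy as np

def max_pool_1d(x, size=2):
    """Keep the maximum of each non-overlapping window of `size` samples."""
    n = len(x) // size * size              # drop any trailing remainder
    return x[:n].reshape(-1, size).max(axis=1)

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0])
pooled = max_pool_1d(x)                    # -> [3. 5. 4.]
```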

*ReLU Function:* ReLU (Rectified Linear Unit) is a non-linear operation that maps negative values to zero and leaves positive values unchanged, which makes training faster and more effective [33]. It applies an element-wise activation function [34, 35, 36], whose output is

$$ f(x) = \max(0, x) $$
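The element-wise behaviour is a one-liner in NumPy:

```python
import numpy as np

def relu(x):
    """Element-wise ReLU: negatives become zero, positives pass through."""
    return np.maximum(0, x)

out = relu(np.array([-2.0, -0.5, 0.0, 1.5]))  # -> [0.  0.  0.  1.5]
```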

*Fully-Connected layer:* The output of the final max-pooling layer is fed into this layer. The feature matrix is flattened, and the resulting vector is passed through fully connected layers that resemble a conventional neural network [27, 37].

*SoftMax:* The last output layer uses the SoftMax activation function, which provides the end result as class probabilities [38, 39]. Mathematically it is represented as

$$ \mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j} e^{z_j}} $$
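A minimal SoftMax sketch: the scores are exponentiated and normalized so the outputs sum to 1; subtracting the maximum first keeps the exponentials numerically stable.

```python
import numpy as np

def softmax(z):
    """Map raw class scores to probabilities that sum to 1."""
    e = np.exp(z - z.max())   # shift by max for numerical stability
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))  # one score per class
# p sums to 1; the largest score gets the largest probability
```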

*Proposed CNN Architecture:* Figure 2.3 below summarizes the architecture of the proposed 12-layer CNN model. There is a total of 4097 inputs and a combination of four convolutional layers, four max-pooling layers and three fully connected layers. For each convolutional layer, it is necessary to specify the filter

FIGURE 2.3 Architecture of proposed CNN model.

(kernel) size. The first convolutional layer uses a kernel size of 40 and outputs 4058 values, which are passed to a max-pooling layer that halves them to 2029 values. This output is passed to the second convolutional layer with filter size 30, which returns 2000 values, followed by a max-pooling layer that produces 1000 values. The third and fourth convolutional layers take inputs of 1000 and 495 values with kernel sizes 10 and 4, producing 991 and 492 values respectively, each followed by its own max-pooling layer. A flatten layer then connects to the fully connected layers, which use the ReLU activation function. Finally, the last of the three fully connected layers uses SoftMax as the activation function.
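The layer sizes quoted above can be checked arithmetically, assuming a 'valid' 1-D convolution with stride 1 (output length n − k + 1) and non-overlapping pooling of size 2 (floor division); these strides and the pool size are our assumptions, as the text only states the kernel sizes.

```python
def conv_out(n, k):
    """'Valid' 1-D convolution with stride 1: n samples -> n - k + 1."""
    return n - k + 1

def pool_out(n, size=2):
    """Non-overlapping pooling of the given size (floor division)."""
    return n // size

n, sizes = 4097, []
for k in (40, 30, 10, 4):      # kernel sizes of the four convolutional layers
    n = conv_out(n, k)
    sizes.append(n)            # conv outputs: 4058, 2000, 991, 492
    n = pool_out(n)
    sizes.append(n)            # pool outputs: 2029, 1000, 495, 246
```

The trace reproduces every intermediate size quoted in the text (4058 → 2029 → 2000 → 1000 → 991 → 495 → 492), which supports the stride and pool-size assumptions.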