Results and Discussion

Experiment 1: 10-Fold Cross Validation on 90:10 Ratio

The proposed algorithm was implemented in Python with the help of the Keras library (https://keras.io/layers/convolutional). Multiple simulations were performed, varying the learning rate and the number of epochs, and several training/testing ratios were evaluated. In this experiment, 90% of the data is used for training and the remaining 10% for testing, following a 10-fold cross-validation approach: the signals are first divided randomly into 10 parts of equal size, 9 of the 10 fragments are used for training, and the remaining fragment is used for testing. This procedure is repeated 10 times, each time holding out a different fragment. Training works like conventional back-propagation (BP) with a batch size of 34: the gradient of the loss function with respect to each weight is computed in the backward pass, propagating the error back through the network, and the weights are updated during training. The model is trained for a maximum of 50 epochs. Within the 90% training portion, 70% of the data is used for training and the remaining 30% for validation, as shown in Figure 2.4.
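The fold-splitting protocol described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; `build_cnn`, `signals`, and `labels` in the commented usage are hypothetical names.

```python
import random

def kfold_indices(n_samples, k=10, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # random equal-size partition
    fold = n_samples // k
    for f in range(k):
        test = idx[f * fold:(f + 1) * fold]
        train = idx[:f * fold] + idx[(f + 1) * fold:]
        yield train, test

# Each fold would then drive one training run of the CNN, e.g. with Keras
# (build_cnn, signals, and labels are placeholders, not the authors' code):
#
# for train, test in kfold_indices(len(signals)):
#     model = build_cnn()
#     model.fit(signals[train], labels[train], batch_size=34, epochs=50,
#               validation_split=0.3)   # 70/30 train/validation inside the fold
#     model.evaluate(signals[test], labels[test])
```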

Overall performance is evaluated on the basis of three main measurements: accuracy, specificity, and sensitivity of the classification. All of these measurements are derived from the confusion matrix, where TP denotes true positives, FP false positives, FN false negatives, and TN true negatives [29].


FIGURE 2.4 Data divided for training, validation, and testing ratios.

Table 2.1 presents the confusion matrix for the 90:10 ratio, aggregated over all ten folds of cross-validation. In this model, 95% of normal EEG signals are correctly classified and 5% are misclassified. For the pre-ictal class, 97% of EEG signals are correctly classified and 3% misclassified, and for the seizure class, 96% are correctly classified and 4% misclassified.
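The per-class percentages above, together with the accuracy, PPV, sensitivity, and specificity columns of Table 2.1, follow from the confusion counts in a one-vs-rest fashion. A minimal sketch, with the matrix transcribed from Table 2.1:

```python
# Confusion matrix from Table 2.1: rows = true class, columns = predicted class.
cm = [
    [95, 3, 2],   # Normal
    [2, 97, 1],   # Pre-ictal
    [2, 2, 96],   # Seizure
]
classes = ["Normal", "Pre-ictal", "Seizure"]
total = sum(sum(row) for row in cm)

def class_metrics(i):
    """One-vs-rest metrics (in %) for class i."""
    tp = cm[i][i]
    fn = sum(cm[i]) - tp                      # members of class i that were missed
    fp = sum(row[i] for row in cm) - tp       # other classes predicted as class i
    tn = total - tp - fn - fp
    return {
        "accuracy": 100 * (tp + tn) / total,
        "ppv": 100 * tp / (tp + fp),
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
    }

for i, name in enumerate(classes):
    print(name, {k: round(v, 2) for k, v in class_metrics(i).items()})
```

Running this reproduces the table's columns up to rounding (e.g. pre-ictal accuracy 97.33, specificity 97.5).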

Figure 2.5 depicts the accuracy graph for the three-class classification, namely the normal, pre-ictal, and seizure classes, with a 90:10 train/test ratio.

Experiment 2: Training and Testing Ratio Variation

We then performed training and testing with other ratios (80:20, 70:30, and 60:40) for the 12-layer deep CNN model; the results are presented in the tables below.

Similarly, Table 2.2 presents the 80:20 ratio confusion matrix. In this model, 99% of normal EEG signals are correctly classified and 1% are misclassified. For the pre-ictal class, 90% of EEG signals are

TABLE 2.1

Classification Results of Three Classes Using 10-Fold Cross Validation

                       Predicted
Actual       Normal   Pre-ictal   Seizure   Accuracy    PPV     Sensitivity   Specificity
Normal         95         3          2        97       95.95        95            98
Pre-ictal       2        97          1        97.33    95.09        97            97.5
Seizure         2         2         96        97.66    96.96        96            98.5


FIGURE 2.5 90:10 ratio training and testing accuracy graph for three classes.

TABLE 2.2

80:20 Ratio Confusion Matrix and Performance Summary

                       Predicted
Actual       Normal   Pre-ictal   Seizure   Accuracy    PPV     Sensitivity   Specificity
Normal         99         0          1        95.66    98.01        99            94
Pre-ictal      10        90          0        95.66    96.77        90            98.5
Seizure         2         3         95        98       98.95        95            99.5

correctly classified and 10% are misclassified, and for the seizure class, 95% are correctly classified and 5% are misclassified.

Table 2.3 presents the 70:30 ratio confusion matrix. In this model, 96% of normal EEG signals are correctly classified and 4% are misclassified. For the pre-ictal class, 98% of EEG signals are correctly classified and 2% misclassified, and for the seizure class, 92% are correctly classified and 8% misclassified.

TABLE 2.3

70:30 Ratio Confusion Matrix and Performance Summary

                       Predicted
Actual       Normal   Pre-ictal   Seizure   Accuracy    PPV     Sensitivity   Specificity
Normal         96         3          1        97.66    96.96        96            98.5
Pre-ictal       2        98          0        96       90.74        98            95.0
Seizure         1         7         92        97       98.92        92            99.5

Table 2.4 presents the 60:40 ratio confusion matrix. In this model, 78% of normal EEG signals are correctly classified and 22% are misclassified. For the pre-ictal class, 98% of EEG signals are correctly classified and 2% misclassified, and for the seizure class, 98% are correctly classified and 2% misclassified. The best performance was obtained with the 90:10 train/test ratio across the 10 folds: an accuracy of 97.33%, sensitivity of 96%, specificity of 98%, and precision of 96%. The table and graph below summarize all the training/testing ratios used in this paper and their performance; the 10-fold cross-validation with the 90:10 ratio performs best among the train/test ratios considered.

Table 2.5 shows the training and testing accuracy for the different train/test ratios with 50 epochs of the CNN model, and Figure 2.6 shows a graphical presentation of the testing accuracy.
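A small sketch of this comparison, with the accuracies transcribed from Table 2.5, also makes the trend explicit: the 90:10 split has both the highest testing accuracy and the smallest gap between training and testing accuracy.

```python
# Train/test accuracies (%) transcribed from Table 2.5: ratio -> (train, test).
results = {
    "90:10": (99.83, 97.33),
    "80:20": (99.12, 95.00),
    "70:30": (98.75, 92.00),
    "60:40": (98.33, 90.50),
}

# Ratio with the highest testing accuracy.
best = max(results, key=lambda r: results[r][1])

# Train/test gap per ratio: it widens as less data is used for training.
gap = {r: round(train - test, 2) for r, (train, test) in results.items()}

print(best)   # 90:10
print(gap)
```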

Table 2.6 shows a comparison with the performance of other research works. Nigam and Graupe [13] demonstrated automated detection of epilepsy from EEG signals using a neural network. First, a non-linear pre-processing filter is applied, combining a LAMSTAR neural network and an ANN: LAMSTAR is used for input preparation and the ANN for training and evaluating system performance. An accuracy of 97.20% was achieved. Kannathal et al. [40, 41] used entropy estimators for detecting epileptic seizures in EEG signals, with an ANFIS classifier applied to the test data; the classification accuracy reached

TABLE 2.4

60:40 Ratio Confusion Matrix and Performance Summary

                       Predicted
Actual       Normal   Pre-ictal   Seizure   Accuracy    PPV     Sensitivity   Specificity
Normal         78        22          0        91.33    98.74        78            99.00
Pre-ictal       2        98          0        91.33    80.32        98            88.00
Seizure         0         2         98        99.33    100          98            100

TABLE 2.5

All Ratio Training and Testing Performance

Ratio    Epochs   Training accuracy   Testing accuracy
90:10      50          99.83               97.33
80:20      50          99.12               95.00
70:30      50          98.75               92.00
60:40      50          98.33               90.50


FIGURE 2.6 All ratio graph for testing accuracy.

TABLE 2.6

Synopsis of Erstwhile Research for Detection of Epileptic and Normal Classes

Author                       Feature                           Method                         Accuracy (%)
Song and Lio [52]            Sample entropy                    Extreme learning machine       95.67
Nigam and Graupe [13]        Nonlinear pre-processing filter   Diagnostic neural network      97.20
Kannathal et al. [40]        Entropy measures                  ANFIS                          92.20
Subasi [42]                  DWT                               Mixture of experts model       94.50
Guo et al. [17]              DWT                               MLPNN                          95.20
Acharya et al. [53]          Entropy measures                  Fuzzy logic                    98.1
Chua et al. [54, 55]         HOS and power spectral density    Gaussian classifier            93.11
Ubeyli et al. [56, 57]       Wavelet transform                 Mixture of experts model       93.17
Tawfik et al. [57]           Weighted permutation entropy      SVM                            97.25
Ghosh-Dastidar et al. [59]   Nonlinear features                Multi-spiking neural network   90.7 to 94.8
Acharya et al. [1]           10-fold cross-validation          13-layer CNN model             88.7
Proposed model               10-fold cross-validation          12-layer CNN model             ACC: 97.33, SEN: 96, SPE: 98

90%. Subasi [42-50] used DWT feature extraction, in which the input EEG signals are decomposed into sub-bands; each sub-band is then fed into an ME (mixture of experts) network for classification, obtaining an accuracy of 95%. Srinivasan et al. [51] used approximate entropy as the feature extraction method; approximate entropy predicts the current amplitude values of a signal based on its previous values, and an artificial neural network was used for classification of the epilepsy class. Guo et al. [17] used wavelet transforms derived from the multi-wavelet transform for feature extraction and an ANN for classification.

Song and Lio [52] proposed a novel method for automatic epileptic seizure detection. They apply an optimized sample entropy algorithm for feature extraction, and an extreme learning machine to classify each recorded EEG signal as normal or seizure. Acharya et al. [53] proposed a method for automated detection of normal, pre-ictal, and ictal conditions from EEG records. They extract four entropy features (sample entropy, approximate entropy, phase entropy 1, and phase entropy 2) and feed them into seven different classifiers: SVM, naive Bayes, KNN, Gaussian mixture model, decision tree, PNN, and Fuzzy Sugeno. Among these seven classifiers, the fuzzy classifier differentiated the three classes with the highest efficiency.

 