Proposed Model

Fig. 15.1 shows the working principle of the feature extraction based DNN (FEDNN) model. In the beginning, the data collection process takes place, followed by preprocessing. Then, the preprocessed data undergo feature extraction to derive


FIGURE 15.1 Overall process of proposed FEDNN.

useful features. Finally, a DNN-based classification process takes place, which is explained in the following subsections.

IoHT-Based Patient Data Collection

Here, a heart disease (HD) detection method is deployed with the help of the HOBDBNN model. The prediction method is combined with IoT because the automated system needs larger amounts of patient data to improve detection accuracy. When the IoT device is placed on the human body, it gathers data such as ECG, heart rate, BP, peripheral pulse oximetry (SpO2) level, glucose level, blood fat level, and pulse rate. The collected data is then transferred to a cloud database so the patient's heart status can be monitored. A portable watch serves as the IoT device that tracks and collates the patient's data.

The device gathers a patient's heart values and also records physical activities, since physical activity provides useful context about a patient's heart status; this data is in turn provided to the health care center over Bluetooth links.

Here, IoT medical data is gathered by placing the sensor device on a human body. The collected data is forwarded through the gateway and recorded in a cloud server. From the cloud, the HD details are processed in three major phases: heart data preprocessing, feature extraction, and HD prediction. The model does not apply a feature selection (FS) algorithm because it is fitted to predict HD by computing the data and features directly. By analyzing these features, the system forecasts HD effectively, attaining efficient training as well as classification.

IoHT Medical Data Preprocessing

The first phase is IoT medical data preprocessing, which involves removing noise and handling missing data in the gathered information. Noise-free data leads to an efficient analysis of HD-derived patterns. Here, irregular data is removed under the application of the median studentized residual method, which supports appropriate examination of the data set and improves the entire HD recognition task. At the initial stage, the data is investigated row by row and column by column, and missing values are replaced by the median value. The median is computed by arranging the values in ascending order and taking the center value; it is employed because, unlike the mean, it is robust to skew and outliers. Using this center value, irregular values can also be replaced. Once the missing values are replaced, the data is normalized to the range of 0 to 1 to reduce complexity, thereby easing the recognition of HD patterns. Normalization is carried out with the help of the distributions from a regression analysis of the heart details, after which the data is effectively normalized to the range (0, 1). Once the noise has been removed, diverse HD features are extracted for accurately classifying HD patterns.
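The preprocessing step above can be sketched in plain Python. This is a minimal sketch under assumptions the chapter does not fix: records are lists of numeric readings with `None` marking a missing entry, and column-wise min-max scaling is used for the (0, 1) normalization.

```python
def preprocess(rows):
    """Median-impute missing values (None), then min-max normalize each
    column to [0, 1]. `rows` is a list of records of numeric readings
    (heart rate, BP, ...); the schema is an illustrative assumption."""
    n_cols = len(rows[0])
    cols = [[r[c] for r in rows] for c in range(n_cols)]
    cleaned = []
    for col in cols:
        present = sorted(v for v in col if v is not None)
        m = len(present)
        # median of observed values: robust to skew and outliers
        median = (present[m // 2] if m % 2 else
                  (present[m // 2 - 1] + present[m // 2]) / 2)
        filled = [median if v is None else v for v in col]
        lo, hi = min(filled), max(filled)
        span = hi - lo or 1.0          # guard against constant columns
        cleaned.append([(v - lo) / span for v in filled])
    # back to row-major records
    return [list(r) for r in zip(*cleaned)]
```

For example, a missing BP reading is replaced by the column median before the column is rescaled.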

Heart Feature Extraction

The next step is to extract diverse features from the IoT-centric medical details. The data set encloses data such as heart rate, BP, and blood glucose level. In order to assess the condition of HD, it is essential to obtain major features such as statistical and temporal features. The components of feature derivation are shown by the given equations. The mean RR interval is

μ = (1/N) Σ_{i=1..N} RR_i

where RR_i is referred to as the i-th RR interval of the attained data and N denotes the overall count of RR intervals in the data. Here, the root mean square of successive differences (RMSSD) is

RMSSD = sqrt( (1/(N−1)) Σ_{i=1..N−1} (RR_{i+1} − RR_i)^2 )

where N is the count of intervals, RR refers to the interval of the heartbeat, μ implies the mean value, and R_i depicts the probability rate. Hence, the given features are obtained from data gathered by wearable IoT medical devices. They serve as determinants to predict HD and capture changes in the HD pattern.
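The temporal features can be computed directly from a list of RR intervals; a minimal sketch, assuming the intervals arrive as a plain list of millisecond values. SDNN is included as a common statistical companion to the mean and RMSSD and is an assumption, not a formula taken from the chapter.

```python
import math

def hrv_features(rr):
    """Statistical/temporal features from a list of RR intervals (ms)."""
    n = len(rr)
    mu = sum(rr) / n                                        # mean RR interval
    # standard deviation of the intervals around the mean (SDNN)
    sdnn = math.sqrt(sum((x - mu) ** 2 for x in rr) / n)
    # root mean square of successive differences (RMSSD)
    rmssd = math.sqrt(sum((rr[i + 1] - rr[i]) ** 2
                          for i in range(n - 1)) / (n - 1))
    return mu, sdnn, rmssd
```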

DNN for Data Classification

This approach evolved from the properties of deep networks: a DNN-based approach with stacked autoencoders (SAE) for diabetes data classification that enhances every evaluation parameter of the classification problem. The DNN classification for the diabetes data set is developed with the application of SAE together with a softmax layer, as defined below. The data set comprises eight attributes and a class variable. These eight attributes are provided as input to the input layer. The DNN is built from layers of SAE; the network is composed of two hidden layers with 20 neurons each. A softmax layer follows the final hidden layer to perform the classification. The output layer provides the probabilities of the diabetic and nondiabetic classes for the provided data. Parameters employed for the developed model are provided in Table 15.1 and the architecture is shown in Fig. 15.2.
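The described architecture (eight attributes, two hidden layers of 20 neurons, a two-class softmax output) can be sketched as a forward pass in plain Python. The random initialization and sigmoid activations are placeholders; in the chapter the hidden-layer weights come from SAE pretraining.

```python
import math, random

random.seed(0)

def dense(n_in, n_out):
    """Random (untrained) weights and zero biases; in the chapter these
    weights would come from layer-wise SAE pretraining."""
    w = [[random.gauss(0, 0.1) for _ in range(n_out)] for _ in range(n_in)]
    return w, [0.0] * n_out

def affine(x, w, b):
    return [sum(xi * w[i][j] for i, xi in enumerate(x)) + b[j]
            for j in range(len(b))]

def sigmoid(z):
    return [1.0 / (1.0 + math.exp(-v)) for v in z]

def softmax(z):
    m = max(z)                          # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# layer sizes from the text: 8 attributes -> 20 -> 20 -> 2 classes
W1, b1 = dense(8, 20)
W2, b2 = dense(20, 20)
W3, b3 = dense(20, 2)

def predict(x):
    h1 = sigmoid(affine(x, W1, b1))     # SAE encoder layer 1
    h2 = sigmoid(affine(h1, W2, b2))    # SAE encoder layer 2
    return softmax(affine(h2, W3, b3))  # diabetic / nondiabetic probabilities
```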

Training of Layers

Suppose the N input vectors applied for training the AE are {x(1), x(2), ..., x(N)}.

Training the AE attempts to reconstruct the input, as provided in the following:

x'(i) ≈ x(i), i = 1, ..., N.

This can be written as

x'(i) = f_AE(x(i))

where f_AE is the function that maps input to output in the AE.
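A single AE can be sketched as an encode/decode pair, so that f_AE(x) is the reconstruction x'(i); the sigmoid activations and layer sizes below are illustrative assumptions, and the weights are left untrained.

```python
import math, random

random.seed(1)

def rand_mat(r, c):
    return [[random.gauss(0, 0.1) for _ in range(c)] for _ in range(r)]

class AE:
    """Single autoencoder: x -> h (encode) -> x' (decode), with x' ~= x."""
    def __init__(self, n_in, n_hidden):
        self.We, self.be = rand_mat(n_in, n_hidden), [0.0] * n_hidden
        self.Wd, self.bd = rand_mat(n_hidden, n_in), [0.0] * n_in

    @staticmethod
    def _layer(x, W, b):
        z = [sum(xi * W[i][j] for i, xi in enumerate(x)) + b[j]
             for j in range(len(b))]
        return [1.0 / (1.0 + math.exp(-v)) for v in z]

    def encode(self, x):
        return self._layer(x, self.We, self.be)

    def forward(self, x):               # f_AE: reconstruction x'(i)
        return self._layer(self.encode(x), self.Wd, self.bd)
```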

TABLE 15.1

Sensitivity Analysis of Existing and Proposed FEDNN Methods


Number of Patients


FIGURE 15.2 DNN architecture.

Here, the AE is trained by minimizing an objective function that combines the overall error terms as

E = E_mse + E_reg + E_sparsity

where E_mse, E_reg, and E_sparsity are the mean squared error, regularization, and sparsity factors, correspondingly. The mean squared error, E_mse, is estimated by

E_mse = (1/N) Σ_{i=1..N} e_i^2

where e_i denotes the error, that is, the difference between the actual output, x(i), and the reconstructed output, x'(i). The error e_i is determined as

e_i = x(i) − x'(i)

Deep networks can memorize each point of the training data set, which leads to overfitting of the technique. This is a known problem with deep networks, since it yields poor performance on novel testing data. To resolve the problem, a regularizing factor, E_reg, is included in the objective function, estimated by applying

E_reg = (λ/2) Σ_k w_k^2

where λ denotes the regularization coefficient of the technique. A sparsity constraint enables the method to learn new features from the data. The sparsity factor, E_sparsity, is estimated by applying the given function

E_sparsity = β Σ_j KL(ρ || ρ_j)

where β implies the sparsity weight and KL(ρ || ρ_j) represents the Kullback-Leibler divergence, which is provided by

KL(ρ || ρ_j) = ρ log(ρ / ρ_j) + (1 − ρ) log((1 − ρ) / (1 − ρ_j))

where the sparsity parameter is provided by ρ, and ρ_j is the average activation measure of the j-th hidden neuron, computed by employing

ρ_j = (1/N) Σ_{n=1..N} f_j(x(n))

where f_j(x(n)) is the activation of the j-th neuron for input x(n).
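The combined objective E = E_mse + E_reg + E_sparsity can be evaluated directly from a batch of inputs, reconstructions, and hidden activations. This is a sketch of the error computation only, not the training loop, and the default λ, β, and ρ values are illustrative assumptions.

```python
import math

def kl(rho, rho_j):
    """KL divergence between Bernoulli(rho) and Bernoulli(rho_j)."""
    return (rho * math.log(rho / rho_j)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_j)))

def ae_objective(x, x_rec, hidden_acts, weights,
                 lam=1e-3, beta=3.0, rho=0.05):
    """E = E_mse + E_reg + E_sparsity for one AE.

    x, x_rec    : N input vectors and their reconstructions
    hidden_acts : N hidden-activation vectors (one per sample)
    weights     : flat list of all AE weights
    """
    n = len(x)
    # mean squared reconstruction error over e_i = x(i) - x'(i)
    e_mse = sum(sum((a - b) ** 2 for a, b in zip(xi, ri))
                for xi, ri in zip(x, x_rec)) / n
    # L2 weight decay, scaled by the regularization coefficient lambda
    e_reg = lam / 2 * sum(w * w for w in weights)
    # average activation rho_j of each hidden neuron over the N samples
    n_hidden = len(hidden_acts[0])
    rho_j = [sum(h[j] for h in hidden_acts) / n for j in range(n_hidden)]
    e_sparsity = beta * sum(kl(rho, r) for r in rho_j)
    return e_mse + e_reg + e_sparsity
```

When the reconstruction is perfect, the weights are zero, and every hidden neuron's average activation equals ρ, all three terms vanish.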

The Stacked Autoencoder

A deep network that applies AEs is developed by cascading the encoder layers as given in Fig. 15.3. Recall that the mapping of the SAE built from AEs is shown as

x'(i) = f_SAE(x(i)) = f_AE^(L)( ... f_AE^(1)(x(i)) ... )

where the SAE function is given as f_SAE. For every layer of the SAE, only the encoder function is employed; significantly, the decoding function is not used for any layer.
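Cascading only the encoder functions, as described, amounts to composing the per-layer mappings; a minimal sketch, with each encoder represented as a plain function:

```python
def stack_encoders(encoders):
    """Compose per-layer encoder functions into the SAE mapping f_SAE.

    Each element of `encoders` maps one layer's input to its hidden code;
    the decoders used during layer-wise pretraining are discarded here.
    """
    def f_sae(x):
        for enc in encoders:
            x = enc(x)
        return x
    return f_sae
```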

The Softmax Layer

Softmax classification is a multiclass classification scheme that uses logistic regression (LR) to classify the data. The softmax layer employs a supervised learning model that utilizes an upgraded LR for classifying several classes; hence, the softmax classifier is based on LR. For multiclass classification problems, the softmax classifier calculates the probability of every class when it performs data classification, and therefore the probabilities sum to 1. The softmax function performs the exponentiation and normalization


FIGURE 15.3 Stacked autoencoder with L layers.


FIGURE 15.4 Sensitivity analysis.

that are used to find the class probabilities. Hence, the softmax layer, with function f_SC, is embedded on top of the SAE.
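The exponentiation-and-normalization step can be written out directly; the max-subtraction is a standard numerical-stability detail, not something the chapter specifies:

```python
import math

def softmax(z):
    """Exponentiate, then normalize the scores z so the class
    probabilities are positive and sum to 1."""
    m = max(z)                      # shift by the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]
```

Equal scores yield equal probabilities, and the outputs always form a valid probability distribution regardless of the scale of z.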
