MACHINE LEARNING TECHNIQUES

Parallel Backpropagation

Backpropagation is a method for training artificial neural networks and is widely used in prediction (“Backpropagation”, 2015; Han & Kamber, 2001). Although simple backpropagation can be used to train a network, we discuss the parallel backpropagation technique here because predictive analytics deals with very large datasets. Backpropagation focuses on computing the gradient of a loss function with respect to every weight in the neural network. The gradient is then fed back into the network, and the network's weights are updated using the gradient value so that the loss function is minimized (Butler, 2014).
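As a minimal sketch of this gradient-update idea (not the cited authors' code), consider a single linear neuron with squared-error loss: the gradient of the loss with respect to the weight is computed by the chain rule, and the weight is moved against the gradient.

```python
# Minimal sketch of the backpropagation weight-update rule for a single
# linear neuron with squared-error loss L = 0.5 * (w*x - y)**2.
# Names (train_step, lr) are illustrative assumptions.
def train_step(w, x, y, lr=0.1):
    """One gradient-descent update: compute dL/dw, then step against it."""
    pred = w * x
    grad = (pred - y) * x      # dL/dw by the chain rule
    return w - lr * grad       # update weight to reduce the loss

w = 0.0
for _ in range(100):
    w = train_step(w, x=2.0, y=4.0)   # the weight converges toward 2.0
```

Repeating the update drives the loss toward its minimum, which for this toy example is reached at w = 2 (so that w*x = y).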

Backpropagation is a supervised learning algorithm: it knows the desired output for each input value, which makes it straightforward to calculate the gradient of the loss function. Backpropagation is applied over a “multilayer feed-forward network”, a network with one input layer, one or more hidden layers, and an output layer, and containing no cycles. The activation function used by the network's artificial neurons must be differentiable.
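The structure described above can be sketched as follows, assuming a single hidden layer and the sigmoid activation, a common differentiable choice (the function and variable names here are illustrative):

```python
import math

# Sketch of a multilayer feed-forward network: input layer -> one hidden
# layer -> one output neuron, using the differentiable sigmoid activation
# that backpropagation requires.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_deriv(z):
    s = sigmoid(z)
    return s * (1.0 - s)   # derivative exists everywhere, as required

def forward(x, w_hidden, w_out):
    """Propagate input vector x forward through the network (no cycles)."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))
```

Because the activation is differentiable at every point, the chain rule can be applied layer by layer to propagate the output error back to every weight.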

Huge amounts of data are produced continuously from different sources, and analysing the patterns hidden inside such data is a highly tedious task. A parallel version of backpropagation built on the MapReduce framework analyses large datasets efficiently. This method is known as MBBN (MapReduce-Based Backpropagation Neural Network) (Liu et al., 2010).
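The MapReduce pattern behind such parallelization can be illustrated with a toy single-process sketch (this is not the MBBN implementation; shard splitting and function names are assumptions): each mapper computes the gradient over its shard of the training data, the reducer sums the partial gradients, and the driver applies one weight update.

```python
from functools import reduce

# Toy MapReduce-style batch gradient descent for a linear model.
# Map phase: per-shard partial gradients; reduce phase: sum them.
def map_gradient(w, shard):
    """Mapper: gradient of 0.5*(w*x - y)**2 summed over one data shard."""
    return sum((w * x - y) * x for x, y in shard)

def parallel_step(w, shards, lr=0.01):
    partials = [map_gradient(w, s) for s in shards]   # map phase
    total = reduce(lambda a, b: a + b, partials)      # reduce phase
    n = sum(len(s) for s in shards)
    return w - lr * total / n                         # single global update

data = [(x, 3.0 * x) for x in range(1, 9)]   # targets follow y = 3x
shards = [data[:4], data[4:]]                # dataset split across "nodes"
w = 0.0
for _ in range(500):
    w = parallel_step(w, shards)             # w converges toward 3.0
```

Because the batch gradient is a sum over training examples, the map phase parallelizes cleanly across data shards, which is what makes the MapReduce formulation attractive for very large datasets.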

 