# Particle Swarm Optimization Algorithm

PSO is a stochastic optimization method based on swarm intelligence. It was first proposed by Kennedy and Eberhart (Eberhart & Kennedy, 1995a, 1995b), who recognized the flocking behavior of birds as a principle for optimization. Each particle in the population (swarm) is represented by two vectors, namely position and velocity. These vectors are updated based on the past experience of the particle and its neighbors. The inertial, cognitive and social components, which play a major role in the effectiveness and performance of PSO, update the vectors iteratively. Many modified versions of PSO have been applied to various domains in the literature. The PSO algorithm is outlined in Figure 6.3.

Each variable *i* is represented by *nop* (number of populations)-dimensional position and velocity vectors. Both vectors are initialized randomly within the search space, and the objective function corresponding to each population is then calculated. The best position of variable *i* over the generation cycles is known as the individual best position (p_{best,i}), while the position of the best variable in the entire population is termed the global best position (g_{best}). The best positions are usually decided based on the minimum objective function values. The position (X_i) and velocity (V_i) of particle *i* at iteration *k* + 1 are updated by
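As a sketch of this initialization step (using NumPy; the sphere objective, the bounds, and the values nop = 20 and n = 3 are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n = 3      # number of variables (illustrative)
nop = 20   # number of populations (particles, illustrative)
x_lb = np.full(n, -5.0)  # hypothetical lower bounds
x_ub = np.full(n, 5.0)   # hypothetical upper bounds

# Random initialization of position and velocity within the search space
X = x_lb + rng.random((nop, n)) * (x_ub - x_lb)
V = np.zeros((nop, n))   # velocities often start at zero or small random values

def objective(x):
    # Hypothetical objective: sphere function, minimized at the origin
    return float(np.sum(x**2))

# Evaluate each population and record the initial best positions
f = np.array([objective(x) for x in X])
p_best = X.copy()          # individual best positions, p_best_i
p_best_f = f.copy()        # corresponding objective values
g_best = X[np.argmin(f)]   # global best position, g_best
```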

$$V_i^{k+1} = \omega V_i^k + c_1 r_1 \left( p_{best,i} - X_i^k \right) + c_2 r_2 \left( g_{best} - X_i^k \right)$$

$$X_i^{k+1} = X_i^k + V_i^{k+1}$$

where c_1 and c_2 are learning factors representing the stochastic acceleration term weighting. Generally, c_1 = c_2 = 2, and r_1 and r_2 are random numbers generated separately from 0 to 1. p_{best,i} represents the best position of variable *i* till the *k*th iteration, while g_{best} is the best global position in the swarm till the *k*th iteration. ω is the inertia weight term providing a balance between global and local exploration ability. Among the various inertia term mechanisms proposed by different authors, a simple mechanism with a Linear Decreasing Inertia Weight (LDIW) is proposed as follows:

$$\omega = \omega_{max} - \left( \frac{\omega_{max} - \omega_{min}}{iter_{max}} \right) iter$$

where ω_max = 0.9 and ω_min = 0.4, and iter_max is the maximum number of iterations decided by the user.
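The LDIW schedule amounts to a one-line helper (a sketch assuming the weight decreases linearly from ω_max = 0.9 at the first iteration to ω_min = 0.4 at iter_max):

```python
def ldiw(iteration, iter_max, w_max=0.9, w_min=0.4):
    """Linear Decreasing Inertia Weight: w_max at iteration 0, w_min at iter_max."""
    return w_max - (w_max - w_min) * iteration / iter_max
```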

Arumugam and Rao (2008) proposed another strategy, in which the inertia weight and acceleration coefficient are based on the global and local best values. The Global-average Local best Inertia Weight (GLbestIW) for variable *i* is

$$\omega_i = 1.1 - \frac{g_{best}}{\left( p_{best,i} \right)_{average}}$$

*Particle Swarm Optimization and Application to Liquid-Liquid Equilibrium* 145

FIGURE 6.3

Particle swarm optimization algorithm.

and the Global-Local best Acceleration Coefficient (GLbestAC) is

$$c_i = 1 + \frac{g_{best}}{p_{best,i}}$$

Here, (p_{best,i})_{average} is the average of all the personal best values in a specific generation. The velocity (V_i) of particle *i* is updated by

$$V_i^{k+1} = \omega_i V_i^k + c_i r \left( p_{best,i} + g_{best} - 2 X_i^k \right) \quad (6.20)$$

where *r* is a random number generated from 0 to 1. The updated velocity from Equation 6.20 is then used to calculate the new position of the particle.
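A minimal sketch of this GLbest-style update in NumPy, assuming the inertia weight and acceleration coefficients are computed from objective (best) values while the attraction term uses best positions; the function name and array layout are illustrative:

```python
import numpy as np

def glbest_update(X, V, p_best_x, g_best_x, p_best_f, g_best_f, rng):
    """One GLbest-style PSO velocity/position update (sketch).

    p_best_x : (nop, n) personal best positions;  g_best_x : (n,) global best position
    p_best_f : (nop,)   personal best values;     g_best_f : scalar global best value
    """
    # GLbestIW: inertia weight from the global best and average personal best values
    w = 1.1 - g_best_f / np.mean(p_best_f)
    # GLbestAC: per-particle acceleration coefficient, shape (nop,)
    c = 1.0 + g_best_f / p_best_f
    r = rng.random(X.shape)
    V_new = w * V + c[:, None] * r * (p_best_x + g_best_x - 2.0 * X)
    return X + V_new, V_new
```

Note that a particle sitting exactly midway between nowhere, i.e., with p_best,i + g_best = 2X_i, receives no attraction pull in this formulation.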

Position and velocity bounds are applied to the updated vectors to keep the particles within the search space. Position bounds are decided from the problem variable bounds. The velocity is clamped within

[−V_{max}, V_{max}], where V_{max} is given by

$$V_{max} = \frac{X_{UB} - X_{LB}}{2}$$

where X_{UB} and X_{LB} represent the upper and lower limits for variable *X*, respectively. Velocity bounds can be varied based on the necessity of the problem. Objective functions are then evaluated at the updated positions and compared with past function values.
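The clamping step can be sketched as follows (assuming the half-range rule for V_max described above; the function name is illustrative):

```python
import numpy as np

def clamp(X, V, x_lb, x_ub):
    """Clamp velocity to [-V_max, V_max] and position to the variable bounds."""
    v_max = (x_ub - x_lb) / 2.0        # half-range rule for V_max (assumed)
    V = np.clip(V, -v_max, v_max)      # velocity bound
    X = np.clip(X, x_lb, x_ub)         # position bound from the problem variables
    return X, V
```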

The best values (p_{best,i} and g_{best}) are improved upon continuously with each iteration. The optimization continues till the termination criterion is met. The termination criterion is generally the maximum number of iterations, iter_{max}, which is chosen based on the number of variables *n* and the number of populations *nop*.
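Putting the pieces together, the whole loop can be sketched end to end with LDIW inertia, velocity clamping, and an iteration-count termination criterion (all parameter values and the sphere test function are illustrative assumptions):

```python
import numpy as np

def pso(objective, x_lb, x_ub, nop=30, iter_max=200, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO sketch: LDIW inertia, clamped velocity, fixed iteration budget."""
    rng = np.random.default_rng(seed)
    n = len(x_lb)
    # Random initialization within the search space
    X = x_lb + rng.random((nop, n)) * (x_ub - x_lb)
    V = np.zeros((nop, n))
    v_max = (x_ub - x_lb) / 2.0                      # assumed half-range V_max
    f = np.array([objective(x) for x in X])
    p_best, p_best_f = X.copy(), f.copy()
    g_idx = np.argmin(f)
    g_best, g_best_f = X[g_idx].copy(), f[g_idx]
    for it in range(iter_max):                       # termination: iteration cap
        w = 0.9 - (0.9 - 0.4) * it / iter_max        # LDIW schedule
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (p_best - X) + c2 * r2 * (g_best - X)
        V = np.clip(V, -v_max, v_max)                # velocity bound
        X = np.clip(X + V, x_lb, x_ub)               # position bound
        f = np.array([objective(x) for x in X])
        improved = f < p_best_f                      # update personal bests
        p_best[improved], p_best_f[improved] = X[improved], f[improved]
        g_idx = np.argmin(p_best_f)                  # update global best
        if p_best_f[g_idx] < g_best_f:
            g_best, g_best_f = p_best[g_idx].copy(), p_best_f[g_idx]
    return g_best, g_best_f

# Illustrative run on the sphere function, minimum at the origin
best_x, best_f = pso(lambda x: float(np.sum(x**2)),
                     np.full(3, -5.0), np.full(3, 5.0))
```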