Devised by James Kennedy and Russell C. Eberhart in 1995, Particle Swarm Optimization (PSO) draws inspiration from the behavior of flocking birds collectively foraging for food sources [REF]. PSO is currently the second most popular population-based metaheuristic (Fausto et al. 2019), a popularity owed in part to its simplicity, as shown in Fig. 1. In PSO, solutions (also referred to as particles) are composed of a set of three \(D\)-dimensional vectors: the particle's current position (\(\vec{x}_i\)), its historical best position (\(\vec{x}^*_i\)), and its velocity (\(\vec{v}_i\)). Exploitation is performed by attracting the solutions in the population toward the global best solution \(\vec{x}^*\) (note the lack of index) and their personal best \(\vec{x}^*_i\), modifying their velocities according to the distance to these two “good” known points. Therefore, in each iteration of the optimization process, the velocity vectors are recalculated (Eq. 2), the solutions' positions are updated by applying the corresponding velocities (Eq. 1), and the memory of personal best positions is updated (Eq. 3). Unlike other approaches, PSO does not apply a selection operator: every newly generated trial solution is kept in the population.

Figure 1: Particle Swarm Optimization Flowchart.

In PSO, at each iteration (\(k+1\)) a new set of \(M\) trial solutions is generated by updating the positions of the current population as follows:

\[ \vec{x}_{i}^{k + 1} = \vec{x}_{i}^{k} + \vec{v}_{i} \qquad(1)\]

where \(\vec{v}_{i}\) stands for the updated velocity of particle \(i\), given as follows:

\[ \vec{v}_{i} = \omega \cdot \vec{v}_{i} + c_{1} \cdot r_{1} \cdot \left( \vec{x}_{i}^{*} - \vec{x}_{i}^{k} \right) + c_{2} \cdot r_{2} \cdot \left( \vec{x}^{*} - \vec{x}_{i}^{k} \right) \qquad(2)\]

where \(r_{1}\) and \(r_{2}\) each denote a uniform random \(D\)-dimensional vector with components drawn from \(\left\lbrack 0, 1 \right\rbrack\) (their products with the difference vectors are element-wise), \(\omega\) is the inertia weight scaling the previous velocity, and the values \(c_{1}\) and \(c_{2}\), the meta-parameters of PSO, are known as the cognitive and social parameters, respectively. Finally, the set of personal best positions is updated (assuming a minimization problem) as:

\[ \vec{x}_i^* = \begin{cases} \vec{x}_i^{k+1} & \text{if } f(\vec{x}_i^{k+1}) < f(\vec{x}_i^*) \\ \vec{x}_i^* & \text{otherwise} \end{cases} \quad \forall \; \vec{x}_i^{k+1} \in X^{k+1} \qquad(3)\]
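
To make Eqs. (1)–(3) concrete, the following minimal, self-contained MATLAB sketch applies them to minimize the sphere function \(f(\vec{x}) = \sum_d x_d^2\). The function, the parameter values (\(\omega = 0.7\), \(c_1 = c_2 = 1.5\), \(M\), \(D\), the iteration budget) and all variable names are illustrative assumptions, not values prescribed by the text:

% Minimal PSO sketch for Eqs. (1)-(3), minimizing the sphere function.
% All parameter values below are illustrative assumptions.
f = @(X) sum(X.^2, 2);             % fitness: one row per particle
M = 30;  D = 5;  kMax = 200;       % population size, dimensions, iterations
w = 0.7; c1 = 1.5; c2 = 1.5;       % inertia, cognitive, social parameters

X  = 4*rand(M, D) - 2;             % initial positions in [-2, 2]^D
V  = zeros(M, D);                  % initial velocities
Xp = X;  fp = f(Xp);               % personal bests and their fitness
[fg, g] = min(fp);  Xg = Xp(g,:);  % global best and its fitness

for k = 1:kMax
    r1 = rand(M, D);  r2 = rand(M, D);   % fresh draw per particle and dimension
    V = w*V + c1*r1.*(Xp - X) + c2*r2.*(Xg - X);  % Eq. (2); Xg expands row-wise
    X = X + V;                                    % Eq. (1)
    fx = f(X);
    better = fx < fp;                             % Eq. (3), minimization
    Xp(better,:) = X(better,:);  fp(better) = fx(better);
    [fg, g] = min(fp);  Xg = Xp(g,:);             % refresh global best
end
fprintf('best fitness after %d iterations: %g\n', kMax, fg);

Note that \(r_1\) and \(r_2\) are drawn per dimension (as \(M \times D\) matrices), matching their vector definition in Eq. (2); collapsing them to a single scalar per particle changes the search behavior.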

Because PSO does not include a selection step, no trial solution is ever discarded, which favors exploration. However, the balance between exploration and exploitation depends mostly on the algorithm's meta-parameters, which dictate the strength of attraction toward the global and personal best-known solutions as well as the velocity inertia; the performance of PSO is therefore heavily affected by the choice of these parameters. PSO is also known to converge prematurely to local optima, for which several modifications have been proposed (Bonyadi, Michalewicz, and Li 2014). Given the vector nature of Eq. 1 and Eq. 2, the computational implementation of PSO is efficient and straightforward, which simplifies GPU-parallel implementations. However, it requires roughly three times the memory of algorithms that store only the current population, since it must keep the sets \(X\), \(X^*\), and \(V\); this reduces the maximum number of solutions in the population on GPUs or embedded devices.
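
For instance, the footprint of the three sets scales as \(3 \cdot M \cdot D \cdot\) (bytes per element), which the following quick MATLAB estimate illustrates (the population size and dimensionality are made-up figures; GPU implementations often use single precision to halve the cost):

% Rough memory estimate for the X, X* and V sets (illustrative figures).
M = 1e6;  D = 50;                     % one million particles, 50 dimensions
bytesDouble = 3 * M * D * 8;          % three M-by-D arrays in double precision
bytesSingle = 3 * M * D * 4;          % the same sets in single precision
fprintf('double: %.2f GB, single: %.2f GB\n', ...
        bytesDouble/2^30, bytesSingle/2^30);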

The Code

function operators(self)
    % Lazily initialize velocities and the personal-best memory on first call.
    if isempty(self.velocity)
        self.velocity = zeros(self.sizePopulation, self.noDimensions);
    end
    if isempty(self.bestPersonal)
        % Column 1 stores the fitness; the remaining columns, the position.
        self.bestPersonal = [self.fitness, self.population];
        if self.plotSpecial
            self.historicBP = zeros(self.sizePopulation, ...
                self.noDimensions, self.maxNoIterations);
        end
    end

    % attraction to the personal best solution
    pbAtraction = rand(self.sizePopulation, self.noDimensions) .* ...
        (self.bestPersonal(:,2:end) - self.population);

    % attraction to the global best solution
    bestAtraction = rand(self.sizePopulation, self.noDimensions) .* ...
        (repmat(self.bestSolution, self.sizePopulation, 1) - self.population);

    % update velocity trajectories (Eq. 2)
    self.velocity = self.velocityParam * self.velocity + ...
        self.bestPersonalParam * pbAtraction + ...
        self.bestParam * bestAtraction;

    % update solution positions (Eq. 1)
    self.population = self.population + self.velocity;
    self.checkBoundsToroidal();

    % evaluate fitness and refresh the personal-best memory (Eq. 3)
    self.evalPopulation();
    self.updateBestPersonal();
end

function temp = updateBestPersonal(self)
    % A personal best is replaced when the new fitness is lower
    % (minimization) than the stored one; temp flags the improved rows.
    temp = self.bestPersonal(:,1) > self.fitness;
    self.bestPersonal(temp,:) = [self.fitness(temp), ...
        self.population(temp,:)];
    self.improvementsCount = self.improvementsCount + sum(temp);
end
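
For context, a driver loop for the listing above might look as follows. This is a hypothetical usage sketch: the class name, its constructor arguments, and the bestFitness field are assumptions (only the two methods are shown above, and a MATLAB handle class is assumed, since the methods mutate self):

% Hypothetical usage sketch; class name, constructor and bestFitness
% are assumptions, while maxNoIterations appears in the listing above.
pso = PSO(@(X) sum(X.^2, 2), 50, 10);  % fitness handle, population, dims
for k = 1:pso.maxNoIterations
    pso.operators();                   % one full PSO iteration (Eqs. 1-3)
end
fprintf('best fitness found: %g\n', pso.bestFitness);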

References

Bonyadi, Mohammad Reza, Zbigniew Michalewicz, and Xiaodong Li. 2014. “An Analysis of the Velocity Updating Rule of the Particle Swarm Optimization Algorithm.” Journal of Heuristics 20 (4). Springer Nature: 417–52. https://doi.org/10.1007/s10732-014-9245-2.

Fausto, Fernando, Adolfo Reyna-Orta, Erik Cuevas, Ángel G. Andrade, and Marco Perez-Cisneros. 2019. “From Ants to Whales: Metaheuristics for All Tastes.” Artificial Intelligence Review, January. Springer Nature. https://doi.org/10.1007/s10462-018-09676-2.