In 2016, Seyedali Mirjalili and Andrew Lewis proposed a novel bio-inspired metaheuristic called the Whale Optimization Algorithm (WOA). The algorithm is inspired by the predatory behavior of humpback whales, which employ a unique cooperative hunting maneuver known as bubble-net feeding. In this maneuver, a group of whales coordinates its efforts to trap a school of small prey by swimming in a spiral pattern around it while exhaling bursts of air that form an encircling bubble barrier, preventing the prey from escaping. The prey is corralled into an ever tighter circle until, at a given point, all individuals simultaneously swim to the surface with their mouths open to feed on the trapped prey.

Figure 1: Whale Optimization Algorithm Flowchart.

In WOA, the whales are represented by the solutions in the population (\(\vec{x}_i \in X\)), while the best solution found so far (\(\vec{x}^*\)) represents the prey. In every iteration, each solution updates its position using one of three operators: encircling the prey, attacking the prey, or searching for prey. These represent three different forms of attraction between solutions: a spiral (encircling) attraction toward the best solution \(\vec{x}^*\), and two linear attractions, one toward the best solution \(\vec{x}^*\) and one toward a random solution in the population \(\vec{x}_{r_1}\), as shown in Fig. 1. The algorithm is designed to give equal probability to the encircling operator and the two linear ones, using a uniform random variable \(\varphi\). The selection between the two linear operators, however, is performed by the random variable \(A\), whose domain linearly decreases from \([0,2]\) to \([0,0]\) over the course of the iterations, as shown in Eq. 1. This favors the attraction to random solutions at the beginning of the run (\(A > 1\)) and gradually shifts toward the current best solution as attractor (\(A \leq 1\)).

\[ A = 2 \left( 1-\frac{k}{K} \right) rand(0,1) \qquad(1)\]
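The shrinking domain of \(A\) can be checked with a minimal Python/NumPy sketch (not the authors' code; `sample_A` and the iteration counts are illustrative names and values):

```python
import numpy as np

def sample_A(k, K, rng):
    """Draw A per Eq. 1: uniform on [0, 2(1 - k/K)]."""
    return 2.0 * (1.0 - k / K) * rng.random()

rng = np.random.default_rng(0)
K = 100
early = [sample_A(0, K, rng) for _ in range(1000)]   # domain [0, 2]: A > 1 possible
late  = [sample_A(90, K, rng) for _ in range(1000)]  # domain [0, 0.2]: only A <= 1
```

Early in the run some samples exceed 1 (selecting the random-solution attractor), while near the end all samples fall below 1, so only the best-solution attractor remains reachable.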

In the linear attraction operators, the solutions in the population are updated by applying the following equation:

\[ \vec{x}_{i}^{k + 1} = \left\{ \begin{matrix} \vec{x}^{r_1} - A \cdot \left| rand(0,1) \cdot \vec{x}^{r_1} - \vec{x}_{i}^{k} \right| \quad \text{if} \; A > 1\\ \;\;\;\; \vec{x}^* - A \cdot \left| rand(0,1) \cdot \vec{x}^* - \vec{x}_{i}^{k} \right| \quad \text{otherwise} \end{matrix} \right.\ \qquad(2)\]

where \(\vec{x}^{r_1}\) denotes a mixture of randomly chosen solutions, one per dimension, i.e., \(x_d^{r_1} = x_{r,d}\) with \(r = rand(1,M)\) drawn independently for each dimension \(d\).
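The linear attraction of Eq. 2, including the per-dimension mixture \(\vec{x}^{r_1}\), can be sketched in Python/NumPy as follows (all names are illustrative; the random factor is drawn per element here, and the original paper uses \(C = 2 \cdot rand\) rather than \(rand(0,1)\)):

```python
import numpy as np

def linear_attraction(pop, best, k, K, rng):
    """One linear-attraction update (Eq. 2) over the whole population.

    pop: (M, D) population; best: (D,) best solution found so far.
    This is a sketch under assumed shapes, not the reference implementation.
    """
    M, D = pop.shape
    A = 2.0 * (1.0 - k / K) * rng.random(M)      # Eq. 1, one A per solution
    C = rng.random((M, D))                       # rand(0,1) factor in Eq. 2
    # Per-dimension mixture: element (i, d) comes from a random solution's dimension d
    r = rng.integers(0, M, size=(M, D))
    x_r1 = pop[r, np.arange(D)]
    # Attractor per Eq. 2: random mixture if A > 1, otherwise the best solution
    attractor = np.where((A > 1)[:, None], x_r1, best)
    return attractor - A[:, None] * np.abs(C * attractor - pop)

rng = np.random.default_rng(1)
pop = rng.random((5, 3))
new_pop = linear_attraction(pop, pop[0], k=10, K=100, rng=rng)
```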

On the other hand, for the encircling operator, each solution moves in a spiraling fashion around the current best solution, updating its position as follows:

\[ \vec{x}_{i}^{k + 1} = \left| \vec{x}^* - \vec{x}_{i}^{k} \right| \cdot \left( e^{L} \cdot \cos\left( 2\pi L \right) \right) + \vec{x}^* \qquad(3)\]

where \(e^{L} \cdot \cos\left( 2\pi L \right)\) represents the logarithmic spiral model, whose shape is controlled by the random variable \(L \in \lbrack -2, 1\rbrack\), computed as in Eq. 4.

\[ L = -\left( 2 + \frac{k}{K} \right) \cdot rand(0,1) + 1 \qquad(4)\]
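A compact Python/NumPy sketch of the spiral update (Eqs. 3 and 4); shapes and names are assumptions, not the authors' code:

```python
import numpy as np

def spiral_attraction(pop, best, k, K, rng):
    """Spiral (encircling) update of Eqs. 3-4.

    pop: (M, D) population; best: (D,) best solution. Illustrative sketch.
    """
    M = pop.shape[0]
    L = -(2.0 + k / K) * rng.random(M) + 1.0         # Eq. 4
    spiral = np.exp(L) * np.cos(2.0 * np.pi * L)     # logarithmic spiral factor
    return np.abs(best - pop) * spiral[:, None] + best  # Eq. 3

rng = np.random.default_rng(2)
pop = rng.random((4, 2))
best = pop[0]
new_pop = spiral_attraction(pop, best, k=0, K=100, rng=rng)
```

Note that the best solution itself is a fixed point of this operator: its distance to \(\vec{x}^*\) is zero, so it maps back onto \(\vec{x}^*\).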

In contrast to other algorithms, WOA maintains a balanced strategy between exploration and exploitation throughout the evolutionary process. An advantage of WOA is that no meta-parameter needs to be selected. Two characteristics of WOA's linear attraction operator are worth noting: the mixture of solutions across dimensions \(\vec{x}^{r_1}\), and the fact that the random factor does not scale the distance between solutions, but rather the position of the selected solution (a random one or the current best) relative to the origin of the search space.
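The second characteristic can be verified numerically: scaling the attractor before taking the distance, \(|C \cdot \vec{x}^* - \vec{x}_i|\), is not the same as scaling the distance itself, \(C \cdot |\vec{x}^* - \vec{x}_i|\). A small self-contained check (the values are arbitrary):

```python
import numpy as np

x_best = np.array([10.0, -4.0])
x_i = np.array([8.0, -3.0])
C = 0.5

# WOA's form: C rescales the attractor relative to the search-space origin
scaled_attractor = np.abs(C * x_best - x_i)  # [3. , 1. ]
# A pure distance scaling, for contrast
scaled_distance = C * np.abs(x_best - x_i)   # [1. , 0.5]
```

The two results differ, so the random factor effectively perturbs where the attractor sits, not merely how far the solution travels toward it.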

The Code

function operators(self) % vectorized implementation
    t = self.actualIteration;
    a = 2 * (1 - t / self.maxNoIterations);

    % Random mixture of solutions: one random row index per dimension
    self.random_pop_dim = zeros(self.sizePopulation, self.noDimensions);
    for i = 1:self.noDimensions
        self.random_pop_dim(:,i) = randperm(self.sizePopulation);
    end
    self.random_pop_dim = self.random_pop_dim(randperm(self.sizePopulation), :);

    % Circular (spiral) attraction
    b = 1; % spiral shape constant in Eq. (3)
    l = -(2 + t / self.maxNoIterations) * rand(1, self.sizePopulation) + 1; % Eq. (4)
    l = repmat(exp(b*l') .* cos(l'*2*pi), 1, self.noDimensions);
    p = rand(1, self.sizePopulation) < 0.5; % uniform operator selection (phi)

    distance2Leader = abs(self.bestSolution - self.population);
    trial_population = distance2Leader .* l + self.bestSolution; % Eq. (3)

    % Linear attraction
    A = a * rand(1, self.sizePopulation); % Eq. (1)
    A2 = repmat(A', 1, self.noDimensions);
    C = 2 * rand(1, self.sizePopulation); % random factor (C = 2*rand in the paper)
    C2 = repmat(C', 1, self.noDimensions);

    % Convert per-dimension row indices into linear indices before indexing
    cols = repmat(1:self.noDimensions, self.sizePopulation, 1);
    X_rand = self.population(sub2ind(size(self.population), self.random_pop_dim, cols));
    D_X_rand = abs(C2 .* X_rand - self.population);
    A_population = X_rand - A2 .* D_X_rand;

    % C' * bestSolution is an outer product: row i equals C(i) * bestSolution
    D_Leader = abs(C' * self.bestSolution - self.population);
    nonA_population = self.bestSolution - A2 .* D_Leader;

    % Select between linear attractions (Eq. 2)
    A_population(abs(A) < 1, :) = nonA_population(abs(A) < 1, :);

    % Select final trial population
    trial_population(p, :) = A_population(p, :);

    self.population = self.checkBoundsToroidal(trial_population);
end


Mirjalili, Seyedali, and Andrew Lewis. 2016. “The Whale Optimization Algorithm.” Advances in Engineering Software 95 (May). Elsevier BV: 51–67.