Introduction

The Genetic Algorithm (GA) was initially developed by John Henry Holland in the 1960s (and further formalized in his 1975 book) with the goal of understanding the phenomenon of natural adaptation, and how this mechanism could be implemented in computer systems to solve complex problems. GA is one of the earliest metaheuristics inspired by the concepts of natural selection and evolution, and it is among the most successful Evolutionary Algorithms (EA) due to its conceptual simplicity and easy implementation (Reeves 2010). This algorithm emulates the evolutionary process of a group of individuals in a restricted environment, in which the fittest individuals have higher chances of survival and, through generations, procreate even fitter offspring.

In GA, the solutions \(\vec{x}_i\) of the population are considered individuals in the environment. These solutions are usually encoded as binary strings (that is, \(x_{i,n} \in \{ 0,1\}\)), which represent the genetic code (chromosomes) of each individual. At each iteration, called a generation, the chromosome population is modified by applying a set of evolutionary operators, namely: parent selection, crossover, mutation, and survival selection. Fig. 1 shows the procedure the GA operators follow to generate M new solutions at each iteration of the evolutionary process. Two different solutions (the parents) are randomly chosen, usually with a bias toward fitter solutions. The pair of solutions exchange their genetic information through the crossover operator, producing two new (offspring) solutions. Each new solution has a small chance of mutating, by randomly flipping some value of its binary string. Once the new population of M solutions is generated, the fitness function is applied to compute their fitness values. Finally, the survival selection step chooses between the old and new generations to maintain a fixed number of solutions in the population, following the analogy of limited resources. Several strategies have been proposed for each of the GA operators, enhancing the capabilities of this algorithm to provide a more balanced search. From this point, the chosen operators are explained with their respective equations.

Figure 1: Genetic Algorithm Flowchart.

Tags:

  • Greedy Selection
  • Sorts the Population
  • No Attraction Based Operator
  • Easy Tuning

Operators

For the selection operation, the selection of M parent solutions (\(\vec{P}_i \mid i=1,2,\ldots,M\)) is performed by a tournament between two randomly selected solutions \(\vec{x}_{s_{1}}\) and \(\vec{x}_{s_{2}}\):

\[ \vec{P}_{i} = \left\{ \begin{matrix} \vec{x}_{s_1}, \; \text{if} \; f(\vec{x}_{s_1}) > f(\vec{x}_{s_2}) \\ \vec{x}_{s_2}, \; \text{otherwise}\;\;\;\;\;\;\;\;\;\;\;\; \end{matrix} \right.\ \mid s_1, s_2 = \text{rand}_{int}(1, M), \; s_1 \neq s_2 \qquad(1)\]
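As a concrete illustration, here is a minimal Python sketch of this binary tournament (assuming maximization, as in Eq. 1; the function name and signature are illustrative, not part of the framework):

```python
import random

def tournament(population, fitnesses, rng=random):
    """Binary tournament (Eq. 1): draw two distinct solutions at random
    and return the fitter one (maximization)."""
    s1, s2 = rng.sample(range(len(population)), 2)  # guarantees s1 != s2
    return population[s1] if fitnesses[s1] > fitnesses[s2] else population[s2]
```

Drawing both indices with `sample` enforces the \(s_1 \neq s_2\) condition of Eq. 1 directly.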

The crossover operation recombines the information of each pair of selected solutions using a binary vector \(\vec{r}\) that selects the recombination pattern, such as:

\[ P^*_{2i + 1, d} = \left\{ \begin{matrix} P_{2i+1, d} \quad \text{if} \; r_d = 1 \;\; \\ P_{2i+2, d} \quad \text{otherwise} \end{matrix} \right.\ , \\ P^*_{2i + 2, d} = \left\{ \begin{matrix} P_{2i+1, d} \quad \text{if} \; r_d = 0 \;\; \\ P_{2i+2, d} \quad \text{otherwise} \end{matrix} \right.\ , \\ \forall i \in [0, 1, \ldots, \frac{M}{2} - 1], \; \forall d \qquad(2)\]

where \(\vec{r}\) is usually formed by selecting a random pivot point \(l\), as:

\[ r_d = \left\{ \begin{matrix} 1 \quad \text{if} \; d < l \;\;\;\; \\ 0 \quad \text{otherwise} \end{matrix} \right.\, l = \text{rand}_{int}(1,D) \qquad(3)\]

The vector \(\vec{r}\) can also be formed by using multiple random pivot points (\(l=[l_1, l_2,\ldots] \mid l_1<l_2<\ldots\)) or by a random binary choice for each dimension, as:

\[ r_d = \text{rand}_{int}(0,1) \qquad(4)\]
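The two ways of building \(\vec{r}\) can be sketched in Python as follows (function names are illustrative; indices are 0-based, and the pivot is restricted to \(1,\ldots,D-1\) so that both segments of the single-point pattern are non-empty):

```python
import random

def single_point_pattern(D, rng=random):
    """Eq. 3: ones up to a random pivot l, zeros afterwards."""
    l = rng.randint(1, D - 1)   # pivot chosen so both segments are non-empty
    return [1 if d < l else 0 for d in range(D)]

def uniform_pattern(D, rng=random):
    """Eq. 4: an independent fair coin flip per dimension."""
    return [rng.randint(0, 1) for _ in range(D)]
```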

The crossover operator is controlled by a meta-parameter \(r_{Crossover}\): the trial solution \(\vec{x}^t_i\) is chosen between \(\vec{P}^*_i\) and \(\vec{P}_i\) with a probability of \(r_{Crossover}\), as:

\[ \vec{x}^t_i = \left\{ \begin{matrix} \vec{P}^*_i \quad \text{if}\; rand < r_{Crossover}\\ \vec{P}_i \quad \text{otherwise} \;\;\;\;\;\;\;\;\;\;\;\;\;\; \end{matrix} \right.\ \]
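Putting Eqs. 2 and 4 together, a rate-controlled uniform crossover can be sketched in Python as follows (the function name and default rate are illustrative):

```python
import random

def crossover(p1, p2, r_crossover=0.9, rng=random):
    """With probability r_crossover, recombine p1 and p2 through a random
    pattern r (Eqs. 2 and 4); otherwise the parents pass through unchanged."""
    if rng.random() >= r_crossover:
        return p1[:], p2[:]
    r = [rng.randint(0, 1) for _ in p1]
    c1 = [a if ri == 1 else b for a, b, ri in zip(p1, p2, r)]
    c2 = [a if ri == 0 else b for a, b, ri in zip(p1, p2, r)]
    return c1, c2
```

Note that the two offspring are complementary: every dimension of \(\vec{P}^*_{2i+1}\) comes from one parent and the same dimension of \(\vec{P}^*_{2i+2}\) from the other.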

The mutation operation introduces a random change in the generated \(\vec{x}^t_i\) solutions, usually flipping a single bit from 1 to 0 or vice versa. Mutation can occur in each dimension of a solution with a probability \(r_{Mutation}\) (typically as low as 0.001), as follows:

\[ x^t_{i,d} = \left\{ \begin{matrix} \overline{x^t_{i,d}} \quad \text{if} \; rand(0,1) < r_{Mutation} \\ x^t_{i,d} \quad \text{otherwise} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \\ \end{matrix} \right.\ , \forall d \qquad(5)\]

where \(\overline{x^t_{i,d}}\) represents the bit flip for binary strings, or a random reassignment within the domain of dimension \(d\) when working with discrete values.
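For binary strings, Eq. 5 reduces to an independent bit flip per dimension; a minimal Python sketch (the function name and default rate are illustrative):

```python
import random

def mutate(x, r_mutation=0.001, rng=random):
    """Eq. 5: flip each bit independently with probability r_mutation."""
    return [1 - xi if rng.random() < r_mutation else xi for xi in x]
```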

As shown in Fig. 1, these three operators take place until a population of \(M\) new solutions has been produced; then, the \(M\) best solutions among the original and new populations are kept for the next generation, while the rest are discarded (Eq. 7). This is known as greedy selection, which requires joining the two populations \(X^k\) and \(X^t\) and sorting the solutions by their fitness value (Eq. 6), in descending order for a maximization problem or ascending order for a minimization problem.

\[ X' = X^k \cup X^t \mid \forall i, f(x'_i) < f(x'_{i-1}), i > 1 \qquad(6)\]

\[ X^{k+1} = [x'_1, x'_2, \ldots, x'_M] \qquad(7)\]
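The greedy survival step of Eqs. 6-7 can be sketched in Python as follows (maximization shown; the name mirrors the keepTheBest routine used later, but the signature is illustrative):

```python
def keep_the_best(population, fitnesses, trial, trial_fitnesses):
    """Eqs. 6-7: join both populations, sort by fitness (descending for
    maximization), and keep the best M solutions."""
    M = len(population)
    joined = sorted(zip(population + trial, fitnesses + trial_fitnesses),
                    key=lambda pair: pair[1], reverse=True)
    survivors = joined[:M]
    return [x for x, _ in survivors], [fx for _, fx in survivors]
```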

In practice, both selection operators seek to maintain elitism in the population; therefore, they enforce exploitation. These operators can be purely greedy or modified to admit some underfit solutions, to avoid losing diversity. On the other hand, mutation is meant to be a strictly explorative operator, which seeks to prevent premature convergence to local optima. Although crossover is usually considered an exploitation operator, recent works have shown that it conveys a balance between exploration and exploitation when the population is dispersed, gradually shifting to exploitation as the population converges (Črepinšek, Liu, and Mernik 2013).

The Code

The implementation considers the General Metaheuristics Framework, which can be found in this GitHub repository. In this case, the GA class overwrites the operators function, which is responsible for generating the new trial solutions at each iteration, evaluating their fitness, and maintaining the population size. The code is divided into the four main operators of GA. First, \(M\) solutions are selected as parents using the tournament function. Then, each pair of parents crosses over their information, using a random crossover pattern, in the crossoverRandi function. The generated trial solutions are mutated over each dimension, according to the mutationRate meta-parameter. Finally, the trial population is evaluated by the fitness function, and the keepTheBest function, implemented in the metaheuristics class, is in charge of maintaining the best \(M\) solutions between the previous population and the newly generated trial population.

function operators(obj)
    % Parent Selection
    parents = zeros(obj.sizePopulation,1);
    for i = 1:obj.sizePopulation
        parents(i) = obj.tournament();
    end

    % Crossover
    trial_population = zeros(size(obj.population));
    for i = 1:floor(obj.sizePopulation/2)
        parent1 = obj.population(parents((i-1)*2+1),:);
        parent2 = obj.population(parents((i-1)*2+2),:);
        [child1, child2] = crossoverRandi(obj, parent1, parent2);
        trial_population((i-1)*2+1,:) = child1;
        trial_population((i-1)*2+2,:) = child2;
    end

    % Mutation (every trial solution, every dimension; Eq. 5)
    for i = 1:obj.sizePopulation
        for j = 1:obj.noDimensions
            if rand < obj.mutationRate
                trial_population(i,j) = randi(obj.rangePerDimension);
            end
        end
    end

    % Population Survival
    trial_fitnesses = obj.evalPopulation(trial_population);
    obj.keepTheBest(trial_population, trial_fitnesses);
end

function index = tournament(obj)
    % Two distinct competitors (s1 ~= s2, as in Eq. 1);
    % the '<' comparison assumes a minimization problem
    temp = randperm(obj.sizePopulation, 2);
    if obj.fitness(temp(1)) < obj.fitness(temp(2))
        index = temp(1);
    else
        index = temp(2);
    end
end

function [child1, child2] = crossoverRandi(obj, parent1, parent2)
    % Uniform crossover (Eq. 4), applied with probability crossoverRate
    if rand < obj.crossoverRate
        temp = randi([0,1],1,obj.noDimensions)==1;
        child1 = parent1;
        child1(temp) = parent2(temp);
        child2 = parent2;
        child2(temp) = parent1(temp);
    else
        child1 = parent1;
        child2 = parent2;
    end
end
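To see the four operators working together outside MATLAB, here is a self-contained Python sketch of the same loop, run on the OneMax problem (maximize the number of ones in a binary string). It follows the maximization convention of Eq. 1, whereas the MATLAB tournament above compares with < for minimization; all names and parameter values are illustrative, not tuned:

```python
import random

def run_ga(D=20, M=20, generations=100, r_cross=0.9, r_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(D)] for _ in range(M)]
    fit = [sum(x) for x in pop]                      # OneMax fitness
    for _ in range(generations):
        trial = []
        while len(trial) < M:
            # Parent selection: binary tournament with distinct competitors
            picked = []
            for _ in range(2):
                a, b = rng.sample(range(M), 2)
                picked.append(pop[a] if fit[a] > fit[b] else pop[b])
            p1, p2 = picked
            # Crossover: uniform pattern, applied with probability r_cross
            if rng.random() < r_cross:
                r = [rng.randint(0, 1) for _ in range(D)]
                c1 = [x if ri else y for x, y, ri in zip(p1, p2, r)]
                c2 = [y if ri else x for x, y, ri in zip(p1, p2, r)]
            else:
                c1, c2 = p1[:], p2[:]
            # Mutation: independent bit flip per dimension
            for c in (c1, c2):
                for d in range(D):
                    if rng.random() < r_mut:
                        c[d] = 1 - c[d]
            trial += [c1, c2]
        trial = trial[:M]
        tfit = [sum(x) for x in trial]
        # Survival: keep the M fittest of both generations (greedy, elitist)
        joined = sorted(zip(pop + trial, fit + tfit),
                        key=lambda p: p[1], reverse=True)
        pop, fit = (list(t) for t in zip(*joined[:M]))
    return max(fit)
```

Because survival selection is elitist, the best fitness never decreases from one generation to the next; on a problem this small the loop quickly approaches the optimum of D.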

References

Črepinšek, Matej, Shih-Hsi Liu, and Marjan Mernik. 2013. “Exploration and Exploitation in Evolutionary Algorithms.” ACM Computing Surveys 45 (3). Association for Computing Machinery (ACM): 1–33. https://doi.org/10.1145/2480741.2480752.

Reeves, Colin R. 2010. “Genetic Algorithms.” In Handbook of Metaheuristics, 109–39. Springer US. https://doi.org/10.1007/978-1-4419-1665-5_5.