“A metaheuristic is formally defined as an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space, learning strategies are used to structure information in order to find efficiently near-optimal solutions.”

Osman and Laporte (1996)

## Introduction

Metaheuristic algorithms are global optimization techniques based on intelligent random search patterns, looking for either the maximum or the minimum possible value of a given objective function. Unlike complete algorithms, heuristics are approximate algorithms which sacrifice the guarantee of finding optimal solutions for the sake of finding "good" solutions in a significantly reduced amount of time (Blum and Roli 2003). This is especially necessary in NP-hard problems, for which no polynomial-time complete algorithm is known or exists (if $P \ne NP$), thus requiring exponential computation time in the worst case.

The term heuristic derives from the Greek verb "to find", and it is associated with fast, problem-specific search algorithms (Blum and Roli 2003). These algorithms either generate solutions from scratch by adding components to an initially empty partial solution until a solution is complete (constructive heuristics), or start from complete initial solutions and improve them locally until a local optimum is found (local search). On the other hand, the term meta refers to algorithms that operate "beyond, or in an upper level" of the heuristic algorithms; the term was first used by Glover (1986). Unlike problem-specific heuristics, metaheuristics are designed to work as black boxes over any given optimization problem with few or no modifications (Brownlee 2011) and to avoid premature convergence, i.e., to escape from local optima (Yang and He 2019). Avoiding premature convergence is achieved either by allowing worsening moves or by generating new starting solutions for the local search in a more "intelligent" way than just providing random initial solutions.

Most novel and popular metaheuristics in the literature are population-based (Fausto et al. 2019). These begin by generating $M$ initial random solutions as the population and evolving them iteratively through a variety of mechanisms, usually inspired by natural phenomena. The random start provides an unbiased sample of the search space, with the objective of identifying potentially "good" regions. Then, the information from the known solutions in the population, along with any other data stored in memory, is used to generate educated guesses (biased sampling), either by exploration (diversification) or exploitation (intensification). Exploration refers to the algorithm's ability to generate solutions in unknown regions of the search space, while exploitation is the refinement of previously obtained "good" solutions (Črepinšek, Liu, and Mernik 2013). The ability of these algorithms to find suitable solutions is mostly attributed to striking the right balance between exploration and exploitation. However, the question of how to achieve the appropriate balance, or even measure it, is still an open issue (Ser et al. 2019). For this reason, multiple metaheuristic methods are usually applied and compared on real-world problems in order to select the one that offers the best solutions for a given problem [REF].

The generation of educated guesses relies on probabilistic decisions made during the search through random variables and a variety of operators. Given the strong dependency of these algorithms on random values, it might seem that they behave similarly to pure random search. However, "the main difference to pure random search is that in metaheuristic algorithms randomness is not used blindly but in an intelligent, biased form" (Stützle 1998).

Given the non-deterministic nature of these algorithms, each execution travels a unique search route and returns a different final solution when the stop criterion is met. This stop criterion is necessary because the value of the optimum is usually unknown and there is no guarantee that it can otherwise be found. The criterion also controls the algorithm's response time, or computational complexity, as it limits the number of times the objective function is evaluated.

In this chapter, we present a general framework for implementing population-based metaheuristic algorithms and compare some well-known and some novel metaheuristic algorithms, using a set of multi-modal benchmark functions and the Channel Allocation (CA) problem in Ultra-Dense Networks (UDNs). The well-known algorithms chosen were Genetic Algorithm (GA), Differential Evolution (DE) and Particle Swarm Optimization (PSO), which are popular metaheuristics known for their simplicity and proven efficiency. The more recent approaches, States of Matter Search (SMS) and Whale Optimization Algorithm (WOA), were selected to test the application of new research in the field of Artificial Intelligence. The application of these algorithms follows the framework and specifications presented in (Fausto et al. 2019), and their implementation is distributed as open source in (Reyna-Orta 2019).

## Population-based Metaheuristics, General Framework

Population-based metaheuristics can be implemented using a general framework, independently of their natural-phenomenon inspiration. This allows their study to focus on the mechanisms that modify the solutions in the population through the iterative process.

Usually, the first step involves the definition of a set of $M$ randomly initialized solutions as follows:

$\vec{x}_i \in X \mid i = 1, 2, \ldots, M \qquad(1)$

such that:

$\vec{x}_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,D}] \; | \; x_{i,d} = R(B^{low}_d, B^{high}_d) \qquad(2)$

where $D$ is the dimensionality of the problem (the number of decision variables or parameters), $\vec{x}_i$ is the $i$-th solution of the population $X$, and $x_{i,d} = R(B^{low}_d, B^{high}_d)$ is a uniform random value bounded by the target solution space (search space) on dimension $d$. The solutions $\vec{x}_i$ (or individuals) represent candidate solutions to be tested on the optimization problem. Each is assigned a corresponding quality value (or fitness) given by the objective function $f(\cdot)$ that describes the optimization problem, such that:

$f_i \in F \mid f_{i} = f\left( \vec{x}_{i} \right),\; i=1,2,\ldots,M \qquad(3)$
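Eqs. 1–3 can be sketched directly as array operations. The following is a minimal illustration (not the chapter's reference implementation); the sphere function stands in for an arbitrary objective $f(\cdot)$:

```python
import numpy as np

def initialize_population(M, D, b_low, b_high, rng):
    """Draw M candidate solutions uniformly at random within the
    per-dimension bounds (Eqs. 1-2)."""
    return rng.uniform(b_low, b_high, size=(M, D))

def sphere(x):
    """Illustrative objective function (an assumption for the example)."""
    return np.sum(x ** 2)

rng = np.random.default_rng(0)
X = initialize_population(M=5, D=3, b_low=-10.0, b_high=10.0, rng=rng)
F = np.array([sphere(x) for x in X])   # fitness vector F (Eq. 3)
```

Any objective with the same vector-in, scalar-out shape can be substituted for `sphere`.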

Using the population $X_{M\times D}$ and fitness $F_{M\times 1}$ information, new candidate solutions are generated by modifying currently available solutions. This is achieved by applying the algorithm operators (usually designed by drawing inspiration from an observed natural phenomenon). For most cases, this process may be illustrated by the following expression:

$\vec{x}_{i}^t = \vec{x}_{i} + \mathrm{\Delta} \vec{x}_{i} \qquad(4)$

where $\vec{x}_{i}^t$ denotes the trial solution generated by adding a specified update vector $\mathrm{\Delta} \vec{x}_{i}$ to $\vec{x}_{i}$. This update vector $\mathrm{\Delta} \vec{x}_{i}$ depends on the specific algorithm operators, which are explained in the remaining sections of this chapter. The generated solutions are evaluated with the fitness function to obtain their corresponding scores, such that $F^t = f(X^t)$.
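As a sketch of Eq. 4, the update vector below is a plain Gaussian perturbation; this is a placeholder of our own choosing, standing in for the algorithm-specific operators described later in the chapter:

```python
import numpy as np

def generate_trials(X, rng, step=0.1):
    """Produce trial solutions x_i^t = x_i + delta_x_i (Eq. 4), using a
    Gaussian perturbation as a stand-in update vector."""
    delta = step * rng.normal(size=X.shape)
    return X + delta

def sphere(x):
    """Illustrative objective function (an assumption for the example)."""
    return np.sum(x ** 2)

rng = np.random.default_rng(1)
X = rng.uniform(-10.0, 10.0, size=(5, 3))
X_t = generate_trials(X, rng)
F_t = np.array([sphere(x) for x in X_t])   # trial scores
```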

Since the bounds on each dimension ($B^{low}_d$, $B^{high}_d$) depend on the problem in question, metaheuristic algorithms can consider a unified domain range in their operators, such as $x_{i,d} \in [0, 1] \; \forall d$. This leaves the scaling of the solution vectors to the fitness function $f(\vec{x}_i)$, separating the metaheuristic logic from the problem logic. This consideration simplifies both the algorithms' equations and their comparison across algorithms.
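One way to realize this separation is to wrap the objective function so that the metaheuristic only ever sees the unit domain. A minimal sketch (the wrapper name and bounds are illustrative assumptions):

```python
import numpy as np

def scaled_fitness(f, b_low, b_high):
    """Wrap an objective f so the metaheuristic can operate in [0, 1]^D;
    the wrapper rescales each unit-domain vector to the real bounds
    before evaluating f, keeping problem logic out of the algorithm."""
    b_low = np.asarray(b_low, dtype=float)
    b_high = np.asarray(b_high, dtype=float)

    def wrapped(u):
        x = b_low + u * (b_high - b_low)   # map [0, 1] -> [B_low, B_high]
        return f(x)

    return wrapped

def sphere(x):
    """Illustrative objective function (an assumption for the example)."""
    return np.sum(x ** 2)

# Hypothetical per-dimension bounds for a 2-D problem
f01 = scaled_fitness(sphere, b_low=[-10.0, -5.0], b_high=[10.0, 5.0])
value = f01(np.array([0.5, 0.5]))   # center of the unit square -> origin -> 0.0
```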

Finally, most nature-inspired algorithms include some kind of selection process, in which the newly generated solutions are compared against those in the current population $X^{k}$ (with $k$ denoting the current iteration) in terms of solution quality, typically with the purpose of choosing the best individual(s) among them. As a result of this process, a new set of solutions $X^{k+1} = \{\vec{x}_{1}^{k+1}, \vec{x}_{2}^{k+1}, \ldots, \vec{x}_{M}^{k+1}\}$, corresponding to the following iteration (or generation) $k+1$, is generated.
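A common instance of this step is element-wise greedy selection, where each trial solution replaces its counterpart only if it scores better. A sketch (assuming maximization, per the chapter's convention):

```python
import numpy as np

def greedy_selection(X, F, X_t, F_t, maximize=True):
    """Keep, for each index i, whichever of the current and trial
    solutions has the better fitness, yielding X^{k+1} and F^{k+1}."""
    better = F_t >= F if maximize else F_t <= F
    X_next = np.where(better[:, None], X_t, X)   # pick rows by comparison
    F_next = np.where(better, F_t, F)
    return X_next, F_next

# Tiny worked example: only the second trial improves on its parent
X = np.array([[0.0], [1.0]])
F = np.array([3.0, 1.0])
X_t = np.array([[2.0], [4.0]])
F_t = np.array([2.0, 5.0])
X_next, F_next = greedy_selection(X, F, X_t, F_t)   # keeps x_0, adopts x_1^t
```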

This whole process is repeated iteratively until a particular stop criterion is met (e.g., a maximum number of iterations is reached). Once this happens, the best solution found by the algorithm, $\vec{x}^{*}$, is reported as the best approximation to the global optimum. The best solution is the one encountered during the whole optimization process with the maximum fitness value (assuming maximization), which can be expressed as follows:

$f(\vec{x}^{*}) \geq f(\vec{x}_i^k) \; \forall i, k \qquad(5)$

Fig. 1 visually summarizes the whole general framework for population-based metaheuristics, as proposed in (Fausto et al. 2019) and its open-source implementation (Reyna-Orta 2019).
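Putting the steps together, the framework reduces to a short loop: initialize, generate trials, select, and track the best-so-far solution. The sketch below is a minimal instance of this loop in the unit domain, with a decaying Gaussian perturbation as a placeholder operator (our assumption, not any of the specific algorithms compared later):

```python
import numpy as np

def metaheuristic(f, M, D, iters, rng, step=0.1, maximize=True):
    """Minimal instance of the general population-based framework:
    random initialization (Eqs. 1-2), fitness evaluation (Eq. 3),
    trial generation (Eq. 4, with a Gaussian placeholder operator),
    greedy selection, and best-so-far reporting (Eq. 5)."""
    X = rng.uniform(0.0, 1.0, size=(M, D))           # Eqs. 1-2
    F = np.apply_along_axis(f, 1, X)                 # Eq. 3
    for _ in range(iters):
        # Eq. 4: perturb and keep trials inside the unit domain
        X_t = np.clip(X + step * rng.normal(size=X.shape), 0.0, 1.0)
        F_t = np.apply_along_axis(f, 1, X_t)
        better = F_t >= F if maximize else F_t <= F  # selection
        X = np.where(better[:, None], X_t, X)
        F = np.where(better, F_t, F)
        step *= 0.99                                 # slowly shift to exploitation
    i_best = np.argmax(F) if maximize else np.argmin(F)
    return X[i_best], F[i_best]                      # Eq. 5

# Maximize a simple unimodal objective peaked at (0.7, 0.7) (illustrative)
f = lambda x: -np.sum((x - 0.7) ** 2)
rng = np.random.default_rng(42)
x_best, f_best = metaheuristic(f, M=20, D=2, iters=200, rng=rng)
```

Swapping the perturbation line for algorithm-specific operators (crossover and mutation, velocity updates, etc.) recovers the concrete methods compared in this chapter.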

Blum, Christian, and Andrea Roli. 2003. “Metaheuristics in Combinatorial Optimization.” ACM Computing Surveys 35 (3). Association for Computing Machinery (ACM): 268–308. https://doi.org/10.1145/937503.937505.

Brownlee, Jason. 2011. Clever Algorithms : Nature-Inspired Programming Recipes. Place of publication not identified: Lulu.

Črepinšek, Matej, Shih-Hsi Liu, and Marjan Mernik. 2013. “Exploration and Exploitation in Evolutionary Algorithms.” ACM Computing Surveys 45 (3). Association for Computing Machinery (ACM): 1–33. https://doi.org/10.1145/2480741.2480752.

Fausto, Fernando, Adolfo Reyna-Orta, Erik Cuevas, Ángel G. Andrade, and Marco Perez-Cisneros. 2019. “From Ants to Whales: Metaheuristics for All Tastes.” Artificial Intelligence Review, January. Springer Nature. https://doi.org/10.1007/s10462-018-09676-2.

Glover, Fred. 1986. “Future Paths for Integer Programming and Links to Artificial Intelligence.” Computers & Operations Research 13 (5). Elsevier BV: 533–49. https://doi.org/10.1016/0305-0548(86)90048-1.

Osman, Ibrahim H., and Gilbert Laporte. 1996. “Metaheuristics: A Bibliography.” Annals of Operations Research 63 (5). Springer Science; Business Media LLC: 511–623. https://doi.org/10.1007/bf02125421.

Reyna-Orta, Adolfo. 2019. “Aeroreyna/Aisearchmatlab: Alpha Working.” Zenodo. https://doi.org/10.5281/zenodo.3247876.

Ser, Javier Del, Eneko Osaba, Daniel Molina, Xin-She Yang, Sancho Salcedo-Sanz, David Camacho, Swagatam Das, Ponnuthurai N. Suganthan, Carlos A. Coello Coello, and Francisco Herrera. 2019. “Bio-Inspired Computation: Where We Stand and What’s Next.” Swarm and Evolutionary Computation 48 (August). Elsevier BV: 220–50. https://doi.org/10.1016/j.swevo.2019.04.008.

Stützle, Thomas G. 1998. “Local Search Algorithms for Combinatorial Problems: Analysis.”

Yang, Xin-She, and Xing-Shi He. 2019. “Nature-Inspired Algorithms.” In SpringerBriefs in Optimization, 21–40. Springer International Publishing. https://doi.org/10.1007/978-3-030-16936-7_2.