Research Article

Levy Flights in Metaheuristics Optimization Algorithms – A Review


ABSTRACT

In recent years, Levy flight (LF) has increasingly been employed as a search mechanism in metaheuristic optimization algorithms (MOA) to solve complex real-world problems. LF-based algorithms have been found to exhibit results superior or equivalent to those of their non-LF counterparts and are expected to be advantageous in situations where no prior information is accessible, the targets are challenging to ascertain, and the distribution of targets is sparse. LF has emerged as an alternative to the Gaussian distribution (GD) for achieving randomization in MOA and has applications in diverse biological, chemical, and physical phenomena. The purpose of this article is to acquaint readers with the applicability of LF in some of the latest optimization algorithms in the field of metaheuristics. The present study also examines the basic principles underlying the working of LF, along with its related properties.

Introduction

Optimization (Rao Citation2009; Tayarani and Bennett Citation2014) is the act of obtaining the best results under a given set of conditions or constraints. In the design and development of any engineering system, engineers have to take many technological and managerial decisions at different levels. The ultimate goal of every such decision is to minimize the effort required as well as to maximize the desired benefits. Optimization can thus be defined as the process of finding the conditions that give the maximum or minimum value of a function while satisfying or respecting certain constraints of the problem under consideration. Mathematically, it can be expressed as (Yang Citation2010a):

(1) minimize_{x ∈ R^n} f_i(x), i = 1, 2, …, M
(2) subject to h_j(x) = 0, j = 1, 2, …, J
(3) g_k(x) ≤ 0, k = 1, 2, …, K
(4) x = (x_1, x_2, …, x_n)^T

where f_i(x), h_j(x), and g_k(x) are functions of the design vector x. Function f_i(x) is called the objective function (Koziel, Leiffson, and Yang Citation2014), cost function, or energy function, and it is to be maximized or minimized. The component x_i of x is called a decision variable or design variable; it can be discrete, continuous, or a combination of the two. The space spanned by the decision variables is called the design space or search space R^n, while the space formed by the objective function values is called the solution space or response space. Function h_j(x) is called an equality constraint, while function g_k(x) is called an inequality constraint of the optimization problem.

Optimization problems are encountered in almost all fields of science and engineering. Traditional optimization techniques are derivative-based, robust, and have proven their effectiveness in solving different types of optimization problems. However, such techniques often encounter limitations such as getting trapped in local minima, high sensitivity to the starting point, excessive computational complexity, and applicability only to particular classes of objective functions (Radosavljević Citation2016). These limitations led to the need for an alternative class of solution methods that could overcome them. Metaheuristics (Blum and Roli Citation2003) are methods developed in this direction. In recent years, metaheuristic algorithms based upon the principles of evolutionary computation (Karafotias, Hoogendoorn, and Eiben Citation2015; Sutcliffe and Neville Citation2014) have been successfully applied to hard (He, Chen, and Yao Citation2015) optimization problems. They are capable of solving general N-dimensional, linear, and nonlinear optimization problems.

Metaheuristics

A metaheuristic is a solution blueprint which makes use of a trial-and-error approach to determine admissible solutions (Hosny Citation2010; Sun, Garibaldi, and Hodgman Citation2012) to a convoluted problem within given time constraints. While the primary motive is to find feasible solutions, the complexity of the problem sometimes makes it impractical to explore every admissible solution within a legitimate timeframe. Further, there is no assurance that the best solutions will be found. The endeavor is to develop an efficient and practical algorithm that works most of the time and is able to produce good-quality solutions. It is expected that among the best-quality solutions, some are nearly optimal, though again there is no guarantee of such optimality (Osman and Laporte Citation1996; Talbi Citation2009; Yang Citation2010b).

The word heuristic has its origin in the Greek word "heuriskein", which means the art of discovering new strategies (rules) to solve problems. The prefix "meta", also Greek, means "upper-level methodology". A metaheuristic is thus defined as an upper-level general methodology that can be used as a guiding strategy in designing the underlying heuristics to solve specific complex optimization problems (Voß et al. Citation1999; Yang Citation2013). However, there are no agreed definitions of heuristics and metaheuristics, and the two terms are used interchangeably in the literature. Some examples of metaheuristic algorithms (Slowik and Kwasnicka Citation2018) are Ant colony optimization (Korouzhdeh, Naddaf, and Nik Citation2017), the Artificial bee colony algorithm (Karaboga and Basturk Citation2007), the Bacterial foraging optimization algorithm (Das et al. Citation2009), the Bat algorithm (Yang Citation2010e), Biogeography-based optimization (Simon Citation2008), Cuckoo search (CS) (Yang and Deb Citation2009), the Firefly algorithm (FA) (Yang Citation2009), the Eagle strategy (ES) (Yang and Deb Citation2010b), the Flower pollination algorithm (FPA) (Yang Citation2012), the Genetic algorithm (GA) (Goldberg Citation1989), and Particle swarm optimization (PSO) (Kennedy and Eberhart Citation1995). These approaches provide near-optimal solutions in a shorter time (Rezende, Lima, and Guimarães Citation2018) when exhaustive search is not a viable method, particularly for NP-hard problems. Further, by the no-free-lunch theorems, if an algorithm 'A' outperforms another algorithm 'B' in searching for the maxima or minima of some objective functions, then algorithm 'B' will outperform 'A' on other objective functions (Yuen, Chow, Zhang, and Lou Citation2016). This implies that, on average, algorithms 'A' and 'B' perform equally well over the space of all possible functions (Ho and Pepyne Citation2001; Wolpert and Macready Citation1997).
Randomization avoids the situation in which the solutions get trapped at local optima, and at the same time it increases the diversity of the solutions. A good combination of selection of the best and randomization ensures that global optimality (Ren, Zhang, and Koh Citation2013) is achieved, and one of the most efficient ways of achieving randomization is the use of a random walk (RW).

Random walk (RW)

A RW is a random process which consists of taking a series of arbitrary steps given by:

(5) S_N = Σ_{i=1}^{N} X_i = X_1 + … + X_N,

where S_N denotes the sum of N consecutive random steps X_i drawn from a random distribution. Equation (5) can also be written as:

(6) S_N = Σ_{i=1}^{N−1} X_i + X_N = S_{N−1} + X_N

which shows that the next state S_N depends only on the current state S_{N−1} and the transition X_N from the existing state to the next state. To find the solution to an optimization problem, we can explore the search space by conducting a RW beginning from an appropriate initial, but random, expected solution. A variety of solutions is often realized via randomization. However, a simple or blind RW is not efficient. To be computationally efficient and potent in exploring new solutions, the best solutions found so far may be retained, and the mobility of the RW should be increased so as to explore the search space more effectively. The RW should be disciplined so that it travels toward optimal solutions more swiftly, rather than departing from the possible best solutions. The search for food by many animals, such as the wolf, operates in a random or quasi-random manner, whereby the next move in the foraging direction depends upon the current location and the transition probability to the next location, and thus constitutes a RW (Labed, Fizazi, Mahi, and Galvan Citation2018).
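Equations (5) and (6) describe nothing more than an accumulation of random steps; a minimal sketch in Python (the uniform step distribution is an illustrative choice, not prescribed by the text) is:

```python
import random

def random_walk(n_steps):
    """Accumulate S_N = X_1 + ... + X_N, returning every partial sum.

    Each state S_N depends only on the previous state S_{N-1} and the
    new random step X_N, as in Equation (6).
    """
    s, path = 0.0, []
    for _ in range(n_steps):
        x_n = random.uniform(-1.0, 1.0)  # one random step X_N
        s += x_n                         # S_N = S_{N-1} + X_N
        path.append(s)
    return path

random.seed(0)
walk = random_walk(1000)
```

Replacing the uniform draw with a heavy-tailed draw is all that separates this blind walk from the Levy flights discussed below.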

As RWs are widely used for randomization and local search, a proper step size is very important. The step size in a RW determines how much distance a random walker can travel in a given number of iterations. If it is too large, the new solution generated will be too far away from the old solution, and such a move is unlikely to be accepted. If it is too small, the variation is too minute to be of practical significance, and such a search is again not efficient. So a proper step size is important to maintain an efficient search throughout the search space (Yang Citation2014).

From the theory of simple isotropic RWs, the average distance r travelled in a d-dimensional space is given by:

(7) r² = 2dDt

where D is the effective diffusion coefficient, given by:

(8) D = s²/(2τ)

where τ is the time taken for each step of size s. Solving Equations (7) and (8), we get:

(9) s² = τr²/(td)

As the iterations are discrete, we can take τ = 1, and for a typical length scale L of a dimension of interest, the local search is typically limited to a region of about L/10. Therefore, Equation (9) simplifies to:

(10) s = r/√(td) = (L/10)/√(td)

The step size in a RW can be determined using Equations (9) and (10). The Levy walk and Brownian motion (BM) are kinds of RW; they are described in the subsequent sections.
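Equation (10) gives a concrete recipe for choosing the step size; a small helper illustrating the calculation (the parameter values below are examples only) might look like:

```python
import math

def local_step_size(L, t, d):
    """Equation (10): s = (L/10) / sqrt(t * d), taking tau = 1.

    L -- typical length scale of the dimension of interest
    t -- number of (discrete) iterations
    d -- dimensionality of the search space
    """
    return (L / 10.0) / math.sqrt(t * d)

s = local_step_size(L=10.0, t=100, d=10)
```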

Levy flight: basic concepts

Levy flights (LFs) are a class of non-Gaussian random processes whose stationary increments are distributed according to a Levy stable distribution (Yang et al. Citation2013a). The term "Levy" in LF originates from the name of the French mathematician Paul Pierre Levy, who first studied the Levy motion. Here, the term flight is defined as the maximum straight-line distance between two points that an object in motion covers without directional variation or halt. In the literature, LF and Levy walk have been used interchangeably. It has been observed that, in scenarios in which birds or animals acquire very little or no knowledge of where resources might actually be present, LF can provide encounters with rare and randomly distributed targets (e.g., prey) more effectively than BM, which is efficient where targets are plentiful and more foreseeable (Yang Citation2014).

LF is a RW whose step size is drawn from the Levy distribution (LD), which is the distribution of a sum of N identically and independently distributed random variables whose Fourier transform takes the following form:

(11) F_N(k) = exp(−N|k|^β)

and the LD is given by the integral:

(12) L(s) = (1/π) ∫₀^∞ cos(τs) e^{−ατ^β} dτ, (0 < β ≤ 2)

where τ is a short time interval during each step s and α is a constant which controls the scaling of the distribution and can take any positive value. For most applications, the value of α is set equal to unity.

Equation (12) can be approximated as:

(13) L(s) ~ s^{−1−β}

which is heavy-tailed. The variance of such a power-law distribution is infinite for 0 < β < 2. The index β is a constant known as the shape parameter; it controls the heaviness of the distribution, i.e., it determines the decay of the tail. A small value of β implies that the tail of the distribution contains a considerable accumulation of the data, i.e., values of the random variable are more likely to be far away from the mean of the distribution. On the other hand, a larger value of β indicates that values of the random variable lie at small distances from the mean of the distribution. Two special cases are β = 1 and β = 2. For β = 1, Equation (12) becomes the Cauchy distribution, and for β = 2 it becomes the normal distribution; in the latter case, LF becomes the standard BM (Yang Citation2010a). In d-dimensional space, the variance of a Brownian RW can be written as:

(14) σ²(t) = |v₀|²t² + 2dDt

where v₀ is the drift velocity of the system and D is the effective diffusion coefficient, which is related to the step size s over a short time interval τ during each step by:

(15) D = s²/(2τ)

BM thus obeys a GD with zero mean and time-dependent variance. A diffusion process obeying the GD can be viewed as a series of BM steps.
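The two special cases can be contrasted empirically: the sketch below samples Gaussian steps (β = 2, Brownian motion) and Cauchy steps (β = 1) and counts the extreme jumps, illustrating the heavy tail. The threshold and sample size are arbitrary illustrative choices.

```python
import math
import random

random.seed(42)
N = 100_000

# beta = 2: Gaussian steps (standard Brownian motion)
gauss_steps = [random.gauss(0.0, 1.0) for _ in range(N)]

# beta = 1: Cauchy steps, drawn via the inverse CDF tan(pi * (U - 1/2))
cauchy_steps = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(N)]

def n_extreme(steps, threshold=10.0):
    """Count steps whose magnitude exceeds the threshold."""
    return sum(1 for s in steps if abs(s) > threshold)
```

In a run of this size, the Gaussian sample essentially never exceeds the threshold, while several percent of the Cauchy steps do: the long jumps that make LF effective for sparse targets.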

Properties and attributes of LF

LF represents a widely used tool in the description of anomalous stochastic processes. By its mathematical definition, LF is Markovian, and its statistical limit distribution emerges from independent, identically distributed random variables by virtue of the generalized central limit theorem. Various properties and attributes of LF are as follows.

Stable distribution

LD is a stable distribution, whereby the sum of two independent random variables each having a stable distribution with index β is again stable with the same index β. This invariance property, however, does not hold for different values of β.

Infinite mean and infinite variance

The mean as well as the variance of LF is infinite (Rehman, Ali, and Khan Citation2017). All stable distributions with fatter tails than the GD have infinite variance (Yang Citation2010b). The key property of an infinite-variance distribution is that the average of independent draws is no less uncertain than a single draw. Tails with infinite mean and variance decay to zero slowly, while the normal distribution goes to zero relatively rapidly.

Heavy tailed probability distribution

LFs are a special class of generalized RW in which the step spans during the walk are described by a heavy- or fat-tailed probability distribution, the LD. The significance of the term "heavy-tailed" here is that the tail of the LD falls off much more gently than that of a GD (Yang et al. Citation2013b). Further, a heavy-tailed distribution is not exponentially bounded.

Inherent ability to escape local optimum

The variance of the LD exhibits divergence characteristics. As a result, enormously long jumps may occur, and such flights are self-similar on all scales, displaying clusters of smaller jumps interspersed with lengthy excursions (Yang and Deb Citation2010b). The advantage of this kind of flight pattern is that it permits effective and efficient search of the distant regions of the function landscape in global optimization problems and helps the algorithm escape local minima, particularly when the function landscape has deep and widespread basins.

Faster algorithm convergence

The use of LF minimizes the likelihood of returning to already explored locations in the problem search space. This enhances the performance of the algorithm, as the solution is not stuck in any local optimum and escapes from it to explore another solution. It has been found that the number of iterations required to execute the program is reduced by a factor of about 10^4 by the use of LF as compared to GD. It is reduced further, by a factor of about 10^8, by the use of ES in combination with LF, thus speeding up the overall convergence of the algorithm. This is explained mathematically in the subsequent sections.

Implementation of random numbers with LF

The implementation of random numbers with LF consists of two phases. The first phase consists of selecting a random direction, and the second phase generates steps that follow the chosen LD. The random direction is drawn from a uniform distribution.

The step length s can be calculated by:

(16) s = u/|v|^{1/β}

where u and v are drawn from normal distributions. This implies that:

(17) u ~ N(0, σ_u²), v ~ N(0, σ_v²), with σ_u = {Γ(1+β) sin(πβ/2) / [Γ((1+β)/2) β 2^{(β−1)/2}]}^{1/β} and σ_v = 1

This distribution follows the LD for s ≥ s₀, where s₀ is the smallest step.
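Equations (16) and (17), known as Mantegna's algorithm, translate directly into code; a self-contained sketch for 1 < β ≤ 2 is:

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-distributed step s = u / |v|^(1/beta), Equations (16)-(17)."""
    sigma_u = (math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
               / (math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0))
               ) ** (1.0 / beta)
    u = random.gauss(0.0, sigma_u)   # u ~ N(0, sigma_u^2)
    v = random.gauss(0.0, 1.0)       # v ~ N(0, 1), i.e. sigma_v = 1
    return u / abs(v) ** (1.0 / beta)

random.seed(1)
steps = [levy_step(1.5) for _ in range(10_000)]
```

A batch of such draws shows the expected behavior: most steps are small, with an occasional very long jump from the heavy tail.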

Search efficiency of LF

LF has been witnessed in the foraging activities of eagles, honeybees, sharks, spider monkeys, and many other birds, animals, and insects. Physical phenomena involving the diffusion of fluorescent molecules, noise characteristics, and cooling behavior also yield LF features under the right set of conditions. Studies have shown that LF-based algorithms yield outcomes superior or comparable to those of their non-LF counterparts in situations where no information about the availability of resources is available beforehand, the targets are difficult to detect, and the distribution of targets is meagre. Further, it has been found that LF has weaker fine-tuning capability than a Gaussian-distributed RW for small to mid-range search regions. Algorithms based on the GD control the search of the problem search space by varying the mean and standard deviation (σ). However, this mechanism of searching the solution space cannot be used with the LD because of its infinite variance.

If we consider a d-dimensional space with characteristic length L and accuracy δ, the number of steps or iterations N_max needed by a RW which obeys the GD is given by the relationship:

(18) N_max ≈ L²/(δ²d)

If, in Equation (18), we substitute the values L = 10, d = 10, and δ = 10⁻⁵, then the value of N_max comes out to be approximately:

(19) N_max ≈ 10¹¹

Now, if we employ LF instead of GD, the value of N_max can be approximated (Yang Citation2014) as:

(20) N_max ≈ (L²/(δ²d))^{1/(3−β)}; for β = 1.5, N_max ≈ 2 × 10⁷

Thus, it can be inferred from Equation (20) that LF reduces the number of iterations by about four orders of magnitude, from O(10¹¹) to O(10⁷). The reduction can be even greater if other values of β are used. ES in combination with LF can reduce the number of iterations still further, from O(10¹¹) to O(10³) (Yang Citation2014).
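The iteration estimates in Equations (18)-(20) are easy to reproduce numerically:

```python
def n_max_gaussian(L, delta, d):
    """Equation (18): iterations needed by a Gaussian random walk."""
    return L ** 2 / (delta ** 2 * d)

def n_max_levy(L, delta, d, beta=1.5):
    """Equation (20): iterations needed when Levy flight is used instead."""
    return (L ** 2 / (delta ** 2 * d)) ** (1.0 / (3.0 - beta))

n_gauss = n_max_gaussian(10.0, 1e-5, 10)   # ~1e11, matching Equation (19)
n_levy = n_max_levy(10.0, 1e-5, 10)        # ~2e7, matching Equation (20)
```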

LF in metaheuristics

LF has been successfully applied to some of the latest optimization algorithms (Fister et al. Citation2013b) in the field of metaheuristics, such as FA, CS, the Bat algorithm, FPA, ES, and many more. These recent algorithms have been efficaciously applied to a wide range of problems in engineering and the sciences. The usage of LF in these algorithms is described as follows.

Flower pollination algorithm

FPA is inspired from the pollination process among flowering plants in which the reproduction between flowers takes place by transfer of pollen from one flower to another (Yang Citation2012). Pseudocode and rules governing the FPA are described in (Chakravarthy and Rao Citation2015; Yang, Karamanoglu, and He Citation2013).

In the global pollination process, flower pollen gametes are carried by pollinators such as insects, and pollen can travel over a long distance because insects can often fly and move in a much longer range obeying Levy walk characteristics (Gautam, Malmathanraj, and Srivastav Citation2015). This can be represented mathematically as:

(21) x_i^{t+1} = x_i^t + γL(x_i^t − g*)

where x_i^t is the pollen i, or the solution vector x_i, at iteration t, and g* is the current best solution found among all solutions at the current generation/iteration. Here, γ is a scaling factor to control the step size and L is the Levy-distributed step. Since insects may move over long distances with various step lengths, LF can mimic this characteristic efficiently. The LD for s > 0 is given by:

(22) L ≈ [βΓ(β) sin(πβ/2)/π] · (1/s^{1+β}), (s ≫ s₀ > 0)

where Γ(β) is the standard gamma function; this distribution is valid for large steps s ≫ s₀ > 0, where s₀ is the smallest step.

Local pollination (Huang et al. Citation2015) can be mathematically represented as:

(23) x_i^{t+1} = x_i^t + ϵ(x_j^t − x_k^t)

where x_j^t and x_k^t are pollen from different flowers of the same plant species. If x_j^t and x_k^t come from the same species or are selected from the same population, this becomes a local RW when ϵ is drawn from a uniform distribution.
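A minimal sketch of the two pollination rules, Equations (21) and (23), assuming Mantegna's algorithm (Equations 16-17) for the Levy draw; the parameter values are illustrative, not prescribed by the FPA paper:

```python
import math
import random

def levy(beta=1.5):
    """One Levy-distributed step via Mantegna's algorithm (Equations 16-17)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0.0, sigma_u) / abs(random.gauss(0.0, 1.0)) ** (1 / beta)

def global_pollination(x_i, g_best, gamma=0.1, beta=1.5):
    """Equation (21): x_i^{t+1} = x_i^t + gamma * L * (x_i^t - g*), per dimension."""
    return [xi + gamma * levy(beta) * (xi - gb) for xi, gb in zip(x_i, g_best)]

def local_pollination(x_i, x_j, x_k):
    """Equation (23): x_i^{t+1} = x_i^t + eps * (x_j^t - x_k^t), eps ~ U(0, 1)."""
    eps = random.random()
    return [xi + eps * (xj - xk) for xi, xj, xk in zip(x_i, x_j, x_k)]
```

A full FPA would alternate between the two rules according to a switch probability, which is omitted here for brevity.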

Cuckoo search

The CS algorithm was developed by Yang and Deb in 2009. In CS, each egg in a nest represents a solution, and a cuckoo egg represents a new solution. The aim is to use the new and potentially better solutions (cuckoos) to replace not-so-good solutions in the nests (Rajabioun Citation2011). For pseudocode, the rules governing CS, and its detailed mathematical analysis, readers may refer to Yang and Deb (Citation2009).

When generating new solutions x_i^{t+1} for, say, cuckoo i, an LF is performed as:

(24) x_i^{t+1} = x_i^t + α ⊕ Levy(β)

where α > 0 is the step size, which is related to the scale of the problem of interest. The product ⊕ means entry-wise multiplication. Levy(β) is the LF, which provides a RW whose random steps are drawn from the LD given by Equation (22). The consecutive jumps/steps of a cuckoo form a RW process which obeys a power-law step-length distribution with a heavy tail. In the real world, if a cuckoo's egg is very similar to the host's eggs, then the cuckoo's egg is less likely to be discovered; thus, the fitness is related to the difference in solutions (Chawla and Duhan Citation2014; Yang and Deb Citation2013). A comparative study of CS on a small set of test functions by Yang and Deb (Citation2010a) against PSO and GA showed that the convergence behavior of CS, in terms of the mean and standard deviation of the number of function evaluations, is superior to both GA and PSO. The use of LF in CS (Rehman, Ali, and Khan Citation2017) suggests that it is possible to find all the optima in the problem search space for multimodal functions.
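The LF update of Equation (24) can be embedded in a bare-bones CS loop. The sketch below is a simplification under stated assumptions: the sphere test function, the search bounds, and all parameter values are illustrative choices, and the Levy draw again uses Mantegna's algorithm (Equations 16-17).

```python
import math
import random

def levy(beta=1.5):
    """Mantegna's algorithm (Equations 16-17) for one Levy-distributed step."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0.0, sigma_u) / abs(random.gauss(0.0, 1.0)) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, p_a=0.25, alpha=0.1, iters=200):
    """Bare-bones CS: Levy-flight moves (Equation 24) plus nest abandonment."""
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Equation (24): entry-wise x_i + alpha (*) Levy(beta)
            new = [x + alpha * levy() for x in nest]
            if f(new) < f(nest):           # keep the better egg
                nests[i] = new
        nests.sort(key=f)                   # abandon the worst p_a fraction
        for i in range(int(p_a * n_nests)):
            nests[-1 - i] = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=f)
    return best

random.seed(3)
sphere = lambda x: sum(v * v for v in x)
best = cuckoo_search(sphere)
```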

Eagle strategy

Foraging by an eagle is a two-stage approach (Yang, Deb, and He Citation2013). Initially, in search of food, an eagle flies freely in a random manner, much like an LF, and executes the Levy walk over the entire territory. Once it is able to detect a target, it shifts to a chase strategy so as to capture the target as efficiently as possible. ES thus mixes exploration with an intensive and efficient local search method. To avoid the solution becoming stuck locally, an exploratory search is executed again, followed by an exploitative search, and the procedure is repeated iteratively (Yang et al. Citation2014). ES uses two different algorithms at different stages and at different times of the iterations. It employs LF for global exploration; this first step provides adequate randomness to search the function landscape as widely and efficiently as possible. In the second step, the algorithm used for the intensive local search should be an efficient local optimizer, such as PSO or FPA or any other metaheuristic, so as to find the local optimum within a minimum number of function evaluations. This stage is fast and efficient compared to the first, global exploration stage. Tests have shown that ES outperforms PSO and other metaheuristics in terms of efficiency and success rate (Yang and Deb Citation2012).

Firefly algorithm

FA, developed by Yang, is based on the idealized flashing behavior of fireflies. The rules governing FA and its pseudocode can be found in the literature (Yang Citation2010c, Citation2010d).

The attractiveness of a firefly is determined by its brightness or light intensity, which in turn is associated with the encoded objective function. For optimization problems (Marichelvam, Prabaharan, and Yang Citation2014; Yang Citation2015), the brightness of a firefly at a particular location x can be chosen as I(x), proportional to the value of the objective function f(x). The brightness of a firefly can be mathematically expressed as:

(25) I = I₀e^{−γr²}

where I₀ is the original light intensity, r is the distance between two fireflies, and γ is the light absorption coefficient.

The distance between any two fireflies i and j, at x_i and x_j respectively, is given by:

(26) r_ij = √[(x_i − x_j)² + (y_i − y_j)²]

The attractiveness β between two fireflies is given by:

(27) β = β₀e^{−γr²}

where β₀ is the attractiveness at r = 0. Theoretically, γ ∈ [0, ∞). It characterizes the variation of the attractiveness, and its value is crucially important in determining the speed of convergence and how the FA behaves.

The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by:

(28) x_i = x_i + β₀e^{−γr_ij²}(x_j − x_i) + α sign(rand − 1/2) ⊕ Levy

where the second term is due to the attraction, while the third term is randomization via LF, with α ∈ [0, 1] being the randomization parameter. The factor sign(rand − 1/2), where rand ∈ [0, 1], essentially provides a random sign or direction, while the random step length is drawn from the LD; Levy is the same as described in Equation (22). Ever since its inception, FA has been applied in almost all fields of engineering, including industrial optimization, image processing, antenna design, business optimization, civil engineering, robotics, the semantic web, meteorology, and wireless sensor networks (Fister et al. Citation2013a).
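Putting Equations (26)-(28) together, one firefly move can be sketched as follows; the Levy draw assumes Mantegna's algorithm (Equations 16-17), and the parameter values are illustrative:

```python
import math
import random

def levy(beta=1.5):
    """Mantegna's algorithm (Equations 16-17) for one Levy-distributed step."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0.0, sigma_u) / abs(random.gauss(0.0, 1.0)) ** (1 / beta)

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """Equation (28): move firefly i toward brighter firefly j plus a Levy kick."""
    r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))   # r_ij^2, Equation (26)
    attract = beta0 * math.exp(-gamma * r2)            # attractiveness, Equation (27)
    new = []
    for a, b in zip(x_i, x_j):
        sign = 1.0 if random.random() > 0.5 else -1.0  # sign(rand - 1/2)
        new.append(a + attract * (b - a) + alpha * sign * levy())
    return new
```

Setting alpha = 0 isolates the deterministic attraction term, which is convenient for checking the implementation.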

Bat algorithm

The Bat algorithm (BA) is based upon the echolocation behavior of bats (Yang Citation2010a). Echolocation works as a type of sonar. Bats, mainly microbats, emit a loud and short pulse of sound, wait for it to strike an object, and, after a fraction of time, the echo returns to their ears. Thus, bats can compute how far they are from an object. The echolocation capabilities of microbats can be associated with the objective function to be optimized, and optimization algorithms can be formulated that mimic this bat behavior in finding the optimal solution (Chawla and Duhan Citation2015).

To understand the pseudocode of BA along with its defined set of rules as to how the BA works, readers may refer to Yang (Citation2011).

New solutions are generated by adjusting the frequencies, loudness, and pulse emission rates of the bats, while whether a proposed solution is accepted depends on the quality of the solutions, controlled or characterized by the loudness and pulse rate, which are in turn related to the closeness, or fitness, of the locations/solutions to the global optimal solution (Pandya, Dabhi, and Joshi Citation2015).

For BA to be used in simulation analysis, rules are defined as to how the positions x_i and velocities v_i are updated in the d-dimensional search space. The new solutions x_i^t and velocities v_i^t at time step t are given by:

(29) v_i^t = v_i^{t−1} + (x_i^t − x*)f_i
(30) x_i^t = x_i^{t−1} + v_i^t

where x* is the current global best location (solution), which is located after comparing all the solutions among all n bats at each iteration t. The frequency f_i is used to adjust the velocity change and is given by:

(31) f_i = f_min + (f_max − f_min)β

where f_min and f_max are the minimum and maximum frequencies of the pulses emitted by the bats, and β ∈ [0, 1] is a random vector drawn from a uniform distribution. Once a solution is selected among the current fittest solutions, a new solution for each bat is generated locally using a RW given by:

(32) x_new = x_old + ϵA^t

where ϵ is a random number in the range [−1, 1] and A^t = ⟨A_i^t⟩ is the average loudness of all the bats at this time step.

Once a bat has found its prey, its loudness decreases while its rate of pulse emission increases. The bat moves toward the optimal solution according to two design equations given by:

(33) A_i^{t+1} = αA_i^t
(34) r_i^{t+1} = r_i^0[1 − exp(−γt)]

where α and γ are constants, with 0 < α < 1 and γ > 0. A_i^{t+1} and A_i^t are the loudness of the bat at time steps t+1 and t, respectively; r_i^{t+1} is the rate of pulse emission at time step t+1 and r_i^0 is the initial rate of pulse emission. As the time step t approaches infinity, the loudness of the bat goes to zero, while the rate of pulse emission approaches the initial rate of pulse emission. This can be expressed as:

(35) A_i^t → 0, r_i^t → r_i^0, as t → ∞
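Equations (29)-(34) can be collected into a small sketch; the frequency bounds and the constants α and γ below are illustrative values:

```python
import math
import random

def bat_update(x, v, x_best, f_min=0.0, f_max=2.0):
    """Equations (29)-(31): frequency-tuned velocity and position update."""
    f = f_min + (f_max - f_min) * random.random()                     # Equation (31)
    v_new = [vi + (xi - xb) * f for vi, xi, xb in zip(v, x, x_best)]  # Equation (29)
    x_new = [xi + vi for xi, vi in zip(x, v_new)]                     # Equation (30)
    return x_new, v_new

def loudness_and_pulse(A0, r0, alpha=0.9, gamma=0.9, t=50):
    """Equations (33)-(34): iterating (33) gives A_i^t = alpha^t * A_i^0,
    and r_i^t = r_i^0 * [1 - exp(-gamma * t)]."""
    return A0 * alpha ** t, r0 * (1.0 - math.exp(-gamma * t))
```

For large t the returned pair approaches (0, r0), in agreement with the limits stated in Equation (35).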

LF was incorporated into BA by employing a chaotic sequence (Lin et al. Citation2012), whereby Equation (32) was replaced by a new equation given by:

(36) x_new = x_old + Cs ⊕ Levy

where Cs is the chaotic sequence, generated by the logistic map:

(37) Cs^{t+1} = 4Cs^t(1 − Cs^t), 0 ≤ Cs^0 ≤ 1
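Equation (37) is the logistic map at its fully chaotic parameter value, so generating the sequence is a one-line recurrence (the initial value 0.7 is an arbitrary choice in [0, 1]):

```python
def chaotic_sequence(cs0=0.7, n=100):
    """Equation (37): Cs^{t+1} = 4 Cs^t (1 - Cs^t), with 0 <= Cs^0 <= 1."""
    seq, cs = [], cs0
    for _ in range(n):
        cs = 4.0 * cs * (1.0 - cs)
        seq.append(cs)
    return seq

seq = chaotic_sequence()
```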

The use of LF in BA was also successfully demonstrated by Xie, Zhou, and Chen (Citation2013). Here, the bats perform the LF before the position update, given by:

(38) x_i^t = x_i^{t−1} + μ sign(rand − 0.5) ⊕ Levy

where μ is a random parameter drawn from a uniform distribution, and sign, rand, and Levy are the same as used in Equation (28). BA is a very promising algorithm and has been proven to have good convergence properties on different benchmark functions (Jamil and Yang Citation2013).

Levy particle swarm

The use of LF in PSO (Kennedy and Eberhart Citation1995) and modified PSO (Knypinski, Nowak, and Jedryczka Citation2015) was explored by Richer and Blackwell (Citation2006), who replaced the motion of the particles with LF in order to solve optimization problems more effectively. To analyze the effectiveness of Levy-based PSO, a series of experiments was conducted on a problem set of nine different benchmark functions. It was found that Levy-based PSO outperformed standard PSO in terms of faster convergence. The better performance of Levy PSO was due to the fat-tail characteristics of the LD.

Fast evolutionary programming

Lee and Yao (Citation2004) proposed fast evolutionary programming (FEP) by replacing Gaussian-based mutation with a Cauchy distribution, which is a special case of the LD. FEP converged to better solutions than conventional evolutionary programming for many of the multivariate functions investigated in the study. The proposed algorithm (Lee and Yao Citation2004) was tested and compared on 14 different benchmark functions: three unimodal functions, six multimodal functions with many local minima, and five multimodal functions with relatively few local minima. Through the series of experiments carried out, it was observed that both the nonadaptive and adaptive Levy mutations were more robust than evolutionary programming built on Gaussian and Cauchy mutations.

LF-based artificial bee colony optimization algorithm

The LF-based artificial bee colony (ABC) optimization algorithm (Sharma, Bansal, Arya, and Yang Citation2016), designated LFABC, incorporates local and global search capability simultaneously by automatically tuning the step size through regulation of the LF parameters. In this algorithm, new solutions are generated in the vicinity of the best solutions, depending on a newly introduced parameter called the perturbation rate, which was missing from the original ABC algorithm. Simulation results obtained by the authors revealed that LFABC outperformed other algorithms, in terms of accuracy, reliability, and efficiency, in tests performed on five real-world optimization problems.

LF-based ant colony optimization

The performance of the original ant colony optimization (ACO) algorithm was improved by using LF to perform the local search. This new variant (Wang and Liang Citation2016) of ACO based on LF was given the name Levy flight ant colony optimization (LFACO). The performance of LFACO was compared with ACO, GA, and PSO and was found to give superior results in tests performed on 20 different benchmark functions. The improvement in results was due to the reduced search time and reduced search stagnation afforded by the use of LF.

LF-based gray wolf optimizer with back propagation

Amirsadri, Mousavirad, and Komleh (Citation2017) introduced an LF-based gray wolf optimizer (GWO) blended with back propagation (BP) for training neural networks. In the first step, the GWO algorithm was improved by utilizing LF capabilities, which increases exploration; in the next step, the improved GWO algorithm in combination with BP (which increases the exploitation capability) was applied to train the neural network. Each wolf in this new approach represents the set of weights and biases of the neural network. The performance of this new algorithm was experimentally evaluated by comparing it with well-known MOA on 12 function-approximation and classification data sets, and it was found to give superior results.

LF animal migration optimization algorithm

The LF animal migration optimization (LAMO) algorithm (Bhamb and Kumar Citation2016) emulates the foraging behavior of animals and how they gain access to a secure habitat. The algorithm simulates the intelligent behavior of animals: how they move from the present habitat to a new habitat, and how some animals leave the group while a few others join it. There are two phases involved in this procedure: one is called migration, while the other is population update. The population update step of the basic animal migration optimization (AMO) algorithm was improved using LF, hence the abbreviation LAMO. The evaluation of LAMO was carried out on 21 benchmark functions, and it showed superior results over AMO in terms of improved reliability, robustness, and efficiency.

LF trajectory-based whale optimization algorithm

The LF trajectory-based whale optimization algorithm (LWOA), developed by Ling, Zhou, and Luo (Citation2017), enhances the performance of the original whale optimization algorithm (WOA) by making the algorithm more robust, yielding faster results, and avoiding premature convergence. LWOA is characterized by quick convergence and high precision owing to the use of LF. The LF trajectory increases population diversity to guard against premature convergence and raises the likelihood of jumping out of local optima. The performance of LWOA was evaluated on 23 benchmark functions and the results were compared with other well-known MOA from the literature; the comparison of simulation results showed superior performance of LWOA over existing techniques.
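The general shape of an LF trajectory move, as used in LWOA-style algorithms to diversify the population, can be sketched as follows. This is an illustrative approximation, not the exact LWOA operator: the function name, `alpha`, and the power-law inverse-transform sampler standing in for a true Levy-stable draw are all assumptions.

```python
import random

def levy_trajectory_update(x, x_best, alpha=0.01, beta=1.5, rng=random):
    # Illustrative Levy-trajectory move of the form
    #   x' = x + alpha * L * (x - x_best),
    # where L is a heavy-tailed random step drawn by a simple
    # power-law inverse-transform sampler: P(|L| > s) ~ s**(-beta).
    updated = []
    for xi, bi in zip(x, x_best):
        u = max(rng.random(), 1e-12)      # guard against u == 0
        step = u ** (-1.0 / beta)
        if rng.random() < 0.5:            # symmetric direction
            step = -step
        updated.append(xi + alpha * step * (xi - bi))
    return updated
```

Scaling the step by the distance to the current best keeps moves small near convergence while the heavy tail still permits the occasional large jump out of a local optimum.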

Other algorithms based on LF

Recently, several other MOA incorporating LF to improve their performance characteristics have been reported in the literature. Some of these include the LF moth flame optimization (LMFO) algorithm (Li, Zhou, Zhang, and Song Citation2016), inspired by the direction-finding technique of moths; the patch Levy-based bees algorithm (PLBA) (Hussein, Sahran, and Abdullah Citation2015), motivated by the food-searching behavior of a swarm of honeybees; LF in binary optimization (Klimt, Kukal, and Mojzes Citation2013), introducing binary mutations inspired by LF; hybrid gravitational search with Levy flight (HGSLF) (Ali Citation2015), emulating the natural phenomenon of the law of gravity and mass interactions; a differential crow search algorithm (Xuejing et al. Citation2018) based on LF for solving the discount {0–1} knapsack problem; and an improved crow search algorithm (Díaz et al. Citation2018) using LF, applied to energy problems.

Conclusions

An LF is a RW in which the step lengths follow a Levy probability distribution, with the directions of the steps being isotropic and random. In recent years, the use of LF has emerged as a substitute for GD to achieve randomization in metaheuristics optimization algorithms (MOA). Random walkers based on the LD spend a greater amount of CPU time exploring a wide area of the search space and less time exploiting a small local neighborhood than random walkers based on the GD. LF appears promising for solving a vast class of multimodal problems without the need to apply function stretching or niching procedures, which are commonly employed in MOA to help an algorithm find multiple solutions. The probability density functions of Levy stable laws decay asymptotically in power-law form, with diverging variance due to their heavy tails; they belong to the special class of symmetric β-stable LD and thus appear naturally in the description of many fluctuation processes with highly scattered statistics typified by bursts or large outliers. Levy stable laws also provide a statistical explanation for a widespread class of processes in the physical, chemical, biological, and even financial domains. In the years to come, LF will certainly be used by researchers across the disciplines of science and engineering to explore search spaces more efficiently in order to solve real-world global optimization problems.
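The heavy-tail behavior underlying these claims is easy to observe numerically. The sketch below compares how often a heavy-tailed power-law sampler (a simple stand-in for a true Levy-stable draw) and a Gaussian sampler produce very long steps; the sampler, threshold, and sample size are arbitrary choices for illustration.

```python
import random

def count_long_steps(sampler, n, threshold, rng):
    # Count how many of n sampled step lengths exceed the threshold.
    return sum(1 for _ in range(n) if sampler(rng) > threshold)

def levy_like(rng, beta=1.5):
    # Power-law step: P(step > s) = s**(-beta) for s >= 1 (heavy tail).
    return max(rng.random(), 1e-12) ** (-1.0 / beta)

def gaussian(rng):
    return abs(rng.gauss(0, 1))

rng = random.Random(0)
levy_long = count_long_steps(levy_like, 10_000, 50.0, rng)
gauss_long = count_long_steps(gaussian, 10_000, 50.0, rng)
# The heavy-tailed sampler takes a handful of very long jumps; a
# Gaussian step of that size is astronomically unlikely, which is
# why LF-based walkers cover the search space so much more widely.
```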

References

  • Ali, A. F. 2015. A hybrid gravitational search with levy flight for global numerical optimization. Information Sciences Letters, 4 (2): 71–83.
  • Amirsadri, S., S. J. Mousavirad, and H. E. Komleh. 2017. A Levy flight-based grey wolf optimizer combined with back propagation algorithm for neural network training. Neural Computing and Applications, 1–14.
  • Bhamb, P. and S. Kumar 2016. Levy flight based animal migration optimization algorithm. International Conference on Recent Advances and Innovations in Engineering (ICRAIE), Jaipur, 1–5.
  • Blum, C., and A. Roli. 2003. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. Journal of ACM Computing Surveys 35: 268–308. doi:10.1145/937503.
  • Chakravarthy, V. V. S. S. S., and M. P. Rao. 2015. On the convergence characteristics of flower pollination algorithm for circular array synthesis. Proceedings of the 2nd IEEE international conference on electronics and communication systems, Coimbatore, India, 485–89.
  • Chawla, M., and M. Duhan. 2014. Applications of recent metaheuristics optimization algorithms in biomedical engineering – A review. International Journal of Biomedical Engineering and Technology 16 (3): 268–78. doi:10.1504/IJBET.2014.065807.
  • Chawla, M., and M. Duhan. 2015. Bat algorithm – A survey of the state of the art. Applied Artificial Intelligence 29 (6): 617–34. doi:10.1080/08839514.2015.1038434.
  • Das, S., A. Biswas, S. Dasgupta, and A. Abraham. 2009. Bacterial foraging optimization algorithm: Theoretical, foundations, analysis and applications. In Foundations of computational intelligence, ed. A. Abraham, A. E. Hassanien, P. Siarry, and A. Engelbrecht, vol. 3, 23–55. Studies in computational intelligence 203. Springer-Verlag Berlin Heidelberg, Germany.
  • Díaz, P., M. P. Cisneros, E. Cuevas, O. Avalos, J. Gálvez, S. Hinojosa, and D. Zaldivar. 2018. An improved crow search algorithm applied to energy problems. Energies 11, 1–22.
  • Fister, I., I. Fister Jr., X. S. Yang, and J. Brest. 2013a. A comprehensive review of firefly algorithms. Swarm and Evolutionary Computation 13 (1): 34–46. doi:10.1016/j.swevo.2013.06.001.
  • Fister, I., Jr., X. S. Yang, I. Fister, J. Brest, and D. Fister. 2013b. A brief review of nature-inspired algorithms for optimization. Elektrotehni Ski Vestnik 80 (3): 1–7.
  • Gautam, U., R. Malmathanraj, and C. Srivastav. 2015. Simulation for path planning of autonomous underwater vehicle using flower pollination algorithm, genetic algorithm and Q-learning. Proceedings of the IEEE international conference on cognitive computing and information processing, Noida, India, 1–5.
  • Goldberg, D. E. 1989. Genetic algorithms in search, optimization and machine learning. Reading: Addison-Wesley.
  • He, J., T. Chen, and X. Yao. 2015. On the easiest and hardest fitness functions. IEEE Transactions on Evolutionary Computation 19 (2): 295–305. doi:10.1109/TEVC.2014.2318025.
  • Ho, Y. C., and D. L. Pepyne. 2001. Simple explanation of the no free lunch theorem of optimization. Proceedings of the 40th IEEE conference on decision and control, Orlando, FL, 4409–14.
  • Hosny, M. I. 2010. Investigating heuristic and meta-heuristic algorithms for solving pickup and delivery problems. PhD Thesis, Cardiff University School of Computer Science and Informatics.
  • Huang, S. J., P. H. Gu, W. F. Su, X. Z. Liu, and T. U. Tai. 2015. Application of flower pollination algorithm for placement of distribution transformers in a low-voltage grid. Proceedings of the IEEE international conference on industrial technology, Seville, 1280–85.
  • Hussein, W. A., S. Sahran, and S. N. H. S. Abdullah. 2015. An improved bees algorithm for real parameter optimization. International Journal of Advanced Computer Science and Applications, 6 (10): 23–39.
  • Jamil, M., and X. S. Yang. 2013. A literature survey of benchmark functions for global optimization problems. International Journal of Mathematical Modelling and Numerical Optimisation 4 (2): 150–94. doi:10.1504/IJMMNO.2013.055204.
  • Karaboga, D., and B. Basturk. 2007. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony algorithm. Journal of Global Optimization 39 (3): 459–71. doi:10.1007/s10898-007-9149-x.
  • Karafotias, G., M. Hoogendoorn, and A. E. Eiben. 2015. Parameter control in evolutionary algorithms: Trends and challenges. IEEE Transactions on Evolutionary Computation 19 (2): 167–87. doi:10.1109/TEVC.2014.2308294.
  • Kennedy, J., and R. Eberhart. 1995. Particle swarm optimization. Proceedings of the IEEE international conference on neural networks, Washington, DC, IEEE Computer Society, 1942–48.
  • Klimt M., J. Kukal, and M. Mojzes. 2013. Levy flights in binary optimization. Archives of Control Sciences 23 (4): 447–454.
  • Knypinski, L., L. Nowak, and C. Jedryczka. 2015. Optimization of the rotor geometry of the line-start permanent magnet synchronous motor by the use of particle swarm optimization. Compel – The International Journal for Computation and Mathematics in Electrical and Electronic Engineering 34(3): 882–92. doi:10.1108/COMPEL-10-2014-0276.
  • Korouzhdeh, T., H. E. Naddaf, and M. G. Nik 2017. An improved ant colony model for cost optimization of composite beams. Applied Artificial Intelligence 31 (1): 44–63.
  • Koziel, S., L. Leiffson, and X. S. Yang. 2014. Solving computationally expensive engineering problems – Methods and applications. Switzerland: Springer Proceedings in Mathematics and Statistics 97, Springer International Publishing.
  • Labed K., H. Fizazi, H. Mahi, and I. M. Galvan 2018. A comparative study of classical clustering method and cuckoo search approach for satellite image clustering: application to water body extraction. Applied Artificial Intelligence 32 (1): 96–118.
  • Lee, C. Y., and X. Yao. 2004. Evolutionary programming with mutations based on Levy probability distribution. IEEE Transactions on Evolutionary Computation 8: 1–13. doi:10.1109/TEVC.2003.816583.
  • Li Z., Y. Zhou, S. Zhang, and J. Song 2016. Lévy-flight moth-flame algorithm for function optimization and engineering design problems. Mathematical Problems in Engineering, Hindawi Publishing Corporation, Article ID 1423930, 1–23.
  • Lin, J. H., C. W. Chou, C. H. Yang, and H. L. Tsai. 2012. A chaotic Levy flight Bat algorithm for parameter estimation in nonlinear dynamic biological systems. Journal of Computer and Information Technology 2 (2): 56–63.
  • Ling Y., Y. Zhou, and Q. Luo 2017. Lévy flight trajectory-based whale optimization algorithm for global optimization. IEEE Access 5, 6168–6186.
  • Marichelvam, M. K., T. Prabaharan, and X. S. Yang. 2014. A discrete firefly algorithm for the multi-objective hybrid flowshop scheduling problems. IEEE Transactions on Evolutionary Computation 18 (2): 301–305. doi:10.1109/TEVC.2013.2240304.
  • Osman, I. H., and G. Laporte. 1996. Metaheuristics: A bibliography. Annals of Operations Research 63: 513–623.
  • Pandya, K. S., D. A. Dabhi, and S. K. Joshi. 2015. Comparative study of bat and flower pollination optimization algorithms in highly stressed large power system. Proceedings of the IEEE conference on power systems, Clemson University, Clemson, SC, USA, 1–5.
  • Radosavljević, J. 2016. A solution to the combined economic and emission dispatch using hybrid PSOGSA algorithm. Applied Artificial Intelligence 30 (5): 445–474.
  • Rajabioun, R. 2011. Cuckoo optimization algorithm. Applied Soft Computing Journal 11 (8): 5508–18. doi:10.1016/j.asoc.2011.05.008.
  • Rao, S. S. 2009. Engineering optimization theory and practice. Hoboken, NJ: John Wiley and Sons Inc.
  • Rehman, S., S. S. Ali and S. A. Khan 2016. Wind farm layout design using cuckoo search algorithms. Applied Artificial Intelligence 30 (10): 899–922.
  • Ren, Z., D. Zhang, and C. S. Koh. 2013. Multi objective optimization approach to reliability-based robust global optimization of electromagnetic device. Compel – The International Journal for Computation and Mathematics in Electrical and Electronic Engineering 33 (1/2): 191–200. doi:10.1108/COMPEL-11-2012-0341.
  • Rezende M. D., B. S. L. P. D. Lima and S. Guimarães 2018. A greedy ant colony system for defensive resource assignment problems. Applied Artificial Intelligence 32 (2): 138–152.
  • Richer, T. J., and T. M. Blackwell. 2006. The Levy particle swarm. Proceedings of the IEEE congress on evolutionary computation, Vancouver, Canada, 808–15.
  • Sharma, H., J. C. Bansal, K. V. Arya and X.S. Yang 2016. Lévy flight artificial bee colony algorithm. International Journal of Systems Science, 47:11, 2652–2670.
  • Simon, D. 2008. Biogeography based optimization. IEEE Transactions on Evolutionary Computation 12 (6): 702–13. doi:10.1109/TEVC.2008.919004.
  • Slowik, A., H. Kwasnicka, 2018. Nature inspired methods and their industry applications—Swarm intelligence algorithms. IEEE Transactions on Industrial Informatics 14 (3): 1004–1015.
  • Sun, J., J. M. Garibaldi, and C. Hodgman. 2012. Parameter estimation using metaheuristics in systems biology: A comprehensive review. IEEE/ACM Transactions on Computational Biology and Bioinformatics 9 (1): 185–202. doi:10.1109/TCBB.2011.63.
  • Sutcliffe, A. W. C., and R. S. Neville. 2014. Applying evolutionary computing to complex systems design. IEEE Transactions on Systems, Man and Cybernetics – Part A: Systems and Humans 37 (5):770–79. doi:10.1109/TSMCA.2007.902653.
  • Talbi, E. G. 2009. Metaheuristics from design to implementation. Hoboken, NJ: John Wiley and Sons, Inc.
  • Tayarani, M. H. N., and A. P. Bennett. 2014. On the landscape of combinatorial optimization problems. IEEE Transactions on Evolutionary Computation 18 (3): 420–34. doi:10.1109/TEVC.2013.2281502.
  • Vob, S., S. Martello, I. H. Osman, and C. Roucairol. 1999. Metaheuristics – Advances and trends in local search paradigms for optimization. Dordrecht, The Netherlands: Kluwer Academic Publishers.
  • Wang, H. and C. Liang 2016. An improved ant colony algorithm for continuous optimization based on levy flight. Chemical Engineering Transactions 51, 487–492.
  • Wolpert, D. N., and W. G. Macready. 1997. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1 (1): 67–82. doi:10.1109/4235.585893.
  • Xie, J., Y. Zhou, and H. Chen. 2013. A novel bat algorithm based on differential operator and Levy flights trajectory. Computational Intelligence and Neuroscience. Article ID 453812. www.hindawi.com/journals/cin/aip/453812.pdf.
  • Xuejing, L., H. Yichao, L. Fengjia, W. Congcong, and C. Xiufeng. 2018. Differential crow search algorithm based on lévy flight for solving discount {0-1} knapsack problem. Journal of Computer Applications 38 (2):433–442.
  • Yang, X. S. 2009. Firefly algorithms for multimodal optimization. Stochastic Algorithms: Foundations and Applications, Lecture Notes in Computer Sciences 5792: 169–78.
  • Yang, X. S. 2010a. Nature inspired metaheuristic algorithms, vol. 2/e. United Kingdom: Luniver Press.
  • Yang, X. S. 2010b. Engineering optimization: An introduction with metaheuristic applications. Hoboken, NJ: John Wiley and Sons, Inc.
  • Yang, X. S. 2010c. Firefly algorithm, Levy flights and global optimization. In Research and development in intelligent systems XXVI, ed. M. Bramer, R. Ellis, and M. Petridis, 209–18. London: Springer.
  • Yang, X. S. 2010d. Firefly algorithm, stochastic test functions and design optimization. International Journal of Bio Inspired Computation 2 (2): 78–84. doi:10.1504/IJBIC.2010.032124.
  • Yang, X. S. 2010e. A new metaheuristic Bat-inspired algorithm. In J. R. Gonzalez et al. (Eds.), Nature inspired cooperative strategies for optimization, vol. 284, 65–74. Berlin: Springer.
  • Yang, X. S. 2011. Bat Algorithm for multiobjective optimization. International Journal of Bio Inspired Computation 3 (5): 267–74. doi:10.1504/IJBIC.2011.042259.
  • Yang, X. S. 2012. Flower pollination algorithm for global optimization. Unconventional Computation and Natural Computation 7445: 240–49.
  • Yang, X. S. 2013. Artificial intelligence, evolutionary computing and metaheuristics. In Studies in computational intelligence, vol. 427. Springer-Verlag Berlin Heidelberg, Germany.
  • Yang, X. S. 2014. Nature inspired optimization algorithms. London: Elsevier.
  • Yang, X. S. 2015. Recent advances in swarm intelligence and evolutionary computation, Studies in computational intelligence vol. 585. Switzerland: Springer International Publishing.
  • Yuen, S. Y., C. K. Chow, X. Zhang and, Y. Lou. 2016. Which algorithm should I choose: An evolutionary algorithm portfolio approach. Applied Soft Computing 40: 654–673
  • Yang, X. S., and S. Deb. 2009. Cuckoo search via Levy flights. Proceedings of the world congress on nature and biologically inspired computing, India, USA, IEEE Publications, 210–14.
  • Yang, X. S., and S. Deb. 2010a. Engineering optimization by cuckoo search. International Journal of Mathematical Modeling and Numerical Optimization 1 (4): 330–43. doi:10.1504/IJMMNO.2010.035430.
  • Yang, X. S., and S. Deb. 2010b. Eagle strategy using Levy walk and firefly algorithms for stochastic optimization. In Nature inspired cooperative strategies for optimization studies in computational intelligence, ed. J. R. Gonzalez et al., vol. 284, 101–11. Berlin: Springer.
  • Yang, X. S., and S. Deb. 2012. Two stage eagle strategy with differential evolution. International Journal of Bio-Inspired Computation 4 (1),1–5. doi:10.1504/IJBIC.2012.044932.
  • Yang, X. S., and S. Deb. 2013. Multiobjective cuckoo search for design optimization. Computers and Operations Research 40:1616–24. doi:10.1016/j.cor.2011.09.026.
  • Yang, X. S., S. Deb, and X. He. 2013a. Eagle strategy with flower algorithm. Proceedings of the IEEE international conference on advances in computer, communications and informatics, Mysore, India, 1213–17.
  • Yang, X. S., M. Karamanoglu, and X. He. 2013b. Multiobjective flower algorithm for optimization. In International Conference on Computational Science, Procedia Computer Science 18: 861–68. doi:10.1016/j.procs.2013.05.251.
  • Yang, X. S., M. Karamanoglu, T. O. Ting, and Y. X. Zhao. 2014. Applications and analysis of bio-inspired eagle strategy for engineering optimization. Neural Computing and Applications 25 (2): 411–20. doi:10.1007/s00521-013-1508-6.
