
A Memetic Algorithm Based on Breakout Local Search for the Generalized Traveling Salesman Problem


ABSTRACT

The Traveling Salesman Problem (TSP) is one of the most popular Combinatorial Optimization Problems. It is widely studied both for the large variety of applications it can model and for the difficulty of finding optimal solutions. One of the variants of the TSP is the Generalized TSP (GTSP), which contains the TSP as a special case and is therefore harder to solve. We propose in this paper a new memetic algorithm based on the well-known Breakout Local Search (BLS) metaheuristic to provide good solutions for GTSP instances. Our approach is competitive with other recent memetic algorithms proposed for the GTSP and at the same time brings improvements to BLS that reduce its runtime.

Introduction

The Generalized Traveling Salesman Problem (GTSP) is an extension of the well-known Traveling Salesman Problem (TSP); both are Combinatorial Optimization Problems (COP). The problem has numerous applications such as postal routing, airport selection and the sequencing of computer files, among many others (Laporte, Asef-Vaziri, and Sriskandarajah Citation1996); the work done by Ilhan (Citation2017) is a concrete example of a real-life application of such problems. Several approaches and studies have been considered since the GTSP was introduced by Henry-Labordere (Citation1969) and Srivastava et al. (Citation1969). Fischetti, Salazar Gonzalez, and Toth (Citation1997) introduced branch-and-cut, an exact algorithm. Such algorithms are important but very greedy in runtime. Lin-Kernighan, one of the most successful heuristics for the TSP, was adapted to the GTSP by Karapetyan and Gutin (Citation2011). The only metaheuristics that have rivaled Lin-Kernighan on the TSP are those combining genetic and local search algorithms, called memetic algorithms (Hart, Krasnogor, and Smith Citation2004). The most recent and best-performing studies on the GTSP focus on memetic algorithms with different local search and genetic components.

We introduce in this paper a new memetic algorithm based on Breakout Local Search (BLS) as the main method for running local searches. Iterated Local Search (ILS) (Lourenço, Martin, and Stutzle Citation2001) is known to lack effectiveness in escaping some local optima; BLS follows the basic scheme of this framework and improves it by combining features of other efficient methods during the perturbation process. BLS finds a local optimum with a local search procedure, then runs the most appropriate perturbations in order to jump from one neighborhood to another in the search space. These perturbations are performed with a variable degree of diversification, by dynamically determining how many jumps to perform and making an adaptive selection among several types of dedicated moves. The BLS perturbation strategy is based on both the history and the state of the search.

BLS was developed by Benlic and Hao in 2012. Since then, it has been used for solving several COPs, such as the Quadratic Assignment Problem (Benlic and Hao Citation2013b), Maximum Clique problems (Benlic and Hao Citation2013a), the Max-Cut problem (Benlic and Hao Citation2013c) and the Minimum Sum Coloring Problem (Benlic and Hao Citation2012), with very good results. BLS was also used in studies by other researchers (Fu and Hao Citation2014; Ghandi and Masehian Citation2015). This paper provides a memetic algorithm based on the BLS metaheuristic for the GTSP, whose performance is evaluated by solving 39 benchmark instances derived from the TSPLIB (Reinelt Citation1991).

The next section defines the GTSP and presents some previous works. Section 3 introduces our memetic approach, devoting an important part to the BLS metaheuristic, which is the main contribution. Experimental tests and comparisons are exposed in Section 4, and a conclusion is given in Section 5.

The Generalized Traveling Salesman Problem

Problem Formulation

The GTSP can be modeled on a weighted complete graph G = (V, E, c), where V = {v1, v2, …, vn} is the set of vertices/cities, E is the set of all edges/paths (vi, vj) for i, j = 1, 2, …, n, and c is the cost function E → N+ (or R+) giving the weight of each edge, i.e. the distance between two nodes. While the TSP consists of visiting all the given nodes V, the GTSP nodes are distributed over a set of clusters C = (C1, C2, …, Cm) such that C1 ∪ C2 ∪ … ∪ Cm = V and the intersection of any pair of clusters is empty.

A GTSP solution can be formulated as a subgraph S = (N, D) ⊆ G. N = {n1, n2, …, nm} is the set of selected nodes, where each cluster Ck contributes exactly one node nt (k, t = 1, 2, …, m). D ⊆ E is the subset of edges connecting the nodes of N into a cyclic tour whose total distance is given by formula (1) below:

(1) W_T = c(p_m, p_1) + Σ_{i=1}^{m−1} c(p_i, p_{i+1})
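Formula (1) translates directly into code. The sketch below assumes a tour given as a list of selected nodes (one per cluster) and a cost function c(u, v); names are ours, for illustration only.

```python
def gtsp_tour_cost(tour, cost):
    """Total distance W_T of a cyclic tour (formula (1)): the closing
    edge c(p_m, p_1) plus the sum of consecutive edges c(p_i, p_{i+1})."""
    m = len(tour)
    return cost(tour[-1], tour[0]) + sum(
        cost(tour[i], tour[i + 1]) for i in range(m - 1))

# Toy symmetric instance with one node already chosen per cluster.
dist = {(0, 1): 2, (1, 2): 3, (2, 0): 4}
c = lambda u, v: dist.get((u, v), dist.get((v, u)))
print(gtsp_tour_cost([0, 1, 2], c))  # 2 + 3 + 4 = 9
```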

Previous Works

The first solutions provided for the GTSP were methods based on dynamic programming (Henry-Labordere Citation1969; Srivastava et al. Citation1969). Laporte, Mercure, and Nobert (Citation1987) proposed an exact method using integer programming (branch-and-bound) techniques. Later, Fischetti, Salazar Gonzalez, and Toth (Citation1997) built a solution based on branch-and-cut which provides optimal solutions for instances with up to 442 cities and 89 clusters.

Several researchers (Ben-Arieh et al. Citation2003; Laporte and Semet Citation1999; Noon and Bean Citation1993) suggested transformations of a GTSP instance into a TSP one, since efficient heuristics and exact algorithms exist for the TSP. This idea is quite limited because the obtained TSP instance must be solved exactly; otherwise, the result may correspond to an infeasible GTSP solution.

A variety of tour construction heuristics inspired by the TSP have been adapted to the GTSP: Noon and Bean (Citation1991) proposed a nearest-neighbor heuristic, while Fischetti, Salazar Gonzalez, and Toth (Citation1997) presented other adaptations such as farthest, nearest and cheapest insertion.

Several local search heuristics were studied in (Karapetyan and Gutin Citation2011, Citation2012; Pourhassan and Neumann Citation2015; Smith and Imeson Citation2017), among others. Many researchers also worked on Genetic Algorithms (GA) (Bontoux, Artigues, and Feillet Citation2010; Silberholz and Golden Citation2007; Snyder and Daskin Citation2006), while others built hybrids of local search heuristics and genetic algorithms, yielding memetic algorithms.

Memetic algorithms (Hart, Krasnogor, and Smith Citation2004) are known for their competitiveness since they hybridize local searches, which favor exploitation, with genetic algorithms, which are good at diversification. It is known for the TSP that the only metaheuristics able to compete with the famous Lin-Kernighan are memetic algorithms. This fact has led many researchers to focus on memetic algorithms for the GTSP (Bontoux, Artigues, and Feillet Citation2010; Gutin and Karapetyan Citation2010; Gutin, Karapetyan, and Krasnogor Citation2007) and motivates us to work on them.

Memetic Breakout Local Search

The BLS Framework

BLS is a metaheuristic based on the Iterated Local Search (ILS) framework: it explores the search space by moving from one neighborhood to a new one via perturbations once the algorithm meets a local optimum. BLS provides an interesting strategy in which the moves and the number of jumps of each perturbation are chosen dynamically at each iteration.

From an initial permutation π0 (usually produced by a construction method), BLS lands in a local optimum via a local search procedure. It browses the entire neighborhood to find the best improving permutation to replace the current local optimum. The best move found during an iteration is saved in a history matrix H, which is consulted later in the algorithm.

In addition to the history matrix, BLS perturbations are determined by two other parameters: (i) L decides how many jumps to perform in each perturbation. It is incremented if the previous perturbation did not help to escape the current neighborhood, and otherwise reset to a minimal value L0. (ii) ω counts the number of consecutive non-improving best solutions found so far. It is incremented by one if the best permutation is not updated, and otherwise reset to zero. ω has a maximal value T; when T is reached, BLS runs a strong perturbation consisting of a large number of jumps Lmax, and ω is reset to zero. Algorithm 1 below describes the overall BLS framework.

Algorithm 1: Breakout Local Search Framework

Require: π: initial solution, Descmax: maximal descents to perform, L0: initial number of jumps, T: maximal consecutive visited local optima without any improvement, Lmax: number of jumps when reaching T

Ensure: A solution πbest

1.  c ← objectiveValue(π)

2.  πbest ← π   /* πbest saves best solution found */

3.  cbest ← c   /* cbest saves best objective value */

4.  ω ← 0   /* ω gives the number of consecutive non-improving best solution found so far */

5.  L ← L0   /* L saves number of jumps to perform, set to its minimal value L0 */

6.  cp ← c   /* cp saves objective value of last descent */

7.  Desc ← 0  /* Desc saves current number of descents */

8.  Iter ← 0   /*global iteration counter*/

9. while Desc < Descmax do

10.     while ∃ 2optMove(x,y) such that (c + delta2Opt(π,x,y) < c) do

11.       π ← π ⨁ 2OptMove(x,y) /* perform the best improving move*/

12.       c ← c + delta2Opt(π,x,y) /* cost variation of π with (x,y) move*/

13.       Update the matrix H with the iteration number at which the edge move was performed

14.       Iter ← Iter+1

15.     end while

16.     if c < cbest then

17.       Update πbest and cbest

18.       ω ← 0   /* reset counter for consecutive non-improving local optima */

19.    else

20.       ω ← ω+1

21.    end if

22.    if ω > T then

23.      π ← Perturbation(π, Lmax, H, Iter, ω, M) /* performing strong perturbation with Lmax moves */

24.      ω ← 0

25.    else if c = cp then /* search returned the previous local optimum */

26.      L ← L+1  /* increment jumps if the previous perturbation didn’t help to escape */

27.    else  /*Search escaped from the previous local optimum, reinitialize indicator */

28.       L ← L0

29.    end if

30.    cp ← c      /* update the objective value of the previous local optimum */

31.    if strong perturbation not performed then

32.      π ← Perturbation(π, L, H, Iter, ω, M) /* see algorithm 2 */

33.    end if

34.  end while

35.  return πbest
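The framework above can be sketched compactly in Python, with the problem-specific parts (descent and perturbation) injected as functions. This is an illustrative skeleton, not the authors' code: the 2-opt descent of Algorithm 1 is abstracted into a generic `local_search`, and the demo minimizes inversions of a list instead of a GTSP tour.

```python
import random

def breakout_local_search(init, local_search, perturb, objective,
                          desc_max=50, L0=1, L_max=6, T=5):
    """Sketch of the BLS loop: descend, track the best solution, and
    adapt the jump count L and the non-improvement counter omega."""
    pi = local_search(init)
    best, c_best = pi, objective(pi)
    c_prev = c_best                 # objective value of the previous descent
    L, omega = L0, 0
    for _ in range(desc_max):
        c = objective(pi)
        if c < c_best:
            best, c_best, omega = pi, c, 0   # improvement: reset counter
        else:
            omega += 1
        if omega > T:
            pi = perturb(pi, L_max)          # strong perturbation
            omega = 0
        else:
            # same value as the last descent: the search came back, jump further
            L = L + 1 if c == c_prev else L0
            pi = perturb(pi, L)
        c_prev = c
        pi = local_search(pi)
    return best, c_best

random.seed(0)
inversions = lambda x: sum(x[i] > x[j]
                           for i in range(len(x)) for j in range(i + 1, len(x)))

def bubble_descent(x):              # adjacent-swap local search
    x, swapped = list(x), True
    while swapped:
        swapped = False
        for i in range(len(x) - 1):
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]
                swapped = True
    return x

def random_jumps(x, L):             # L random transpositions
    x = list(x)
    for _ in range(L):
        i, j = random.randrange(len(x)), random.randrange(len(x))
        x[i], x[j] = x[j], x[i]
    return x

best, cost = breakout_local_search([3, 1, 4, 1, 5, 9, 2, 6],
                                   bubble_descent, random_jumps, inversions)
```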

Every local search ends in a local optimum and needs diversification to explore a new neighborhood. Perturbations are an important step in the ILS framework (and thus for BLS too), since they allow jumping to a new area of the search space. The distance between this area and the current local optimum depends on the nature of the perturbation. BLS introduces well-adapted adaptive perturbations, defined by the nodes/cities involved in the moves and by the number of perturbations to perform before starting a new local search.

Principle of Adaptive Perturbations

Main Idea

Perturbations in the ILS framework must be well chosen. Indeed, perturbations with low intensity allow a gradual exploration of the search space, but may be insufficient when the local optimum is attractive, which can cause stagnation in the current neighborhood. Strong perturbations ensure escape from the neighborhood but may amount to a new random start. BLS proposes perturbations that adapt to the search state: the number of jumps and the moves depend on how long the search has been trapped in the current neighborhood.

Instead of performing arbitrary moves in each iteration, BLS comes with three different perturbations: (i) directed, (ii) recency-based and (iii) random. Each one is called in specific circumstances, following a probability, and generates a set M of moves that will be used during the perturbation, under the constraint that the resulting tour must be feasible.

The Three Types of Perturbation Moves

The directed perturbation is designed to select candidate moves resulting in a new solution with minimal degradation. These moves should not belong to a tabu list (Glover Citation1989, Citation1990) (whose length is γ) unless one of them produces a new solution whose fitness is better than the best found so far. Candidates for the directed perturbation are given by the set A:

(2) A = {swapMove(u, v) | min delta2Opt(π, u, v), (H_uv + γ < Iter) ∨ (deltaSwap(π, u, v) + c < cbest), u ≠ v}

The recency-based perturbation is, as its name indicates, built from the history of moves saved in H. The selected moves are the oldest ones, or those that have never been used, if any. This candidate set is defined by the set B, where

(3) B = {swapMove(u, v) | min H_uv, u ≠ v}

The random perturbation is the third and last of the BLS adaptive perturbations. Candidates (C) are chosen randomly, without criteria:

(4) C = {swapMove(u, v) | u ≠ v}

BLS selects one of the three perturbations described above for each jump. The selection is pseudo-random, following a probability computed from ω and T. The directed perturbation has a high probability of being selected when the current neighborhood was discovered recently (ω is then small). This probability becomes lower when the local optimum appears more attractive (ω tends toward T); recency-based and random perturbations are then favored to offer stronger diversification.

The directed perturbation is used with a probability P bounded below by a threshold P0 (Eq. (5)), while the recency-based and random perturbations are chosen with probabilities (1 − P) × Q and (1 − P) × (1 − Q) respectively, where Q is a constant in [0, 1].

(5) P = e^(−ω/T) if e^(−ω/T) > P0; P0 otherwise
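The selection rule of Eq. (5) and the (1 − P) × Q split can be sketched as follows. The P0 and Q values are illustrative defaults, not the paper's tuned parameters:

```python
import math
import random

def choose_perturbation(omega, T, P0=0.75, Q=0.5, rng=random):
    """Pseudo-random selection among the three move types. P follows
    Eq. (5): close to 1 while omega is small, floored at P0 as the
    local optimum proves attractive."""
    p = max(math.exp(-omega / T), P0)   # probability of a directed move
    if rng.random() < p:
        return "directed"
    # remaining probability mass is split (1-P)*Q vs (1-P)*(1-Q)
    return "recency" if rng.random() < Q else "random"

# omega = 0 gives p = 1: a freshly discovered neighborhood always
# uses the directed perturbation.
print(choose_perturbation(0, 5))  # directed
```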

From a local optimum π, BLS performs a sequence of L jumps, each selecting its move from a set of candidates built by one of the three formulas (2–4). The current solution and the move history are updated after each jump, and whenever BLS meets a solution that improves the best one found so far, πbest is updated and ω is reset to zero.

Algorithm 2: Perturbation(π, L, H, Iter, ω, M)

Require: π: current solution (a local optimum), L: number of jumps, H: matrix of move history, Iter: global iteration counter, ω: number of consecutive non-improving local optima, M: set of candidate moves

Ensure:       A perturbed solution π

1.      for i from 1 to L do

2.         take a pair(x,y) ∈ M

3.         update π, c, and H

4.         Iter ← Iter + 1

5.         if c < cbest then

6.            πbest ← π

7.           cbest ← c

8.           ω ← 0

9.         end if

10.     end for

In each iteration of the perturbation process, BLS builds the set M of candidates according to one of the three perturbations. The directed and recency-based perturbations need to browse the whole history matrix, which costs O(n²). Building the set M L times during a single perturbation impacts the runtime, especially for large instances, and makes BLS very slow.

Instead of scanning the whole matrix H, our adaptation for the GTSP picks N moves at random and keeps those that satisfy the selected perturbation. This improvement decreases the complexity of each jump from O(n²) to O(n) and contributes substantially to reducing the BLS runtime.
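The sampling idea can be sketched as below. The predicate `accepts` stands in for whichever condition the selected perturbation imposes (tabu/aspiration test for directed, oldest entries of H for recency-based); all names are illustrative:

```python
import random

def sample_candidates(n, accepts, N, rng=random):
    """Build the candidate set M by drawing N random swap moves and
    keeping those satisfying the current perturbation's predicate,
    instead of scanning the full n x n history matrix. Each jump thus
    costs O(N) = O(n) rather than O(n^2)."""
    M = []
    for _ in range(N):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and accepts(u, v):
            M.append((u, v))
    return M

random.seed(2)
# Toy predicate standing in for the history test: accept any move
# touching node 0.
moves = sample_candidates(10, lambda u, v: u == 0 or v == 0, N=50)
```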

The Memetic BLS

We propose in our work a memetic algorithm (MA) combining the BLS studied in Subsection 3.1 with the benefits of a genetic algorithm (GA) (Davis Citation1991).

A GA follows the natural mechanism of selection. It consists of constructing a population of candidate solutions, called chromosomes, which will be the first generation. The population converges gradually from a generation of chromosomes to new ones to reach a final solution having the best objective value from all met candidates.

The generations evolve at each iteration of the GA heuristic. Several procedures are run each time on the chromosomes to enhance the population by finding better individuals that can improve the overall fitness and become the best solution found so far. We describe in this subsection all the components of our MA framework.

Initialization

m/2 solutions are generated to constitute the population, where m is the number of clusters. Each solution is created with a semi-random construction heuristic: generate a random permutation of the clusters, then select the best node in each cluster, which can be done using the Cluster Optimization heuristic (Fischetti, Salazar Gonzalez, and Toth Citation1997).
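A minimal sketch of this construction, under simplifying assumptions: clusters are lists of nodes, and the node choice below is greedy, whereas the actual Cluster Optimization heuristic selects the nodes optimally for a fixed cluster order (via shortest paths). Names are ours:

```python
import random

def semi_random_solution(clusters, cost, rng=random):
    """Semi-random construction: draw a random permutation of the
    clusters, then pick one node per cluster (greedy simplification
    of the Cluster Optimization step)."""
    order = list(clusters)
    rng.shuffle(order)
    tour = [order[0][0]]                 # arbitrary node of the first cluster
    for cluster in order[1:]:
        # cheapest node to reach from the previously chosen one
        tour.append(min(cluster, key=lambda v: cost(tour[-1], v)))
    return tour

random.seed(3)
clusters = [[0, 1], [10, 11], [20, 21]]
tour = semi_random_solution(clusters, lambda u, v: abs(u - v))
```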

Improvements

Improvements in MA algorithms are usually made with one or many local search procedures. Each solution in the population will be improved by BLS in order to get a population with good quality solutions.

Crossover

The crossover operator produces new solutions for the population by coupling two existing solutions (S1 and S2) belonging to the previous generation and selected following a strategy. Uniform crossover (UC) (Syswerda Citation1989) is used in our memetic approach: it produces two children from S1 and S2, which are picked from the population by tournament selection (Miller and Goldberg Citation1995), each one being selected among three solutions picked randomly.
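The two operators can be sketched generically as below. This is the textbook form of both operators, not the paper's GTSP-specific implementation; applying uniform crossover to GTSP tours additionally requires keeping the offspring feasible (one node per cluster):

```python
import random

def tournament(population, fitness, k=3, rng=random):
    """Tournament selection: the best (lowest-fitness) of k randomly
    drawn individuals wins."""
    return min(rng.sample(population, k), key=fitness)

def uniform_crossover(s1, s2, rng=random):
    """Uniform crossover: each gene comes from either parent with
    probability 1/2, giving two complementary children."""
    c1, c2 = [], []
    for g1, g2 in zip(s1, s2):
        if rng.random() < 0.5:
            c1.append(g1); c2.append(g2)
        else:
            c1.append(g2); c2.append(g1)
    return c1, c2

random.seed(4)
child1, child2 = uniform_crossover([1, 2, 3, 4], [5, 6, 7, 8])
```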

Mutation

The mutation is a genetic operator that alters some solutions in the population in order to reduce the probability of premature convergence by maintaining a diversified population. The mutation is performed on a solution without any criteria and may naturally lead to a worse solution.

We use the double bridge move (Martin, Otto, and Felten Citation1991) as the mutation operator in our memetic approach, in order to minimize the chances of returning to the same solution after the improvement step.
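The double bridge move cuts the tour at three points and reconnects the four segments in a different order; a sketch on a plain list of nodes:

```python
import random

def double_bridge(tour, rng=random):
    """Double bridge mutation: cut the tour at three random points
    (A|B|C|D) and reconnect the segments as A C B D. This 4-opt move
    cannot be undone by a single 2-opt step, which lowers the chance
    of falling back into the same local optimum."""
    n = len(tour)
    i, j, k = sorted(rng.sample(range(1, n), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

random.seed(5)
mutated = double_bridge(list(range(8)))
```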

Termination Condition

We stop producing new generations once we find the best-known fitness for the concerned instance I. Otherwise, the maximum number of generations is equal to the number of clusters in I. The output solution is the best solution in the last generation.

Experimental Results

Experimental Protocol

Memetic BLS is implemented in Java 1.7 and run on a Pentium Dual-Core CPU T4400 at 2.20 GHz with 2.8 GB of memory. We solve 39 instances from the TSPLIB benchmark, converted to GTSP instances by the standard clustering procedure (Fischetti, Salazar Gonzalez, and Toth Citation1997), with sizes ranging from 10 clusters and 48 nodes to 89 clusters and 442 nodes; each instance is run 20 times.

Computational Results and Comparisons

During our tests, Memetic BLS is compared with another memetic approach by Bontoux, Artigues, and Feillet (Citation2010) and a random-key genetic algorithm proposed by Snyder and Daskin (Citation2006). Listed below are the performance measures used for evaluating our Memetic BLS and comparing it with the branch-and-cut runtime (Fischetti, Salazar Gonzalez, and Toth Citation1997) and the two other approaches.

  • (i) The average deviation of the obtained solutions from the best known solution, denoted dev: dev = 100 × (favg − best)/best (%), where favg is the average fitness of the 20 solutions obtained from 20 runs of Memetic BLS, and best is the best known fitness.

  • (ii) The CPU time in seconds.
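Measure (i) is a one-line computation; the numbers in the example are made up for illustration:

```python
def deviation(f_avg, best):
    """Average percentage deviation from the best known fitness:
    dev = 100 x (f_avg - best) / best."""
    return 100.0 * (f_avg - best) / best

# A run averaging 5020 against a best known value of 5000.
print(round(deviation(5020.0, 5000.0), 2))  # 0.4
```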

Table 1. Comparative results between BLS and standard local search using 2-opt.

The experimental results of Memetic BLS are quite satisfying. Our approach reaches the optimal solution for 35 of the 39 studied instances, while the average deviation (over all instances) is 0.04%, which makes our method competitive with the others used in the comparison. On the other hand, Memetic BLS appears greedy in runtime, which is not surprising for BLS (Benlic and Hao Citation2013b). Recall that BLS has been improved in this work; without this improvement, the runtime would be much higher.

Conclusion

We presented in this paper a new memetic approach for solving the GTSP, in which the BLS metaheuristic is used in the improvement stage. BLS is a high-performing metaheuristic that has proven itself on many COPs. The results given in all related works (including this one) are very convincing, especially in terms of fitness and deviation from the optimal solution.

The results exposed in the previous section demonstrate the competitiveness of the memetic BLS, which finds optimal solutions for most of the studied instances. The comparative study consolidates the good performance of our work. The metaheuristic's complexity remains restrictive because of some heavy components, which can be considered a drawback depending on decision makers. We therefore proposed an improvement to the perturbation component, reducing its complexity from O(n²) to O(n).

References

  • Ben-Arieh, D., G. Gutin, M. Penn, A. Yeo, and A. Zverovitch. 2003. Transformations of Generalized ATSP into ATSP. Operations Research Letters 31 (5):357–65. Elsevier. doi:10.1016/S0167-6377(03)00031-2.
  • Benlic, U., and J.-K. Hao. 2012. A study of breakout local search for the minimum sum coloring problem. In Simulated evolution and learning, 128–37. Springer.
  • Benlic, U., and J.-K. Hao. 2013a. Breakout local search for maximum clique problems. Computers & Operations Research 40 (1):192–206. Elsevier. doi:10.1016/j.cor.2012.06.002.
  • Benlic, U., and J.-K. Hao. 2013b. Breakout local search for the quadratic assignment problem. Applied Mathematics and Computation 219 (9):4800–15. Elsevier. doi:10.1016/j.amc.2012.10.106.
  • Benlic, U., and J.-K. Hao. 2013c. Breakout local search for the max-cut problem. Engineering Applications of Artificial Intelligence 26 (3):1162–73. Elsevier. doi:10.1016/j.engappai.2012.09.001.
  • Bontoux, B., C. Artigues, and D. Feillet. 2010. A memetic algorithm with a large neighborhood crossover operator for the generalized traveling salesman problem. Computers & Operations Research 37 (11):1844–52. Elsevier. doi:10.1016/j.cor.2009.05.004.
  • Davis, L. 1991. Handbook of Genetic Algorithms.
  • Fischetti, M., J. J. Salazar Gonzalez, and P. Toth. 1997. A branch-and-cut algorithm for the symmetric generalized traveling salesman problem. Operations Research 45 (3):378–94. INFORMS. doi:10.1287/opre.45.3.378.
  • Fu, Z.-H., and J.-K. Hao. 2014. Breakout local search for the steiner tree problem with revenue, budget and hop constraints. European Journal of Operational Research 232 (1):209–20. Elsevier. doi:10.1016/j.ejor.2013.06.048.
  • Ghandi, S., and E. Masehian. 2015. A Breakout Local Search (BLS) method for solving the assembly sequence planning problem. Engineering Applications of Artificial Intelligence 39:245–66. Elsevier. doi:10.1016/j.engappai.2014.12.009.
  • Glover, F. 1989. Tabu search - Part I. ORSA Journal on Computing 1 (3):190–206. INFORMS. doi:10.1287/ijoc.1.3.190.
  • Glover, F. 1990. Tabu search - Part II. ORSA Journal on Computing 2 (1):4–32. INFORMS. doi:10.1287/ijoc.2.1.4.
  • Gutin, G., and D. Karapetyan. 2010. A memetic algorithm for the generalized traveling salesman problem. Natural Computing 9 (1):47–60. Springer. doi:10.1007/s11047-009-9111-6.
  • Gutin, G., D. Karapetyan, and N. Krasnogor. 2008. Memetic algorithm for the generalized asymmetric traveling salesman problem. In Nature inspired cooperative strategies for optimization (NICSO 2007), 199–210. Springer.
  • Hart, W. E., N. Krasnogor, and J. E. Smith. 2004. Recent advances in memetic algorithms, vol. 166. Springer Science & Business Media.
  • Henry-Labordere, A. L. 1969. The record balancing problem: A dynamic programming solution of a generalized travelling salesman problem. RIRO B-2:43–49.
  • İlhan, İ. 2017. An application on mobile devices with Android and iOS operating systems using Google Maps APIs for the traveling salesman problem. Applied Artificial Intelligence 1–14. Taylor & Francis. doi:10.1080/08839514.2017.1339983.
  • Karapetyan, D., and G. Gutin. 2011. Lin–Kernighan heuristic adaptations for the generalized traveling salesman problem. European Journal of Operational Research 208 (3):221–32. Elsevier. doi:10.1016/j.ejor.2010.08.011.
  • Karapetyan, D., and G. Gutin. 2012. Efficient local search algorithms for known and new neighborhoods for the generalized traveling salesman problem. European Journal of Operational Research 219 (2):234–51. Elsevier. doi:10.1016/j.ejor.2012.01.011.
  • Laporte, G., A. Asef-Vaziri, and C. Sriskandarajah. 1996. Some applications of the generalized travelling salesman problem. Journal of the Operational Research Society 47 (12):1461–67. Springer. doi:10.1057/jors.1996.190.
  • Laporte, G., and F. Semet. 1999. Computational evaluation of a transformation procedure for the symmetric generalized traveling salesman problem. INFOR: Information Systems and Operational Research 37 (2):114–20. Taylor & Francis.
  • Laporte, G., H. Mercure, and Y. Nobert. 1987. Generalized travelling salesman problem through N sets of nodes: The asymmetrical case. Discrete Applied Mathematics 18 (2):185–97. Elsevier. doi:10.1016/0166-218X(87)90020-5.
  • Lourenço, H. R., O. C. Martin, and T. Stutzle. 2001. Iterated local search. arXiv Preprint Math/0102188.
  • Martin, O., S. W. Otto, and E. W. Felten. 1991. Large-step Markov chains for the traveling salesman problem. Oregon Graduate Institute of Science and Technology, Department of Computer Science and Engineering.
  • Miller, B. L., and D. E. Goldberg. 1995. Genetic algorithms, tournament selection, and the effects of noise. Complex Systems 9 (3):193–212. Complex Systems Publications.
  • Noon, C. E., and J. C. Bean. 1991. A lagrangian based approach for the asymmetric generalized traveling salesman problem. Operations Research 39 (4):623–32. INFORMS. doi:10.1287/opre.39.4.623.
  • Noon, C. E., and J. C. Bean. 1993. An efficient transformation of the generalized traveling salesman problem. INFOR: Information Systems and Operational Research 31 (1):39–44. Taylor & Francis.
  • Pourhassan, M., and F. Neumann. 2015. On the impact of local search operators and variable neighbourhood search for the generalized travelling salesperson problem. In Proceedings of the 2015 annual conference on genetic and evolutionary computation, 465–72.
  • Reinelt, G. 1991. TSPLIB - A traveling salesman problem library. ORSA Journal on Computing 3 (4):376–84. INFORMS. doi:10.1287/ijoc.3.4.376.
  • Silberholz, J., and B. Golden. 2007. The generalized traveling salesman problem: A new genetic algorithm approach. In Extending the horizons: Advances in computing, optimization, and decision technologies, 165–81. Springer.
  • Smith, S. L., and F. Imeson. 2017. GLNS: An effective large neighborhood search heuristic for the generalized traveling salesman problem. Computers & Operations Research 87:1–19. Elsevier. doi:10.1016/j.cor.2017.05.010.
  • Snyder, L. V., and M. S. Daskin. 2006. A random-key genetic algorithm for the generalized traveling salesman problem. European Journal of Operational Research 174 (1):38–53. Elsevier. doi:10.1016/j.ejor.2004.09.057.
  • Srivastava, S. S., S. Kumar, R. C. Garg, and P. Sen. 1969. Generalized travelling salesman problem through N Sets of Nodes. CORS Journal 7:97–101.
  • Syswerda, G. 1989. Uniform crossover in genetic algorithms. In Proceedings of the 3rd international conference on genetic algorithms, 2–9. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. http://dl.acm.org/citation.cfm?id=645512.657265.
