
GACO – A HYBRID ANT COLONY OPTIMIZATION METAHEURISTIC FOR THE DYNAMIC LOAD-BALANCED CLUSTERING PROBLEM IN AD HOC NETWORKS

Pages 570-598 | Published online: 01 Oct 2009

Abstract

We present the design of a novel hybrid genetic ant colony optimization (GACO) metaheuristic. Genetic ant colony optimization is designed to address the dynamic load-balanced clustering problem formulated from ad hoc networks. The main algorithm in GACO is ACO. Whenever an environment change occurs (i.e., nodes move), it makes use of a genetic algorithm to quickly achieve adaptation by refocusing the search process around promising areas of the search space induced by the new problem structure. Compared to other ACO approaches for dynamic problems, GACO does not depend on any problem-specific heuristics to repair or deconstruct solutions. Genetic ant colony optimization also does not require the knowledge of the specific changes that occurred. We compare GACO with three other adaptation methods, namely, P-ACO, PAdapt, and GreedyAnts. P-ACO is a population-based ACO approach that invokes a repair algorithm on its population of solutions when an environment change occurs. PAdapt works by adapting the values of major ACO parameters, while GreedyAnts employs a group of ants that construct solutions in a greedy manner. Empirical results show that GACO is able to react and recover faster from any performance degradation. Genetic ant colony optimization also produces better solutions within the allowable recovery window. These results are shown to be statistically significant.

The ant colony optimization (ACO) metaheuristic (Dorigo and Di Caro, 1999; Dorigo, Di Caro, and Gambardella, 1999) is a stochastic search method based on the foraging behavior of natural ants. Ant colony optimization belongs to a larger family of techniques called swarm intelligence (Bonabeau and Théraulaz, 2000). Ant colony optimization can be viewed as an iterative multi-agent approach to problem solving. During each iteration, each individual ant (i.e., the agent) constructs a solution. Each ant then shares its solution construction experience with the entire colony by updating a global data structure called the pheromone matrix. This data structure simulates the pheromone trails that natural ants produce while locating foraging areas. Each entry in the pheromone matrix represents the desirability of a solution component and is referred to as its pheromone intensity. At the end of each iteration, the pheromone associated with each solution component is reinforced based on the quality of the solutions that contain that component. In subsequent iterations, ants use the pheromone intensities of available solution components to guide solution construction. As a result of repeated pheromone reinforcements, a subset of solution components emerges with pheromone intensities much higher than the others. At this point, ants construct identical or nearly identical solutions using these highly desirable components. When this occurs, the ACO search is said to have converged.

We differentiate research efforts concerning ACO according to whether the combinatorial optimization (CO) problem being solved is static or dynamic. A static problem is one where the structural definition and problem-related data remain the same as the problem is being solved. Numerous applications of the ACO metaheuristic to popular static problems have been widely published. These include the traveling salesman problem (Dorigo and Gambardella, 1996), the quadratic assignment problem (Maniezzo and Colorni, 1999), job-shop scheduling (Colorni, Dorigo, Maniezzo, and Trubian, 1994), vehicle routing (Gambardella, Taillard, and Agazzi, 1999), sequential ordering (Gambardella and Dorigo, 1997), graph coloring (Costa and Hertz, 1997), and the shortest common supersequence problem (Michel and Middendorf, 1998). More recently, ACO has been used to learn rule-based classifiers (Martens et al., 2007). Ant colony optimization has also been used to tackle intractable problems in bioinformatics; one such example is the work by Shmygelska and Hoos (2005) on a protein-folding problem.

In a dynamic problem, changes in the problem structure or problem-related data (referred to as environmental changes in the rest of the article) take place while the problem is being solved. The earliest applications of ACO to dynamic problems focused on problems arising from communication networks, most notably routing (Di Caro and Dorigo, 1998a; Subramanian, Druschel, and Chen, 1997; Schoonderwoerd, Holland, and Bruten, 1997; Di Caro and Dorigo, 1998b). Note that these ACO algorithms for routing are distributed in nature. Of late, however, there has been increasing interest in enhancing ACO algorithms for dynamic CO problems that take a centralized approach by making use of global information. Examples of such efforts can be found in Eyckelhof and Snoekm (2002), Weicker (2002), and Ho and Ewe (2005b).

When an environmental change occurs, the collective search experience encoded by the pheromone matrix does not accurately portray the desirability of the solution components with respect to the new problem structure. When this happens, the solution quality degrades. Hence, the ACO metaheuristic must be able to adapt to the new problem structure effectively. The capability to adapt after an environment change is measured according to the following criteria:

Is the metaheuristic able to produce solutions whose quality is within an acceptable range from a predetermined threshold? If so, we say that it is able to recover.

How fast (measured in iterations) can the metaheuristic recover? This measure is known as reactivity (Weicker, 2002).

Is the metaheuristic able to produce good solutions at the point of recovery and throughout the subsequent iterations?

In this work, we develop a novel hybrid ACO approach called genetic ACO (GACO) to attack the dynamic load-balanced clustering problem arising from ad hoc networks (Toh, 2001). The base algorithm of GACO is essentially ACO. To enable it to adapt to an environment change in a timely manner with good quality solutions, it employs the search intensification capability of a genetic algorithm (GA). We describe the GACO model, with particular focus on how the pheromone matrix of ACO is used to initialize the GA, and subsequently, how the best GA solution is used to bias the ACO solution construction process. As a precursor to this work, we have studied the performance of ACO in the presence of environment changes (Ho and Ewe, 2005b). In another recent work, we have demonstrated a positive synergy between ACO and GA in addressing the minimum dominating set problem (Ho, Singh, and Ewe, 2006).

PROBLEM DESCRIPTION

An ad hoc network containing N nodes can be modeled as an undirected graph G = (V, E), where V is the set of vertices representing the nodes of the network and E is the set of edges. Let ‖ij‖ be the Euclidean distance between nodes i and j. An edge e_ij ∈ E exists between i and j if they can receive each other's transmission, i.e., if ‖ij‖ ≤ R, where R is the transmission range of the nodes. Nodes are organized into clusters to enhance manageability (Gerla and Tsai, 1995). Communication activities in a cluster are coordinated by a node that plays the role of cluster head (CH). Every node that is not a CH is at most one hop away from its CH. When organized in this way, the set of CHs forms a dominating set (Chartrand and Lesniak, 2005). Ad hoc networks with a centralized architecture have also been proposed and constructed (Oikonomou et al., 2003; Vaios, Oikonomou, and Stavrakakis, 2003; Habetha and Nadler, 2001). These networks are based on the wireless HiperLAN/2 technology. In such networks, clusters are computed by the access point based on topological information.

The problem of load-balancing the clusters arises due to the limitation of resources available to CHs. Cluster heads can efficiently support up to a certain number of nodes. It is desirable to have each CH support about the same number of nodes. We define the load of a CH as the number of nodes it has in its cluster. This is based on the fact that in ad hoc networks, the work load of a CH is directly proportional to its cluster size. Therefore, the task of constructing load-balanced clusters can be formulated as follows:

Find a set of dominating nodes V′ (the CHs) such that the nodes in V − V′ can be distributed as evenly as possible among the members of V′ subject to the constraint that the number of dominated nodes per CH is at most max_load.

The quality of the constructed clusters can then be measured by the variance of the loads of the CHs, and is given by the following:

LBF = (1/n_c) Σ_{i=1}^{n_c} (x_i − μ)²    (1)

where n_c is the number of clusters, x_i is the load of CH i, and μ = (N − n_c)/n_c. A smaller load balancing factor (LBF) signifies better load distribution.

RELATED WORK – ACO FOR DYNAMIC ENVIRONMENTS

Early work in this area saw researchers propose techniques that directly modify the pheromone information to reflect the new problem structure. One such effort was reported in Guntsch and Middendorf (2001), which proposed three pheromone equalization methods for addressing the dynamic traveling salesman problem (TSP). The first two, known as the η-strategy and the τ-strategy, perform local pheromone modifications. Both strategies rely on knowledge of where the change in the problem structure occurred. The third strategy, known as the restart strategy, performs a global pheromone modification by reinitializing all pheromone values by the same computed quantity. This global strategy produced very good results even though it does not need to know the changes that occurred. However, the authors commented that the success of the global strategy may be due to the very minute environment change: only one city was inserted or deleted.

The population-based ACO (P-ACO) (Guntsch and Middendorf, 2002a), which was originally proposed for static problems, has been demonstrated to be promising in addressing dynamic problems. It maintains a population of solutions that determines the updates to the pheromone intensities. Initially, the population is empty. The best solution from each cycle is added to the population until the maximum population size is reached. After that, a strategy is used to replace a population member with a new solution from the latest cycle. When a population member is removed, the pheromone contributed by it is also subtracted from the pheromone matrix. This gives P-ACO the ability to undo previous pheromone updates. In a subsequent work (Guntsch and Middendorf, 2002b), the authors showed how the original P-ACO can be adapted to dynamic problems. Whenever a change in the problem structure occurs, each solution in the population is repaired using a problem-specific heuristic. Changes to the population are then effected back to the pheromone matrix.

P-ACO looks promising for dynamic problems provided that the dynamic changes to the problem are not too severe. Furthermore, it is assumed that good problem-specific heuristics are always available for repairing the solutions in the population after a change occurs. The ability to repair solutions implies, firstly, the need to explicitly identify the exact changes that occurred and, secondly, the need to determine how these changes affect the current solutions in the population. The success of these repair techniques depends highly on how accurately the latter can be done.

In a more recent work, Randall (2005) proposed a solution deconstruction strategy for making ACO adapt better to environment changes. When a change in the environment occurs, the deconstruction process is performed on the best solution from the current colony. This process is iterative. In each iteration, a solution component from the best solution is selected based on the nature of the event that occurred, and this component is removed from the solution. If this action results in worse constraint violations, the solution component is reinserted and another component is tried. This process continues until zero constraint violation is achieved. In the worst case, all solution components making up the best solution will be processed. Compared to P-ACO, the solution deconstruction strategy can work with partially constructed solutions instead of complete solutions. Solution deconstruction resembles P-ACO in that it also performs a local repair on an existing elite solution. As with the problem-specific heuristics employed by P-ACO, the solution deconstruction process must be able to adequately identify interactions between solution components. This is necessary to gauge the net effect of adding, removing, or moving a particular solution component.

There are also a number of other efforts that have successfully applied ACO to dynamic optimization problems. Montemanni, Gambardella, Rizzoli, and Donati (2003) developed an ant colony system-based algorithm for the dynamic vehicle routing problem. The algorithm dynamically places new orders into an evolving schedule. Garlick and Barr (2002) proposed the use of ACO to perform dynamic wavelength routing in Wavelength Division Multiplexing (WDM) networks. The proposed algorithm considers dynamic traffic and attempts to minimize the total network connection blocking. In a more recent work, Camilo, Carreto, Sa Silva, and Boavida (2006) presented an ant-based routing algorithm for energy-efficient routing in wireless sensor networks in order to maximize network lifetime. In relation to ad hoc networks, AntHocNet (Di Caro, Ducatelle, and Gambardella, 2004, 2006) is an ant-based routing algorithm that has the potential to outperform conventional routing protocols under certain conditions.

THE GACO APPROACH

Genetic ACO is a hybrid metaheuristic that incorporates an ACO algorithm we have previously proposed for the static version of the load-balanced clustering problem (Ho and Ewe, 2005a). In the context of GACO, we refer to it as the base algorithm. We begin by describing this base algorithm, and then the architecture of GACO and the interactions between its major components. We then elaborate on the main concepts behind GACO, in particular the interfaces between the GA and ACO. We also describe the details of the GA used within GACO.

Overview of the Base ACO Algorithm for GACO

The first step in designing an ACO algorithm is to determine the solution components. For this problem, the solution components are the network nodes. Hence, pheromones are associated with the nodes. The pheromone level of a node v at iteration t is represented as τ_v(t). Each node is also associated with another quantity called heuristic information, represented as η_v for each node v. The heuristic information is usually obtained via a greedy heuristic. In an iteration, an ant incrementally constructs a solution, starting out with the empty solution. At each step of an iteration, an ant expands its partial solution by probabilistically selecting the next solution component from a pool of candidates. The selection probability of a solution component is proportional to the product of its pheromone level and heuristic information. When a solution component is selected, it is designated a CH, and a heuristic is invoked to determine its cluster members. At the end of the iteration, the pheromone level of every node is reinforced by the iteration-best ant. Apart from reinforcement, the pheromones are also evaporated by an amount controlled by the evaporation rate, represented as ρ. Details of the heuristic information, the selection of the next solution component, and the heuristic for determining cluster membership are presented below.

The Heuristic Information

Each node is assigned a weight that indicates the number of nodes it can cover (i.e., dominate) if it is selected as a CH. The weight of each node v is denoted weight_v, and is initialized to its degree. The variance of the CHs' load can be reduced if a majority of the CHs have loads that are closest to the median of the possible loads 1, 2, …, max_load. Based on this idea, ACO computes the heuristic information η_v associated with node v at t_r (that is, at step r of iteration t) using Equation (2). weight_v(t_r) denotes the weight of node v at t_r.

In Equation (2), R and S are constants, and med denotes the median of the possible loads 1, 2, …, max_load. Nodes whose weight equals med have the largest chance of being selected; the heuristic information decreases in proportion to the distance of weight_v(t_r) from med. This applies if weight_v(t_r) ≤ max_load. Nodes whose weight is larger than max_load have smaller heuristic information, and hence a smaller chance of being selected. The heuristic information is dynamic: as neighbors of a node are covered, the weight of that node decreases.
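
The exact form of Equation (2) is not reproduced in this version of the text, so the following is only a minimal sketch of a heuristic with the behavior described above: it peaks when weight_v equals the median load and penalizes weights above max_load. The function name, the linear shape, and the default values of R and S are assumptions made for illustration.

    def heuristic_info(weight_v, max_load, R=10.0, S=0.5):
        """Illustrative heuristic: largest when weight_v equals the median of
        the possible loads 1..max_load, decreasing with distance from that
        median; weights above max_load are penalized further (assumed form)."""
        med = (1 + max_load) / 2.0           # median of the possible loads
        if weight_v <= max_load:
            return max(R - abs(weight_v - med), S)
        # nodes whose weight exceeds max_load become less attractive
        return max(S - (weight_v - max_load), 0.1)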

Selecting the Next Solution Component

Taking into account the pheromone and heuristic information, an ant k will select node v to be included in its partially constructed solution S_k(t_r) at step r of iteration t with probability:

p_v^k(t_r) = [τ_v(t)]^α [η_v(t_r)]^β / Σ_{u ∈ allowed_k(t_r)} [τ_u(t)]^α [η_u(t_r)]^β,  for v ∈ allowed_k(t_r)    (3)

where α and β control the relative importance of the pheromone and heuristic information, respectively. The term allowed_k(t_r) is the set of nodes that ant k can choose from.
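
As a concrete illustration of this selection rule, the sketch below performs the standard random-proportional (roulette-wheel) choice over the allowed nodes; the data structures (dictionaries of pheromone and heuristic values) and the parameter defaults are assumptions, not the authors' implementation.

    import random

    def select_next_node(allowed, tau, eta, alpha=1.0, beta=3.0):
        """Roulette-wheel selection: the weight of node v is proportional to
        tau[v]**alpha * eta[v]**beta, as in Equation (3)."""
        weights = [tau[v] ** alpha * eta[v] ** beta for v in allowed]
        total = sum(weights)
        if total == 0:
            return random.choice(allowed)
        r = random.uniform(0, total)
        acc = 0.0
        for v, w in zip(allowed, weights):
            acc += w
            if acc >= r:
                return v
        return allowed[-1]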

Heuristic for Determining Cluster Membership

After selecting node v as a CH, the next step is to determine which of v's neighbors should become its cluster members. If weight_v ≤ max_load, then all uncovered neighbors of v can become its cluster members without violating the maximum load constraint. If weight_v > max_load, then we need to determine which neighbor(s) should or should not be covered by v. The heuristic will first check each uncovered neighbor of v to detect two boundary cases.

FIGURE 1 The two boundary cases in determining cluster membership. (a) Boundary case 1; (b) Boundary case 2.

Boundary case 1: This is depicted in Figure 1(a). Assume that all of v's neighbors are uncovered and have therefore been added to its cluster members. If max_load = 3 (also assume this value for the second boundary case), then one of the nodes has to be removed. If there exists a node x as in Figure 1(a), it should not be removed, to avoid the formation of a singleton cluster (a cluster with one member only, i.e., the CH). The formal description for case 1 is given below:

Here, weight(x) refers to the weight of node x, neighbor(x, v) states that x and v are neighbors, and cluster_member(x, v) states that x belongs to the cluster in which v is the CH.


Boundary case 2: This is depicted in Figure 1(b). Again, node v is selected as CH. Notice that if node b is dominated by v, then node d will have to form a singleton cluster. In this case, we disallow b from being dominated by v and allow b and d to form a cluster. The formal definition for case 2 is given below.

If the maximum load constraint is still violated after checking for the boundary cases, we arbitrarily select max_load of the uncovered neighbors of v that have not been ruled out by the boundary cases as cluster members. Once a cluster is constructed, the selected CH and its cluster members are marked as covered, and the weight update procedure in Listing 1 is performed.

Listing 1 The weight update procedure.

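Listing 1 itself is not reproduced in this version of the text. The sketch below is a plausible reading of the weight update, assuming (consistent with the description of the heuristic information) that covering a node reduces by one the weight of each of its still-uncovered neighbors; the data structures are assumptions.

    def update_weights(new_cluster, weight, adjacency, covered):
        """Assumed weight update after a cluster is formed: mark the CH and its
        members as covered, then decrement the weight of every uncovered node
        that had one of the newly covered nodes as a neighbor."""
        for u in new_cluster:            # new_cluster = the CH plus its members
            covered.add(u)
        for u in new_cluster:
            for w in adjacency[u]:
                if w not in covered:
                    weight[w] -= 1       # w can now dominate one node fewer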

At the end of iteration t, the pheromone for node i is updated for use in iteration t + 1 using Equation (4):

τ_i(t + 1) = (1 − ρ) · τ_i(t) + Δτ_i(t)    (4)

where ρ is the pheromone evaporation factor, and Δτ_i(t) is the pheromone deposited after iteration t, defined as:

Δτ_i(t) = Q / f(S_ib) if node i is a CH in S_ib, and Δτ_i(t) = 0 otherwise    (5)

where Q is a constant and f(S_ib) is the objective function applied to the solution S_ib constructed by the iteration-best ant. The objective function is given by Equation (1).

Architecture of GACO

The operation of GACO is illustrated in Figure 2. Genetic ACO consists of two main components: the ACO algorithm and a GA. Genetic ACO begins with the execution of the ACO on the input problem instance. When node movements occur (step 1 in Figure 2), the GACO controller retrieves the current pheromone intensities from the ACO (step 2) and uses them to create the initial population for the GA (step 3). This enables the cumulative ACO search experience to be encoded in the GA initial population. The GA then evolves its initial population by making use of the new problem structure. After a predetermined number of generations (we set this to 500), the GA terminates and returns its best solution to the GACO controller (step 4). This represents a good solution for the new problem. The controller then reinforces the pheromone intensities corresponding to the solution components that appear in the GA's best solution (step 5). This implements the type 1 bias. The next step implements the type 2 bias by transforming a selected proportion of the ant population into G-ants (step 6). The ACO then resumes operation, but now with the G-ants working in synergy with their ordinary counterparts to construct solutions. After the lifetime (expressed in number of cycles) of the G-ants expires, they are converted back to normal ants (step 7), thus removing the type 2 bias.
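
The control flow just described can be summarized with the following sketch. All object and method names (aco, ga_factory, and so on) are placeholders invented for illustration; the numbered comments correspond to the steps in Figure 2.

    def gaco_controller(aco, ga_factory, g_ant_fraction=0.30, g_ant_lifetime=5,
                        ga_generations=500):
        """Hypothetical outline of the GACO control flow (steps 1-7)."""
        while not aco.done():
            aco.run_one_iteration()
            if aco.environment_changed():                     # step 1
                pheromones = aco.get_pheromone_matrix()       # step 2
                ga = ga_factory(initial_population_from=pheromones)  # step 3
                best = ga.evolve(generations=ga_generations)  # step 4
                aco.reinforce_pheromones(best)                # step 5: type 1 bias
                aco.convert_to_g_ants(fraction=g_ant_fraction,
                                      sequence=best)          # step 6: type 2 bias
                aco.schedule_g_ant_expiry(cycles=g_ant_lifetime)  # step 7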

FIGURE 2 The GACO architecture.


GACO Main Concepts

Empirical evaluations in Ho and Ewe (2005a) indicated that the proposed ACO outperformed the GA proposed by Turgut, Das, Elmasri, and Turgut (2002). This GA did not work for graphs containing cliques of size larger than max_load, so we had to enhance it to overcome this limitation. The other properties of the GA remain the same. An interesting observation is that, almost half of the time, the GA converged to solutions in which a majority of the clusters were singleton clusters. A singleton cluster is one in which the only member of the cluster is also the cluster head. If all clusters were singletons, then a perfectly balanced load would be achieved; this is an extreme that needs to be avoided. Despite converging to undesirable solutions, the GA showed an attractive ability to quickly focus its search on promising areas of the search space during the early generations.

The hybrid model proposed in this work (i.e., GACO) harnesses this ability by making use of the GA to quickly adapt to the new problem structure. The GA is invoked for a small number of generations (and is hence prevented from converging) whenever node movements occur. The pheromone intensities immediately before the environment change are used to create the initial population for the GA. This is the ACO to GA interface. The best solution produced by the GA is then used to bias subsequent ACO solution construction to suit the new problem structure. This is the GA to ACO interface. For the GA to ACO interface, GACO implements two types of biases, as explained in the following:

Type 1 bias: Pheromones associated with solution components appearing in the GA solution are reinforced.

Type 2 bias: This is in addition to the type 1 bias. A proportion of the ants in the colony are selected to be G-ants (GA ants). These special ants assign higher weights to solution components based on the order that these components appear in the GA solution.

Compared to P-ACO and the solution deconstruction strategy, the proposed GACO has the following advantages:

GACO does not depend on any problem-specific heuristics to repair or deconstruct solutions.

GACO does not require the knowledge of the specific changes that occurred.

As a direct consequence of the first two advantages, GACO does not need to identify interactions between solution components.

Detecting Environment Changes

The occurrence of an environmental change is detected by monitoring a measure that we call the pseudo-entropy of the pheromone values. This measure is given by the following:

where H is the pseudo-entropy measure, τ is the vector of pheromone values, and τ_i is the pheromone value associated with node i. This works because only the iteration-best ant updates the pheromones. When the solution quality keeps improving, the pseudo-entropy measure decreases steadily at an almost linear rate. When an environment change occurs, the solution quality degrades almost all the time. Thus, the pheromone update pattern will change. This will be reflected by the pseudo-entropy measure, which will increase abruptly during the first few iterations following the environment change.
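
The published formula for the pseudo-entropy is not reproduced in this version of the text. The sketch below assumes an entropy-style measure computed over normalized pheromone values and flags a change when the measure, which normally decreases steadily, rises abruptly; both the formula and the detection threshold are assumptions.

    import math

    def pseudo_entropy(tau):
        """Assumed measure: normalize the pheromone vector and apply the usual
        -sum(p * log p) form."""
        total = sum(tau)
        probs = [t / total for t in tau if t > 0]
        return -sum(p * math.log(p) for p in probs)

    def change_detected(history, jump_threshold=0.05):
        """Flag an environment change when the latest value increases abruptly
        relative to the previous one."""
        if len(history) < 2:
            return False
        return history[-1] - history[-2] > jump_threshold * history[-2]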

The ACO to GA Interface – Constructing the GA Initial Population

The ACO to GA interface refers to the mechanism of using ACO to create the initial population of the GA. An initial population of size j is constructed by first having j ants construct j solutions using the existing pheromone intensities. Since the previous search experience encoded by the pheromone no longer accurately reflects the new problem structure, the reliance on pheromone during this phase should be reduced, but not removed entirely, since there may be similarities between the old and new structures. To this end, we set a smaller value of α = 0.15 instead of the usual α = 1. Note that it is not necessary to globally identify which nodes have moved; this information is implicitly given by the heuristic information, which can be updated through suitable local neighborhood discovery protocols. Solution construction should be primarily guided by the heuristic information. However, care is taken to ensure that the initial population has the right amount of diversity. Investigation of the composition of the initial solutions showed poor diversity with the original setting of β = 3. To overcome this, we use β = 2. Using a β value that is too small may not provide the necessary level of intensification towards the new problem structure. Once constructed, the solutions are encoded into the corresponding genetic representations, which are consistent with the decoding procedure explained in the section entitled "Overview of the GA in GACO."
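
A minimal sketch of this interface is given below, assuming that construct_solution(alpha, beta) runs a single ant under the stated reduced exponents and that encode turns the resulting clustering into the GA's node-sequence representation; both callables are placeholders.

    def build_ga_initial_population(construct_solution, encode,
                                    population_size, alpha=0.15, beta=2.0):
        """Assumed ACO-to-GA interface: build j solutions with reduced reliance
        on the (now outdated) pheromone, then encode them for the GA."""
        return [encode(construct_solution(alpha=alpha, beta=beta))
                for _ in range(population_size)]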

The GA to ACO Interface – Implementing the Two Types of Biases

This interface has to ensure that the promising solution components identified by GA can be communicated, understood, and utilized by ACO. This objective is achieved through the two types of biases introduced earlier.

Type 1 Bias – Using the GA Best Solution to Perform Pheromone Updates

The pheromone information accumulated in the cycles before the node movements occurred is not reset but maintained. This allows search experience relevant to the parts of the problem that did not change to be reused. Pheromones associated with solution components that the GA finds promising as CHs receive additional reinforcement. The amount of additional reinforcement is determined by Equations (4) and (5). However, instead of using 0.5 as the value of Q, we set Q = 1.0 for this purpose. This makes the components found favorable by the GA receive twice the reinforcement they would receive under normal ACO operation.

Type 2 Bias – Using G-Ants for Solution Construction

The GA thrives on exploring the sequence by which solution components are added to construct a complete solution. The type 1 bias only tells which solution components appear as CHs in the GA best solution. It does not, however, capture the sequencing information for the solution components. Such information is advantageous because the selection of one solution component into the solution affects the heuristic information of neighboring nodes. Hence, the objective of the type 2 bias is to transfer such information from the GA to the ACO.

This is realized by having a proportion of the ants scale up the numerator of Equation (3) associated with a solution component that appears in the GA solution. We call this special group of ants G-ants, and their number is p percent of the population. The other 100 − p percent of the ants behave as usual. For the experiments, p is set to 30%. Say that the solution produced by the GA is decoded to S_GA = (s_1, s_2, …, s_z); then a G-ant will give the highest scaling to s_1 and the lowest to s_z, where s_1 is the first and s_z is the last solution component. The usual equation, Equation (3), which governs the probabilistic selection of solution components, is modified with the addition of a scaling function c(v), resulting in Equation (6).

The scaling function c(v) is defined by Equation (7), where z is the total number of solution components in S_GA and i is the index of v in S_GA, with 1 ≤ i ≤ z. For example, if i = 2, then v is the second solution component that was selected. The parameter ϖ controls how rapidly the scaling function value decreases as i increases. A plot of Equation (7) for z = 250 with T = 2.5 is shown in Figure 3. A smaller value of ϖ causes c(v) to decrease more rapidly, as evident in the figure. The parameter T determines the range of c(v). An example plot is shown in Figure 4 using z = 250 and ϖ = 0.999, while T is set to 2.5, 3.0, and 3.5. A larger T value gives a bigger range. However, if T is too large, there is a risk of overemphasizing the solution components of S_GA; in turn, this will retard the search diversification capability of the G-ants. For the empirical evaluations, we use ϖ = 0.999 and T = 2.5. Pheromone updates after a complete cycle are performed by two ants: the best ant from among the normal ants and the best G-ant. The type 2 bias is applied only for the first max_cycle iterations after node movements occur. After that, the G-ants are converted back to normal ants. For the empirical evaluations, max_cycle = 5.
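
Equation (7) is not reproduced in this version of the text; the sketch below assumes a geometric decay T * ϖ**i, which matches the qualitative behavior described above (larger scaling for earlier positions, faster decay for smaller ϖ, larger range for larger T) but is not necessarily the published form.

    def scaling_factor(v, s_ga_index, T=2.5, varpi=0.999):
        """Assumed scaling function c(v): components appearing early in S_GA
        receive a larger multiplier, decaying geometrically with their 1-based
        position i; components not in S_GA are left unscaled."""
        if v not in s_ga_index:
            return 1.0
        i = s_ga_index[v]                 # position of v in S_GA, 1 <= i <= z
        return T * (varpi ** i)

A G-ant would then multiply the numerator of Equation (3) by this factor when computing its selection probabilities.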

FIGURE 3 The effect of ϖ on the scaling function c(v).


FIGURE 4 The effect of T on the scaling function c(v).


Overview of the GA in GACO

The GA uses a string of integers as its genetic representation. Each candidate solution is encoded as a sequence of nodes (more precisely, node labels) g_1, g_2, …, g_N, with N being the number of nodes. During fitness evaluation, the encoding is first decoded using the following procedure (a sketch of the decoding appears after the list):

  1. Select a CH from the sequence g_1, g_2, …, g_N, starting with g_1. The following conditions should be satisfied:

    The node under consideration is currently not a CH.

    It is not a member of any of the already constructed clusters.

    The weight of that particular node is not more than max_load.

  2. The selected node from step 1 and all its neighbors will form a cluster. Mark them as “processed.” They will no longer be considered for CH selection.

  3. Repeat step 1 to select the next CH until all nodes in g_1, g_2, …, g_N have been considered.

  4. After going through the sequence once, a second run is performed to find any node that has not been assigned as CH or cluster member.
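
A minimal sketch of this decoding is given below, under the assumption that a node's weight is its number of unprocessed neighbors and that nodes left over after the second pass form singleton clusters (the text does not spell out the latter).

    def decode(sequence, adjacency, max_load):
        """Decode a node sequence g_1..g_N into clusters following steps 1-3:
        scan the sequence, take unprocessed nodes whose weight does not exceed
        max_load as CHs, and assign their unprocessed neighbors as members."""
        processed, clusters = set(), {}
        for g in sequence:
            if g in processed:
                continue                          # already a CH or a member
            members = [u for u in adjacency[g] if u not in processed]
            if len(members) > max_load:
                continue                          # weight exceeds max_load
            clusters[g] = members
            processed.add(g)
            processed.update(members)
        for g in sequence:                        # step 4: second pass
            if g not in processed:
                clusters[g] = []                  # assumed: leftover nodes
                processed.add(g)                  # become singleton clusters
        return clusters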

The GA employs the order crossover operator for recombination. This crossover constructs an offspring by selecting a subsequence of nodes from one parent and preserving the relative order of nodes from the other parent (Davis, 1995). Mutation is performed using the swap mutator operator. The swap mutator, when operating on the genetic encoding, swaps two randomly selected nodes. The GA also uses tournament selection with a tournament size of three.

COMPUTATIONAL SET-UP

This section describes in detail the experiment set-up, the three strategies that will be compared against GACO, and the performance measures used for comparisons.

Comparing GACO with Other Strategies

The objective of the empirical evaluations is to assess the performance of GACO in the presence of disruptive environment changes. The amount of environment change, i.e., the number of nodes that move, is set to 5% and 10% of the total number of nodes. The 10% change is considerably severe compared to the extent of the environmental changes cited in the previous efforts, which employ repair algorithms as described earlier. We compare GACO against three other strategies, namely, P-ACO, PAdapt, and GreedyAnts. These are described in the ensuing subsections.

Strategy 1 – Adapting Major Parameters: α, β, ρ, and Q

This strategy attempts to adapt the search process to the new problem structure by increasing or decreasing the values of the four ACO parameters—α, β, ρ, and Q. Similar to the GACO type 2 bias, only a proportion of the ant population will execute this strategy. Instead of using p = 30% as in type 2 bias, we set p = 50%. Thus, half the ants will be constructing solutions using the adapted parameter values. At the end of every iteration, the best solutions from both types of ants are used to reinforce the pheromones.

The four parameters are adapted in the following manner:

Reduce α to 0.1 from 1.0 – Since the environment change rendered the pheromone information inaccurate, reducing α will allow the ants to rely less on this information.

Increase β to 5.0 from 3.0 – Solution construction should be largely driven by the new problem structure implicitly obtainable from the heuristic information.

Increase ρ to 0.4 from 0.1 – This higher evaporation rate helps the ants to slightly “forget” knowledge from past search experience.

Increase Q to 1.0 from 0.5 – This enables promising solution components to receive more reinforcement.

Ants that work on these new parameter values will exist only for max_cycle number of iterations after the environmental change. As in the type 2 bias, max_cycle is set to 5. In the subsequent parts of this paper, we refer to Strategy 1 as PAdapt.

Strategy 2 – Using Greedy Ants

Instead of adapting the parameters, we use a proportion of ants (p percent, with p = 50%) from the population to construct solutions for max_cycle = 5 iterations after the environmental change. We refer to these ants as greedy ants. A greedy ant grows its partial solution by deterministically selecting the solution component with the highest potential, computed using only the heuristic information (Equation (2)). At the end of each iteration, the best greedy ant and the best ordinary ant update the pheromone information. In subsequent parts of this article, we will refer to Strategy 2 as GreedyAnts.

Strategy 3 – P-ACO

The implementation of P-ACO for this problem follows that of Guntsch and Middendorf (2002b) and employs the age-based population update strategy without elitism. Results in Guntsch and Middendorf (2002b) showed that the age-based strategy is versatile and produced the best results for both the TSP and the QAP. P-ACO uses an additional parameter, labeled q_0, which represents the probability that an ant will select the next solution component that maximizes the quantity [τ_v(t)]^α[η_v(t_r)]^β. If this probability is not met, the ant follows Equation (3). P-ACO imposes a maximum value on the pheromone values, which we represent as τ_max. The P-ACO parameters that are tuned are shown in Tables 1 and 2. The parameter tuning technique is explained in the section titled "Experiment Set-up." When an environmental change occurs, a repair heuristic is invoked to repair the solutions in the population. We use the following repair heuristic:

TABLE 1 Parameters and their possible values for the different strategies

TABLE 2 Parameter values after tuning

For each node v that moves and leaves its current cluster C_i, perform the following steps (a sketch of this repair appears after the steps):

Step 1: Find a neighbor cluster C_j with the smallest load(C_j), i.e., the smallest number of cluster members.

Step 2: If load(C_j) < max_load, then assign v to cluster C_j. Then stop and process the next node that moved.

Step 3: If load(C_j) ≥ max_load, then break cluster C_j and form two new clusters, C_x and C_y, from the members of C_j and node v. The two new clusters are constructed so that their loads are as similar as possible.
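
A minimal sketch of this repair is given below, with assumed data structures (clusters as a mapping from CH to member list, and reachable_clusters(v) returning the CHs whose clusters v can now reach); connectivity checks when a full cluster is split are omitted for brevity.

    def repair_moved_node(v, clusters, reachable_clusters, max_load):
        """Hypothetical repair of one moved node v following steps 1-3."""
        cj = min(reachable_clusters(v), key=lambda ch: len(clusters[ch]))  # step 1
        if len(clusters[cj]) < max_load:                                   # step 2
            clusters[cj].append(v)
            return
        members = clusters.pop(cj) + [cj, v]                               # step 3
        members.sort()                        # any deterministic order will do
        half = len(members) // 2
        first, second = members[:half], members[half:]
        clusters[first[0]] = first[1:]        # two new clusters whose loads
        clusters[second[0]] = second[1:]      # differ by at most one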

Strategy 4 – The Control

No intervention in any form is administered to the ACO algorithm when an environmental change occurs. From the empirical evaluation viewpoint, this strategy serves as a reference to gauge improvements produced by the other three strategies. In subsequent parts of this article, we will refer to this strategy as control.

Performance Measures

The effectiveness of the four strategies in guiding ACO to adapt to the new problem structure is evaluated using two performance criteria: reactivity (Weicker, 2002) and LBF (as given by Equation (1)). Reactivity refers to the number of iterations that ACO requires to recover from any solution quality degradation, and is given by Equation (8). ε is a parameter that is set to 0.01 for this study, m is the iteration in which the environmental change occurred, and min f_[m−5, m−1] is the best solution quality across the five iterations preceding iteration m. Note that in Weicker (2002), the author used f_{m−1} instead of min f_[m−5, m−1]. We illustrate the need to take the minimum over iterations m − 5 to m − 1, instead of just taking the iteration-best solution quality at m − 1, using Figure 5.

The denominator of the ratio in Equation (8) (for the case m′ ≤ rwindow) serves to determine a performance baseline. If the situation in Figure 5 is encountered, then taking f_{m−1} as the denominator makes the performance baseline too relaxed, because two iterations earlier (i.e., at m − 3) the iteration-best solution was better. Therefore, taking min f_[m−5, m−1] gives a stricter performance baseline for determining reactivity. A smaller reactivity value is desirable. Each strategy is allowed only rwindow iterations to recover; otherwise, we say that the strategy failed to react. rwindow is set to 50. We use the terms "react" and "recover" interchangeably.
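
Since Equation (8) is not reproduced in this version of the text, the following sketch records the assumed reading of reactivity: the number of iterations after the change at m until the iteration-best LBF falls within a factor (1 + ε) of the baseline min f over iterations m − 5 to m − 1, capped at rwindow when no recovery occurs.

    def reactivity(f, m, rwindow=50, eps=0.01):
        """Assumed reading of Equation (8); f is the list of iteration-best
        objective values (smaller is better) indexed by iteration."""
        baseline = min(f[m - 5:m])                 # best quality over m-5..m-1
        for mp in range(m + 1, m + rwindow + 1):
            if f[mp] <= (1 + eps) * baseline:      # recovered at iteration mp
                return mp - m
        return rwindow                             # failed to react in the window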

FIGURE 5 Possible changes of LBF values before environment change at m.


For each problem instance, assuming the reactivity for each run is recorded as r_1, r_2, …, r_50, results for reactivity are reported with the following measure:

BEST_REACT: The best reactivity observed across the runs. When reporting this measure for GACO, we factor in the number of generations that the GA component incurs. In all runs, it was found that the GA component, which was run for 500 generations with the parameters given in Table , takes less than 10 ACO cycles. Therefore, 10 cycles are added to the raw reactivity recorded for GACO. If there is a run in which a particular strategy failed to react within the allowable window, the reactivity value is set to rwindow (i.e., 50 in this case), representing worst-case performance.

The above measure concerns only the reactivity time. The only hint of solution quality is that for any approach that managed to react, we know that the solution quality has satisfied the performance baseline. To complement the reactivity measure, we use the following two LBF measures to report on solution quality for each problem instance:

LBF at recovery (i.e., at m′ in Equation (8)): This gives the LBF of the best solution found in iteration m′. From here on, this measure is abbreviated as LBF_AT. Again, if there is a run in which a particular strategy failed to react within the allowable window, the LBF_AT value is set to the average iteration-best solution quality within the recovery window.

Best LBF: This gives the LBF of the best solution found within rwindow. This is necessary because the LBF given by LBF_AT will most likely improve in the iterations after m′. Even if a particular approach failed to react, this measure will reflect its best performance. From here on, this measure is denoted LBF_BEST.

Experiment Set-Up

To begin, we generate four base problem instances with N = 400, 350, 300, and 250 nodes, respectively. Each base problem is generated by randomly (uniform distribution) placing N nodes in an area of size L × L meters. These base instances are named Net1, Net2, Net3, and Net4, respectively. Each base problem instance is then used to generate three graphs using three different transmission ranges. For example, the three graphs generated from Net1 are Net1R210, Net1R220, and Net1R230, using transmission ranges of 210, 220, and 230 meters, respectively. There are a total of 12 graphs. To ensure that the graphs are connected at the selected transmission range, the recommendations in Bettstetter (2002) are adopted.

For each of these graphs, ACO is run four times. For each run, an environmental change is randomly generated after the 10th but before the 130th iteration. Environmental changes before the 10th iteration are avoided because the pheromone information may not yet have stabilized to a level at which an environment change would cause considerable disruption. The algorithm state and problem structure after the environment change are saved and together form the problem instance. Therefore, each graph generates four problem instances, one from each run. The four instances from Net1R210 are labeled Net1R210s1, Net1R210s2, Net1R210s3, and Net1R210s4. The other problem instances follow the same naming convention. There are a total of 48 problem instances.

An environmental change is characterized by three parameters: the number of nodes that move (denoted j), the displacement vector d = {d_1, d_2, …, d_j}, where d_i is the displacement of node i, and the direction vector θ = {θ_1, θ_2, …, θ_j}, where θ_i represents the direction in which node i will move. Each displacement is sampled from the range [10, R]. Possible values for the movement direction are within [0°, 360°]. The nodes that will move, the iteration in which mobility occurs, and the displacement and direction vectors are determined randomly according to a uniform distribution. Each strategy is run 50 times on each problem instance. When started, a strategy first loads the intended problem instance, and is given a maximum of rwindow iterations to recover. The number of nodes that move is expressed as a percentage of the total number of nodes and is represented by the symbol M. In the empirical evaluations, we set M to 5% and 10%.

In general, many ACO parameters can be considered for tuning. However, we focus on the parameters of the three adaptation strategies. For the control strategy, ACO uses the parameter values reported in Ho and Ewe (2005a). The candidate values for the parameters tuned by the rank-based method were first determined with a simpler procedure; this had to be done because of the expensive computational cost of the rank-based parameter tuning. To illustrate, for GACO, the candidate values of Q for the rank-based tuning are {0.8, 1.0, 1.5} (as in Table 1). Before arriving at these three values, we considered a wider range, namely [0.5, 2.0] in increments of 0.1. For each of these increments, we ran each algorithm for 20 iterations; in this simpler procedure, the other parameters were held constant at values that our previous experience indicated to be good. The relevant parameters are listed in Table 1 along with their potential values. For tuning the selected parameters, we use the first problem instance from each of the 12 graphs, i.e., Net1R210s1, Net1R220s1, Net1R230s1, …, Net4R150s1. For the sake of generality, assume that for each strategy the u parameters to be tuned are represented as W_1, W_2, …, W_u, and that w_j1, w_j2, …, w_j|W_j| ∈ W_j are the potential values for the jth parameter. Each strategy is run 20 times on each selected tuning problem instance for each combination of parameter values c_i ∈ W_1 × W_2 × … × W_u, with 1 ≤ i ≤ |W_1 × W_2 × … × W_u|. This produces results of the form shown in Figure 6. The BEST_REACT and LBF_AT measures are averaged over the 20 runs.

FIGURE 6 The table for recording the parameter tuning results.


The parameter tuning is performed with respect to two performance measures: BEST_REACT and LBF_AT. For each problem instance, each parameter value combination is ranked according to BEST_REACT (Rank A) and LBF_AT (Rank B). The best rank is one. The best parameter setting for each strategy is determined by computing the overall rank for each c_i, Rank(c_i), as follows:

where size is the number of tuning instances. The selected c_i will be the one having the lowest value of Rank(c_i). Based on this technique, the selected parameter values are shown in Table 2.
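
The aggregation formula for Rank(c_i) is not reproduced in this version of the text; the sketch below assumes it is the average of Rank A and Rank B taken over the tuning instances, which is consistent with selecting the combination with the lowest overall value.

    def overall_rank(rank_a, rank_b):
        """Assumed aggregation: rank_a[s] and rank_b[s] are the BEST_REACT and
        LBF_AT ranks of one parameter combination on tuning instance s."""
        size = len(rank_a)                    # number of tuning instances
        return sum(rank_a[s] + rank_b[s] for s in range(size)) / (2.0 * size)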

For all approaches, we set max_load = 5, and the initial pheromone values are set to 1/N.

EMPIRICAL RESULTS

In order to compare results among different problem instances on each performance measure, we normalized the results by computing the gain (in percent) of any strategy over control. The gain is computed as follows:

X represents the compared strategy, and PERF is any one of the three performance measures previously described. For all three performance measures, positive gains translate to better performance compared to control. The gain provides a measure of performance that is independent of instance hardness.
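
The gain formula itself is not reproduced in this version of the text; the sketch below assumes the usual percentage improvement over the control, oriented so that a smaller raw value of the measure (faster reaction, lower LBF) yields a positive gain.

    def gain_percent(perf_x, perf_control):
        """Assumed gain of strategy X over control for a measure where smaller
        values are better (BEST_REACT, LBF_AT, or LBF_BEST)."""
        return 100.0 * (perf_control - perf_x) / perf_control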

For each performance measure, we first show the boxplot of the gains over all problem instances. The boxplots are constructed from 2400 samples obtained from the 50 runs on each of the 48 problem instances. Then we check whether these results are also statistically significant. To achieve this, we perform the Kruskal–Wallis nonparametric one-way analysis of variance (ANOVA). The results from this test are then used to perform multiple comparisons using the Tukey–Kramer method to establish statistical significance. For multiple comparisons, we show a graph displaying the average ranks of the gains with confidence intervals around them. The multiple comparisons tests use a significance level of 0.01.

Performance Comparison on BEST_REACT

Figure 7 shows the boxplots and confidence intervals for the gains produced by GACO, P-ACO, PAdapt, and GreedyAnts over control. For M = 5% (see Figure 7(a)), both GACO and P-ACO outperform PAdapt and GreedyAnts. However, observe that P-ACO exhibits larger negative gains than the other three. Comparing GACO and P-ACO, the upper halves of their boxplots are very similar. The median for GACO is slightly larger than that of P-ACO. Nevertheless, the multiple comparison test results shown in Figure 7(b) indicate that the performance difference observed between GACO and the other three strategies is statistically significant. For M = 10%, a striking observation is that the performance gap between GACO and P-ACO is now larger, with GACO performing better. This is confirmed by the confidence intervals for the mean ranks in Figure 7(d). It is also interesting to note that PAdapt is able to perform better than P-ACO for M = 10%.

FIGURE 7 Performance comparisons on BEST_REACT. (a) The boxplot for M = 5%; (b) The mean ranks with comparison intervals for M = 5%; (c) The boxplot for M = 10%; (d) The mean ranks with comparison intervals for M = 10%.


From the boxplots and the multiple comparison plots, we can conclude that the improvements in reactivity time exhibited by GACO over P-ACO, PAdapt, and GreedyAnts are statistically significant. Empirical results point towards the fact that the use of GA within GACO is able to quickly adapt the search process to the new problem structure. Even though the use of GA incurs additional processing time, the reactivity time is still smaller (after factoring in this additional cost) on average as compared to the competing strategies.

Performance Comparison on LBF_AT

We now compare the LBF measure recorded for each strategy when reactivity is successful. The gains over control computed for this performance measure and the respective confidence intervals are shown in Figure 8. For M = 5%, it is clear that both P-ACO and GACO are superior to PAdapt and GreedyAnts (see Figure 8(a)). Genetic ACO has only a slight edge over P-ACO. Nevertheless, Figure 8(b) shows that this difference is significant, as the intervals for GACO and P-ACO do not overlap. When M is increased to 10%, the performance gap between GACO and P-ACO also increases. Figure 8(c) shows that the best 50% of the total 2400 GACO runs produced gains within the 10% to 45% range. Contrast this with P-ACO, where the best 50% of runs showed gains between only 5% and 34%. Figure 8(d) shows that the mean ranks and the confidence intervals for GACO and P-ACO are further apart compared to Figure 8(b).

FIGURE 8 Performance comparisons on LBF_AT. (a) The boxplot for M = 5%; (b) The mean ranks with comparison intervals for M = 5%; (c) The boxplot for M = 10%; (d) The mean ranks with comparison intervals for M = 10%.


These empirical results show that on top of being able to react faster, GACO is also able to produce better solutions at the point of recovery. This means that the GA is effective in identifying promising areas of the search space for the ACO to subsequently work on.

Performance Comparison on LBF_BEST

The boxplots and confidence intervals for this performance measure are depicted in Figure 9. Between P-ACO and GACO, a performance trend similar to that for LBF_AT can be observed. When M = 5%, GACO has only a small advantage over P-ACO, as depicted in Figure 9(a). However, this small difference remains significant, as the corresponding confidence intervals do not overlap in Figure 9(b). Again, the advantage of GACO is evident when M increases to 10%. In Figure 9(c), the upper quartile, lower quartile, median, and maximum gain (not counting outliers) for the GACO boxplot are larger than those for P-ACO. Their performance difference is statistically significant, as substantiated by Figure 9(d). Also note that the performance of P-ACO with M = 10% is very similar to that of PAdapt; in Figure 9(d), their confidence intervals nearly overlap. This result is important because it shows that GACO is not only able to produce better solutions at the point of recovery, but also manages to maintain, and in most cases find, better solutions beyond the point of recovery.

FIGURE 9 Performance comparisons on LBF_BEST. (a) The boxplot for M = 5%; (b) The mean ranks with comparison intervals for M = 5%; (c) The boxplot for M = 10%; (d) The mean ranks with comparison intervals for M = 10%.


CONCLUSIONS

We have presented a hybrid ACO metaheuristic called GACO to address the load-balanced clustering problem arising from the operation of ad hoc networks. GACO invokes a GA when an environment change occurs. The GA enables GACO to quickly adapt to the new problem structure by discovering new promising areas of the altered search space. The effectiveness of GACO highly depends on how the interfaces between ACO and GA are designed. To this end, we have described how the pheromone information, collectively gathered by the ants, is used to construct the initial population for the GA. This is the ACO to GA interface. The GA to ACO interface is concerned with the manner in which the solution produced by the GA is used by ACO. Firstly, it is used to reconcile the outdated pheromone information with the new problem structure. Secondly, it is used to guide the solution construction task of a specialized group of ants, which we call G-ants. These ants give more bias to solution components appearing in the GA solution. The degree of additional bias given to such solution components depends on the order of their appearance within the GA solution.

Three other approaches for adapting the ACO search in the presence of dynamic environments are used for performance comparison. These are P-ACO, PAdapt, and GreedyAnts. For all performance measures, GACO is competitive with P-ACO for a relatively small amount of disruption, i.e., M = 5%. When the number of mobile nodes is doubled (M = 10%), the advantage of GACO can be clearly seen. Even though P-ACO with the repair heuristic seems promising for this problem, we find that GACO is more robust when it comes to larger environmental changes.

Through these performance comparisons, we conclude that GACO has been empirically shown to perform better. Firstly, it is able to recover more rapidly from performance degradation caused by environmental changes. Secondly, not only is it able to react faster, but its solution quality at the point of recovery is also better compared to P-ACO, PAdapt, and GreedyAnts. Finally, the ability of GACO to further improve solution quality beyond the point of recovery has been demonstrated. These results point to the effectiveness of the proposed GACO approach in attacking the dynamic load-balanced clustering problem.

DECLARATION OF INTEREST

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

REFERENCES

  • Bettstetter, C. 2002. On the minimum node degree and connectivity of a wireless multihop network. In: Proceedings of MOBIHOC'02, pp. 80–90. Lausanne, Switzerland.
  • Bonabeau, E. and G. Théraulaz. 2000. Swarm smarts. Scientific American 282(3): 72–79.
  • Camilo, T., C. Carreto, J. Sa Silva, and F. Boavida. 2006. An energy-efficient ant-based routing algorithm for wireless sensor networks. In: Proceedings of the Fifth International Workshop on Ant Colony Optimization and Swarm Intelligence, Lecture Notes in Computer Science 4150: 49–59.
  • Chartrand, G. and L. Lesniak. 2005. Graphs and Digraphs. Boca Raton, FL: Chapman & Hall/CRC.
  • Colorni, A., M. Dorigo, V. Maniezzo, and M. Trubian. 1994. Ant system for job-shop scheduling. Belgian Journal of Operations Research, Statistics and Computer Science (JORBEL) 34: 39–53.
  • Costa, D. and A. Hertz. 1997. Ants can color graphs. Journal of the Operations Research Society 48: 295–305.
  • Davis, L. 1995. Applying adaptive algorithms to epistatic domains. In: Proceedings of the International Joint Conference on Artificial Intelligence, pp. 162–164.
  • Di Caro, G. and M. Dorigo. 1998a. AntNet: Distributed stigmergetic control for communications networks. Journal of Artificial Intelligence Research 9: 317–365.
  • Di Caro, G. and M. Dorigo. 1998b. Extending AntNet for best-effort quality-of-service routing. Presentation at ANTS'98 – From Ant Colonies to Artificial Ants: First International Workshop on Ant Colony Optimization, Brussels, Belgium.
  • Di Caro, G., F. Ducatelle, and L. M. Gambardella. 2004. AntHocNet: An ant-based hybrid routing algorithm for mobile ad hoc networks. In: Proceedings of Parallel Problem Solving from Nature (PPSN VIII), Lecture Notes in Computer Science 3242: 461–470.
  • Di Caro, G., F. Ducatelle, and L. M. Gambardella. 2006. An analysis of the different components of the AntHocNet routing algorithm. In: Proceedings of the Fifth International Workshop on Ant Colony Optimization and Swarm Intelligence, Lecture Notes in Computer Science 4150: 37–48.
  • Dorigo, M. and G. Di Caro. 1999. The ant colony optimization metaheuristic. In: New Ideas in Optimization, eds. D. Corne, M. Dorigo, and F. Glover, pp. 245–260. Maidenhead, U.K.: McGraw-Hill.
  • Dorigo, M. and L. M. Gambardella. 1996. Ant Colony System: Optimization by a colony of cooperating agents. IEEE Trans. Systems, Man, and Cybernetics – Part B 26(1): 29–41.
  • Dorigo, M., G. Di Caro, and L. M. Gambardella. 1999. Ant algorithms for discrete optimization. Artificial Life 5(2): 137–172.
  • Eyckelhof, J. C. and M. Snoekm. 2002. Ant systems for a dynamic TSP. In: Ant Algorithms, Third International Workshop, ANTS 2002, eds. M. Dorigo et al., Lecture Notes in Computer Science 2463: 88–99.
  • Gambardella, L. M. and M. Dorigo. 1997. HAS-SOP: An hybrid ant system for the sequential ordering problem. Technical Report 11-97, The Swiss Institute for Artificial Intelligence (IDSIA), Lugano, CH.
  • Gambardella, L. M., E. Taillard, and G. Agazzi. 1999. Ant colonies for vehicle routing problems. In: New Ideas in Optimization, eds. D. Corne, M. Dorigo, and F. Glover. New York: McGraw-Hill.
  • Garlick, R. M. and R. S. Barr. 2002. Dynamic wavelength routing in WDM networks via ant colony optimization. In: Ant Algorithms, Third International Workshop, ANTS 2002, eds. M. Dorigo et al., Lecture Notes in Computer Science 2463: 27–41.
  • Gerla, M. and J. T. Tsai. 1995. Multicluster, mobile multimedia radio network. Wireless Networks 1: 255–265.
  • Guntsch, M. and M. Middendorf. 2001. Pheromone modification strategies for ant algorithms applied to dynamic TSP. Lecture Notes in Computer Science 2037: 213–222.
  • Guntsch, M. and M. Middendorf. 2002a. A population based approach for ACO. Lecture Notes in Computer Science 2279: 72–81.
  • Guntsch, M. and M. Middendorf. 2002b. Applying population based ACO to dynamic optimization problems. In: Ant Algorithms, Third International Workshop, ANTS 2002, eds. M. Dorigo et al., Lecture Notes in Computer Science 2463: 111–122.
  • Habetha, J. and M. Nadler. 2001. Outline of a centralized multihop ad hoc network. Computer Networks 37: 63–71.
  • Ho, C. K. and H. T. Ewe. 2005a. A hybrid ant colony optimization approach (hACO) for constructing load-balanced clusters. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 65–72.
  • Ho, C. K. and H. T. Ewe. 2005b. Performance of an ant colony optimization (ACO) algorithm on the dynamic load-balanced clustering problem in ad hoc networks. Lecture Notes in Artificial Intelligence 3801: 622–629.
  • Ho, C. K., Y. P. Singh, and H. T. Ewe. 2006. An enhanced ant colony optimization metaheuristic for the minimum dominating set problem. Applied Artificial Intelligence 20(10): 881–903.
  • Maniezzo, V. and A. Colorni. 1999. The ant system applied to the quadratic assignment problem. IEEE Trans. Knowledge and Data Engineering 11(5): 769–778.
  • Martens, D., M. D. Backer, R. Haesen, J. Vanthienen, M. Snoeck, and B. Baesens. 2007. Classification with ant colony optimization. IEEE Trans. Evolutionary Computation 11(5): 651–665.
  • Michel, R. and M. Middendorf. 1998. An island model based ant system with lookahead for the shortest supersequence problem. In: Proceedings of PPSN-V, Fifth International Conference on Parallel Problem Solving from Nature, ed. A. E. Eiben, pp. 692–701.
  • Montemanni, R., L. M. Gambardella, A. E. Rizzoli, and A. V. Donati. 2003. A new algorithm for dynamic vehicle routing problem based on ant colony system. In: Proceedings of the 2nd International Workshop on Freight Transportation and Logistics, pp. 27–30. Palermo, Italy.
  • Oikonomou, K., A. Vaios, S. Simeons, P. Pellati, and I. Stavrakakis. 2003. A centralized ad-hoc network architecture (CANA) based on enhanced HiperLAN/2. In: Proceedings of the 14th IEEE Personal, Indoor and Mobile Radio Communications 2: 1336–1340.
  • Randall, M. 2005. Generalizing ant colony optimization for dynamic optimization problems. Technical Report 05-01, Faculty of Information Technology, Bond University, Australia.
  • Schoonderwoerd, R., O. Holland, and J. Bruten. 1997. Ant-like agents for load-balancing in telecommunications networks. In: Proceedings of the First International Conference on Autonomous Agents, pp. 209–216.
  • Shmygelska, A. and H. H. Hoos. 2005. Ant colony optimization algorithm for the 2D and 3D hydrophobic polar protein folding problem. BMC Bioinformatics 6(30).
  • Subramanian, D., P. Druschel, and J. Chen. 1997. Ants and reinforcement learning: A case study in routing in dynamic networks. In: Proceedings of the International Joint Conference on Artificial Intelligence, pp. 832–838.
  • Toh, C. K. 2001. Ad Hoc Mobile Wireless Networks: Protocols and Systems. Upper Saddle River, NJ: Prentice Hall.
  • Turgut, D., S. K. Das, R. Elmasri, and B. Turgut. 2002. Optimizing clustering in mobile ad hoc networks using genetic algorithmic approach. In: Proceedings of IEEE GLOBECOM, pp. 62–66.
  • Vaios, A., K. Oikonomou, and I. Stavrakakis. 2003. A centralized routing scheme supporting ad hoc networking in dual mode HiperLAN/2. In: Proceedings of the IST Mobile & Wireless Communications Summit, Aveiro, Portugal.
  • Weicker, K. 2002. Performance measures for dynamic environments. Lecture Notes in Computer Science 2439: 64–73.
