
Metric-topological–evolutionary optimization

Pages 41-58 | Received 05 Sep 2011, Accepted 11 Sep 2011, Published online: 05 Oct 2011

Abstract

This article presents a novel approach to optimization and inverse problems, based on evolutionary computation, which aims to satisfy two opposite requirements: exploration and convergence. The proposed approach is particularly suitable for parallel computing and performs at its best both on multimodal problems and on problems in which bad initializations can occur. The proposed algorithm has been called MeTEO to point out its metric-topological and evolutionary inspiration. It is based on the hybridization of two heuristics coming from swarm intelligence, the flock-of-starlings optimization (FSO, which shows high exploration capability but a lack of convergence) and the standard particle swarm optimization (which is less explorative than the FSO but has good convergence capability), with a third evolutionary heuristic, the bacterial chemotaxis algorithm (which has no collective behaviour and no exploration skill, but high convergence capability). Finally, with the aim of speeding up the algorithm, a technique that we call fitness modification is proposed and implemented. Tests on optimization benchmarks and inverse problems are presented with the aim of comparing the performance of MeTEO with that of each single heuristic used for the hybridization.

1. Introduction

Exploration and convergence are two important requirements for algorithms devoted to inverse problems and/or optimization. In many applications, it is usual to prefer algorithms showing better capability of convergence to the global optimum. On the other hand, we have to consider that (a) many optimization problems require finding more than one global optimum and (b) in many cases the global optimum must be found within a solution space of very high dimension Citation1. Frequently, points (a) and (b) are simultaneously present. Thus, the designer of the algorithm must decide whether it is better to expand the search or to privilege convergence on a subspace. This is very hard to decide a priori Citation2. In fact, the risk of being entrapped in a local minimum is higher if an algorithm does not explore a suitably large space. On the other hand, an algorithm could over-sample the space and spend a long time before converging to a good solution Citation2. This article shows a novel approach based on evolutionary computation able to enhance exploration while preserving convergence. The proposed algorithm has been called MeTEO to point out its metric-topological and evolutionary inspiration. It is based on the hybridization of two heuristics coming from swarm intelligence, the flock-of-starlings optimization (FSO; a topological swarm) and the standard particle swarm optimization (PSO; a metric swarm), with a third evolutionary heuristic, the bacterial chemotaxis algorithm (BCA), which has no collective behaviour. In particular, the FSO was first described and applied in Citation3,Citation4 as a modification of the well-known PSO Citation5 obtained by adding topological rules to the metric rules that are typical of the PSO. The FSO is inspired by recent naturalistic observations of real starling flight Citation5. As shown in Citation3,Citation4, the FSO is particularly suitable for exploration and multimodal analysis. The BCA Citation6 is based on the emulation of the motion of a real bacterium looking for food (i.e. the fitness function); it is a heuristic that shows its best performance in local search Citation1. The present approach uses the FSO to explore the solution space, the PSO to investigate subspaces in which the global optimum could be present and, finally, the BCA to refine solutions. A parallel strategy is implemented: the FSO runs permanently and, each time it finds a possible solution, the PSO is launched; after that, the PSO is replaced by the BCA, and so on. A final important computational strategy completes the approach: the fitness function is deliberately made worse in those narrow regions in which the FSO has decided to launch the PSO–BCA pair. This fitness modification (FM) aims to prevent the FSO from coming back to an already explored subspace. MeTEO has been tested on the main optimization benchmarks currently used in the literature, but novel, harder benchmarks are also presented. Moreover, MeTEO has been tested on an inverse problem regarding the identification of the parameters of the Jiles–Atherton (J–A) magnetic hysteresis model.

2. Heuristics components of MeTEO

2.1. Swarm algorithms: PSO and FSO

The PSO is one of the most widely used and studied optimization methods. It was proposed by Kennedy and Eberhart Citation7 in 1995, starting from the works of Reynolds Citation8 and Heppner and Grenander Citation9. It is an algorithm based on metric rules applied to simulate the collective behaviour of swarms. The metric approach describes the behaviour of a flying bird that must keep its distance from its neighbours within a fixed interaction range, i.e. all birds keep their velocities aligned with those of the other flock members. Although the metric rule allowed the simulation of a collective movement of animals, significant differences remained with respect to the behaviour of a real flock. Nevertheless, on the basis of this paradigm, Kennedy and Eberhart Citation7 first proposed their approach by establishing a one-to-one relationship between the motion of a flock governed by metric rules during its search for food and the iterative steps of a numerical algorithm searching for the solution of an optimization problem. As a second step, they found that some metric rules of the paradigm were an obstacle for multi-objective optimization tasks. Thus, they modified the algorithm by removing some parts of it: for example, the matching with the velocity of the nearest neighbour was removed, and so on. These variations altered the virtual collective movement of the swarm, and the final algorithm simulates a behaviour more similar to a swarm of insects than to a flock of birds. Thus, we can say that the original PSO started from the simulation of a flock of birds and arrived at the emulation of a swarm of generic 'particles'. The introduction of topological rules into the PSO is the hub of the algorithm called FSO. The FSO Citation3,Citation4 adopts an approach based on recent naturalistic observations Citation5 of the collective behaviour of European starlings (Sturnus vulgaris). The authors of Citation5 discovered that the interaction among members of the same flock has a topological nature: what matters is how many intermediate birds separate two starlings, not how far apart they are from each other. This means that the main property of the topological interaction is that each starling interacts with a fixed number of neighbours, i.e. their metric distance is not crucial. Thus, real flocks of starlings show a behaviour that can be numerically simulated by using topological rules rather than metric rules. In fact, the topological approach is able to describe the density changes that are typical of flocks of birds, whereas the metric approach is not. In real flocks, each generic k-th bird controls and follows the flight of a certain number of other members of the flock, no matter where they are inside the flock.
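
To make the metric/topological distinction concrete, the following Python sketch contrasts the two neighbourhood rules; the interaction radius and the number of followed members are illustrative values, not parameters taken from the paper.

```python
import numpy as np

def metric_neighbours(positions, k, radius):
    """Metric rule (PSO paradigm): particle k interacts with all members
    lying within a fixed interaction radius."""
    d = np.linalg.norm(positions - positions[k], axis=1)
    return np.where((d > 0.0) & (d <= radius))[0]

def topological_neighbours(positions, k, n_followed):
    """Topological rule (FSO paradigm): bird k always follows a fixed
    number of other members, however far away they are."""
    d = np.linalg.norm(positions - positions[k], axis=1)
    ranked = np.argsort(d)            # ranked[0] is bird k itself
    return ranked[1:n_followed + 1]   # fixed cardinality, distance not crucial

flock = np.random.default_rng(0).uniform(-10, 10, size=(30, 2))
print(metric_neighbours(flock, 0, radius=3.0))         # may be empty or large
print(topological_neighbours(flock, 0, n_followed=7))  # always 7 members
```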

The pseudo-codes of the implemented PSO and FSO, referred to a generic fitness function to be minimized in a search space of given dimension, are recalled in Figures 1 and 2.

Figure 1. Main parameters managed by PSO and FSO.

Figure 2. Pseudo-code for PSO and FSO.

In particular, in Figure 1, the boxes numbered from 1 to 11 define the main parameters common to both the PSO and FSO algorithms. They are: the dimension of the search space; the values of the main parameters for each j-th particle; the maximum number of iterations; the fitness function; the maximum value of each velocity component; the initialization of velocities (by means of a function returning a random value between 0 and 1); the personal fitness (initialized to an arbitrarily large value); the global fitness; the position of each j-th particle; the value of the inertial coefficient; the maximum value of the cognitive coefficient; the maximum value of the social coefficient; and the fitness threshold. Moreover, in Figure 1, blocks 12 and 13 indicate the parameters used only by the FSO (i.e. they do not appear in the PSO algorithm): the maximum value among all topological coefficients and the number of birds that are controlled by each single bird within the flock. In Figure 2, the pseudo-code valid both for PSO and for FSO is reported.

However, it is evident from Figure 2 that the PSO and the FSO differ only in the expressions written in blocks #20 and #21, respectively. This apparently small difference produces huge differences in the behaviour of the two heuristics. Even if the PSO shows quite good exploration capability as well as a good degree of convergence, it has been noted that, especially when the search space becomes larger and larger, it can be trapped in a local minimum without the possibility of escaping Citation1. Furthermore, for multi-modal functions, the algorithm is not exhaustive Citation1. The standard PSO has undergone many changes: many authors have derived new PSO versions and published theoretical studies on the effects produced on the algorithm by changing the values of the various parameters (e.g. Citation10), and other authors have investigated the effects produced by changing some aspects of the algorithm (see e.g. Citation11,Citation12 and the references therein). It is important to note, however, that the cited papers focus their attention more on convergence than on exploration. On the other hand, in Citation3,Citation4, it has been shown that the FSO has high exploration capability, avoids entrapment in local minima and is particularly suitable for multimodal optimizations. Unfortunately, the FSO shows a lack of convergence. This is the cost to be paid for obtaining high exploration capability. Practically, the FSO does not stop running without user intervention. But this is also the reason that allows the FSO to operate a complete mapping of the solution space. This makes the FSO immune from the worst initializations, i.e. it can return a good solution regardless of how large the dimension of the solution space is. In fact, the FSO has the potential to find each subspace in which a minimum (local or global) lies. Although the FSO may or may not find better solutions depending on the values assigned to the parameters in Figure 1, the PSO cannot ever achieve the exploration capability of the FSO simply by means of a modification of its parameters. We can conclude these notes by saying that the FSO is a very good explorer but is not convergent, whereas the PSO strikes a balance between exploration and convergence, favouring (maybe too much) convergence.
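
Since the text does not reproduce the expressions of blocks #20 and #21, the sketch below shows where the two heuristics plausibly diverge: a standard PSO velocity update, plus a hypothetical topological term in which each FSO bird also matches the velocities of the members it follows. The coefficient names (w, c1, c2, h) and the form of the topological term are assumptions, not the paper's exact formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_velocity(v, x, p_best, g_best, w=0.72, c1=1.49, c2=1.49):
    """Standard PSO update: inertia + cognitive + social terms."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)

def fso_velocity(v, x, p_best, g_best, v_followed, h=0.5, **kwargs):
    """FSO-like update: the PSO terms plus a topological term matching the
    mean velocity of the followed flock members (illustrative form)."""
    r3 = rng.random(x.shape)
    return (pso_velocity(v, x, p_best, g_best, **kwargs)
            + h * r3 * np.mean(v_followed, axis=0))

# toy usage: one particle in 2D following three flock members
v, x = np.zeros(2), np.array([1.0, -2.0])
v_followed = rng.standard_normal((3, 2))
print(fso_velocity(v, x, p_best=x, g_best=np.zeros(2), v_followed=v_followed))
```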

2.2. The BCA

The BCA has been proposed in Citation6 and is based on the emulation of the motion of a real bacterium looking for food (i.e. the fitness function). A mathematical description of a 2D bacterium motion can be developed by assuming an assigned speed v and by determining suitable probabilistic distributions for the duration of the motion and for the direction taken by each individual. The 2D description can be easily extended to n-dimensional hyperspaces by defining, for the virtual-bacterium path, a vector of n positions and a vector of directions. Thus, the virtual bacterium motion follows the rules proposed in Citation6 and summarized here in Figures 3 and 4:

Figure 3. Main parameters managed by BCA.

Figure 4. Pseudo-code of BCA.

In Figure 3, T and μ indicate the expectation values of suitable exponential probability density functions, σ the standard deviation and T0 the minimum mean time; the figure also defines the module of the difference between the new and the old position vectors of the individuals and the cost function to be minimized. The parameters T0 and τ are usually called strategy parameters. Regarding directions, the probability density function describing the turning angle between two consecutive trajectories is Gaussian.

Figure 4 shows the flow-chart of the BCA pseudo-code. The path of each bacterium is built by adding, step by step, a displacement vector to the old position of the virtual bacterium. Moreover, the b parameter is a further strategy parameter, introduced to modify the dynamics of the individuals' movement. Finally, for each bacterium, the fitness variation is the difference between the current fitness value, referred to the current position (current algorithm step), and the fitness value referred to the previous position (previous algorithm step).

The BCA effectiveness is strongly influenced by the choice of the parameters T0, τ and b. They are usually obtained empirically, depending on the typology of the optimization problem. In particular, the BCA convergence becomes difficult if the T0 value is excessively large. This is because T0 sets the shortest time interval of a single position change of the bacterium: this shortest movement should be small enough to allow the BCA to achieve the requested accuracy. On the other hand, if the elementary bacterium movement were too small, the BCA running time would be excessive. Furthermore, in the presence of a large gradient (i.e. the motion excitation for the bacterium), b should be chosen small enough to avoid the risk of being driven away from the attraction zone.
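
The 2D chemotaxis rules can be sketched as follows. The exponential run-duration law and the Gaussian turning angle follow the description above; the specific angle statistics (mean 62°, deviation 26°, as reported in the BCA literature Citation6) and the exact dependence of the run time on T0 and b are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bca_step(x, angle, f, T0=0.1, b=1.0, v=1.0, prev_grad=0.0):
    """One 2D chemotaxis move. The run duration is drawn from an exponential
    law: its expectation is T0 after an unfavourable move and grows (scaled
    by the strategy parameter b) after a favourable one, emulating longer
    runs while the fitness improves."""
    T = T0 if prev_grad >= 0.0 else T0 * (1.0 + b * abs(prev_grad))
    duration = rng.exponential(T)
    # Gaussian turning angle between two consecutive trajectories
    angle += rng.choice([-1.0, 1.0]) * rng.normal(np.deg2rad(62.0), np.deg2rad(26.0))
    step = v * duration * np.array([np.cos(angle), np.sin(angle)])
    x_new = x + step
    grad = (f(x_new) - f(x)) / max(np.linalg.norm(step), 1e-12)  # fitness slope
    return x_new, angle, grad

f = lambda p: float(np.sum(p ** 2))     # toy fitness to minimize
x, angle, grad = np.array([3.0, -2.0]), 0.0, 0.0
for _ in range(500):
    x, angle, grad = bca_step(x, angle, f, prev_grad=grad)
print(x, f(x))
```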

3. Parallel architecture, FM and the activation strategy of the different MeTEO components

3.1. Parallel architecture

MeTEO provides its best performance on a distributed architecture, and the algorithm has been fully designed for parallel computation based on a Master-Slaves configuration. According to the peculiarities of each single heuristic described previously (PSO, FSO and BCA), MeTEO runs the FSO on the Master node only, whereas the PSO and BCA run on the Slave nodes. The FSO performs the exploration of the whole space of solutions and, whenever it finds a sub-region in which there is a high probability of discovering a global minimum (briefly called a 'suspected region'), two simultaneous operations are performed: (1) the current fitness is modified in such a way as to prevent the FSO from exploring any found 'suspected region' again; (2) MeTEO launches the PSO algorithm on a Slave node of the cluster, the PSO population being initialized by means of the best result found by the FSO at the current iteration. Let us explain these points in more detail.
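
A minimal sketch of this Master-Slaves flow, assuming Python's multiprocessing as the distribution layer; the refine function is a stand-in for the actual PSO + BCA pair, and the three 'suspected region' centres stand for hypothetical FSO outputs.

```python
from multiprocessing import Pool

import numpy as np

def fitness(x):
    """Original fitness (toy example): the Slaves always minimize this one."""
    return float(np.sum(np.asarray(x) ** 2))

def refine(centre):
    """Slave task: stand-in for PSO followed by BCA, both initialized on the
    centre of the suspected region shipped by the Master."""
    rng = np.random.default_rng()
    best = np.asarray(centre, dtype=float)
    for step in (0.5, 0.05):                  # coarse (PSO-like) then fine (BCA-like)
        for _ in range(300):
            cand = best + rng.normal(scale=step, size=best.size)
            if fitness(cand) < fitness(best):
                best = cand
    return fitness(best), best

if __name__ == "__main__":
    centres = [[2.0, -1.0], [-3.0, 0.5], [0.2, 0.1]]   # hypothetical FSO findings
    with Pool() as slaves:                    # one PSO + BCA job per Slave
        results = slaves.map(refine, centres)
    print(min(results, key=lambda r: r[0]))   # best minimum from the stored list
```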

3.2. FM and activation strategy of each single MeTEO component

The FM is inspired by the famous Tabu-search algorithm Citation13, and in particular by the Tabu list. Since the global optimum coincides with the smallest value achievable by the fitness function, the FM has to ensure that the new fitness function never indicates again a 'suspected region' that has already been detected by the FSO. The FM consists in adding to the past fitness function a positive Gaussian function centred on the best co-ordinates found by the FSO at the current iteration. A further control is made by MeTEO on the number of iterations for which the fitness does not improve its value: if the fitness does not improve for a prefixed number of FSO iterations, MeTEO assumes the co-ordinates of the last found global best as the centre of a 'suspected region' and activates a Slave node on which PSO and BCA act. The metric dimension of this region is automatically set by the FM, as described next. It is important to remark that the FM acts only on the PC-Master, i.e. it is valid only for the FSO, whereas the fitness function keeps its original expression for both the PSO and the BCA working on the PC-Slaves. In more detail, let us show how the FM acts when the FSO finds a generic k-th 'suspected region'. Before finding it, the FSO was managing a fitness function \(f_{k}(\mathbf{x})\) that was frozen at the moment in which the 'suspected region' was found (the original starting fitness being \(f_{0}(\mathbf{x})\)). Let \(\mathbf{x}^{*}_{k}\) be the vector collecting the co-ordinates minimizing the value of \(f_{k}(\mathbf{x})\). Then, the FM operates according to the following equation: (1) \(f_{k+1}(\mathbf{x}) = f_{k}(\mathbf{x}) + A\,\exp\bigl(-\lVert\mathbf{x}-\mathbf{x}^{*}_{k}\rVert^{2}/(2\sigma_{k}^{2})\bigr)\), where A is a suitable constant that must have the opposite sign with respect to \(f_{k}(\mathbf{x}^{*}_{k})\), and the standard deviation \(\sigma_{k}\) defines the size of the k-th 'suspected region' detected by the FSO. In this way, the new fitness \(f_{k+1}(\mathbf{x})\), which will be managed by the FSO until it finds a further 'suspected region', no longer shows a minimum in \(\mathbf{x}^{*}_{k}\), i.e. the FSO will not be attracted by that region anymore. Obviously, no FM is made on the Slave nodes. Let us now see what happens on a Slave node. When a k-th 'suspected region' is detected, a Slave node is activated. On this node, the PSO is initialized by \(\mathbf{x}^{*}_{k}\), whereas the fitness function is always set equal to the starting one, \(f_{0}(\mathbf{x})\). The PSO is left to run for a prefixed number of iterations; when this number is reached, the PSO ends its task and is substituted, on the same Slave node, by the BCA. The BCA population is initialized by means of the best result achieved by the PSO so far. The BCA plays the role of the local search: it looks for the minimum until its own number of iterations reaches a maximum number that has been set before running MeTEO.
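
A sketch of Equation (1) as a fitness wrapper, assuming the Gaussian form reconstructed above; the values of A and sigma are illustrative.

```python
import numpy as np

def fitness_modification(f_k, x_star, A, sigma):
    """Equation (1): f_{k+1}(x) = f_k(x) + A*exp(-||x - x*||^2 / (2*sigma^2)).
    The bump removes the minimum at x_star from the Master's fitness; the
    Slaves keep using the original f_0."""
    x_star = np.asarray(x_star, dtype=float)
    def f_next(x):
        d2 = float(np.sum((np.asarray(x, dtype=float) - x_star) ** 2))
        return f_k(x) + A * np.exp(-d2 / (2.0 * sigma ** 2))
    return f_next

f0 = lambda x: float(np.sum(np.asarray(x) ** 2))   # original fitness f_0
f1 = fitness_modification(f0, x_star=[0.0, 0.0], A=10.0, sigma=0.5)
print(f0([0.0, 0.0]), f1([0.0, 0.0]))   # the origin is no longer a minimum
```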

While the process on a Slave node is running, the FSO continues to explore the space on the Master node. Any time the FSO finds a further 'suspected region', MeTEO launches a further process on a different Slave node, and so on. Any final result coming from a Slave node is stored in a dedicated file located in a shared folder on the Master node. After delivering the found solution to the Master node, a Slave is ready to be used again for a further exploration of a new 'suspected region'. When all processes have ended and all PC-Slaves have completed the PSO + BCA procedure previously described, a list of all the detected minima is available in the file in which the results coming from the Slaves have been stored. At the end, the best minimum can be trivially identified from this stored list. Finally, some remarks on the FSO stopping criterion must be made. A simple criterion consists of stopping MeTEO whenever the FSO performs a number of iterations set by the user before running MeTEO. However, this strategy may not be effective in all cases: for some hard multi-modal optimizations, i.e. in those cases in which the FSO needs a higher number of iterations to detect all the global minima, it can be convenient to use, alternatively, a stopping criterion based on a maximum number of 'suspected regions'. In other cases, it is recommendable to combine the two criteria through a Boolean OR logic gate. Obviously, the choice of a specific MeTEO stopping criterion depends on the user's experience with the optimization or inverse problem to be solved.
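
The combined rule reads as a Boolean OR of the two caps; a trivial sketch, with both thresholds as user-set assumptions:

```python
def should_stop(fso_iterations, n_suspected, max_iterations=None, max_regions=None):
    """Stop MeTEO when the FSO iteration cap OR the suspected-region cap
    (whichever are enabled) has been reached."""
    stop_by_iters = max_iterations is not None and fso_iterations >= max_iterations
    stop_by_regions = max_regions is not None and n_suspected >= max_regions
    return stop_by_iters or stop_by_regions

print(should_stop(1000, 14, max_iterations=1000))   # True: iteration cap hit
print(should_stop(500, 80, max_regions=80))         # True: region cap hit
```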

4. Validation

MeTEO has been validated on typical optimization benchmarks, as well as on typical inverse problems, i.e. the identification of the J–A hysteresis model from the knowledge of an experimental hysteresis loop and the fit problem called 'Amid_pro', which is one of the 295 ODE test examples proposed by Schittkowski Citation14.

4.1. Validation on optimization benchmarks

Test results obtained by applying MeTEO to an ad hoc benchmark are shown below. The benchmark used is: (2)

Table 1. List of benchmarks used for tests.

In Figure 5, the cross-sections of (2) in planes at constant x and constant y are shown. The benchmark (2) has its smallest value (global minimum) at the point (−20, 20), located at the border of the variable range. Thus, if MeTEO is initialized at the opposite corner of the variable range, it has to escape from the closest 'attractive' local minimum and from the other local minima on the 'hill' that separates the starting point from the global minimum.

Figure 5. Cross-sections of (2) in planes at constant x and y.

Each single heuristic composing MeTEO has to explore a different range within the solution space, according to the different exploration capability it shows. Thus, for problem (2), whereas the FSO explores the whole range, the PSOs and the BCAs explore smaller sub-regions, depending on the size of the 'suspected region' detected by the FSO.

At the end of the whole processing, MeTEO detected 14 'suspected regions' for problem (2). Among them, MeTEO found the global minimum in the last detected 'suspected region'. In more detail, in this last region the FSO detected the smallest value, then the PSO refined it, and finally the BCA found the best fitness, equal to −363.77, at the point (−19.82, 19.58).

Taking into account that the true global-minimum co-ordinates are (−20, 20), the good accuracy of MeTEO is evident. The total PC-Master processing time (FSO) was 54.8 s, whereas the processing time of the PC-Slaves working in parallel was about 67 s (the average time taken by the PSOs was 63 s, whereas 4 s were necessary for the BCA).

MeTEO performance has also been validated on classical optimization benchmarks (Table 1) and compared with that returned by each single heuristic PSO, FSO and BCA solving the same problems.

The MeTEO performances for each benchmark presented in Table 1 have been evaluated by averaging the results obtained over 50 different launches, both for MeTEO and for each single heuristic. Multiple launches are required because of the stochastic nature of the algorithms. For each launch, both MeTEO and the other heuristics have always been initialized at the same starting point; obviously, the starting point has been chosen differently for different benchmarks. The dimension of the search space has been intentionally made sizeable in order to make the optimization more difficult Citation1, and just 1000 iterations have been set both for MeTEO and for the other single heuristics.

All the simulation results are listed in Tables 2 and 3 for analysis and comparison. In particular, Table 2 reports the number of global optima correctly detected by each algorithm. We have considered as a success the event in which the algorithm finds a minimum showing a percentage error on the true minimum smaller than a fixed threshold.
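
The success criterion above can be computed as follows; the 1% threshold is an illustrative choice, since the paper does not state its value.

```python
import numpy as np

def success_rate(found_minima, true_minimum, threshold_pct=1.0):
    """Fraction of launches whose found minimum has a percentage error,
    with respect to the true minimum, below the fixed threshold."""
    found = np.asarray(found_minima, dtype=float)
    errors = 100.0 * np.abs((found - true_minimum) / true_minimum)
    return float(np.mean(errors < threshold_pct))

# toy usage with three launches against the benchmark (2) minimum
print(success_rate([-363.77, -360.0, -250.0], true_minimum=-364.0))
```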

Table 2. Performance of MeTEO and its components.

Table 3. MeTEO performances.

As can be seen in Table 2, for the unimodal optimizations, Levy and Schaffer, MeTEO always obtains a success, as does the FSO, whereas both PSO and BCA fail on the Schaffer function. For the harder multimodal benchmarks, the power of MeTEO is more evident: it always finds at least one global minimum, and in many cases all the global minima, whereas FSO, PSO and BCA cannot assure the same performance.

In Table 3, the best fitness results obtained by MeTEO are listed. In particular, it shows the best performance achieved by each single MeTEO component in the best 'suspected region'. For multimodal functions, the best results obtained for one of the global minima are indicated. Finally, in the last column of Table 3, the number of 'suspected regions' detected by MeTEO during the elaboration is also reported.

4.2. Validations on inverse problems

MeTEO has also been tested on two inverse problems, as follows.

4.2.1. Identification of the J–A model

The Jiles–Atherton hysteresis model identification. Let us recall the J–A model Citation15: (3) \(M = M_{irr} + c\,(M_{an} - M_{irr})\), where \(M_{an}\) is the anhysteretic magnetization provided by the Langevin equation: (4) \(M_{an} = M_{s}\,[\coth(H_{e}/a) - a/H_{e}]\), in which \(H_{e} = H + \alpha M\) is the effective magnetic field. In (3), \(M_{irr}\) is the irreversible magnetization component, defined by (5) \(\mathrm{d}M_{irr}/\mathrm{d}H = (M_{an} - M_{irr})/\bigl(k\delta - \alpha\,(M_{an} - M_{irr})\bigr)\), where \(\delta = \mathrm{sign}(\mathrm{d}H/\mathrm{d}t)\).

The parameters of the J–A model to be identified are \(a\), \(c\), \(M_{s}\), \(k\) and \(\alpha\); their physical meaning is the following: \(a\) is a form factor, \(c\) the coefficient of reversibility of the movement of the walls, \(M_{s}\) the saturation magnetization and, finally, \(k\) and \(\alpha\) represent the hysteresis losses and the interaction between the domains, respectively.

The performed test is based on the use of a given set of parameters Citation15 inserted into Equations (3)–(5) to obtain, by integration, a pseudo-experimental loop. The sampled points of this pseudo-experimental loop have been used to estimate the error between that loop and the loops returned by MeTEO and by each single heuristic. All algorithms have been randomly initialized; the maximum number of 'suspected regions' has been set to 80.
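
A rough sketch of such a loop-matching fitness, assuming the Equations (3)–(5) reconstructed above and a simple explicit-Euler integration; this is not the authors' integration scheme, and the field path and parameter set are placeholders.

```python
import numpy as np

def langevin(he, a, ms):
    he = np.where(np.abs(he) < 1e-6, 1e-6, he)     # avoid the singularity at 0
    return ms * (1.0 / np.tanh(he / a) - a / he)

def ja_loop(params, h_path, m0=0.0):
    """Explicit-Euler integration of the reconstructed J-A equations (3)-(5)
    along an applied-field path; a coarse sketch, not the paper's scheme."""
    a, c, ms, k, alpha = params
    m = m0
    out = np.empty(len(h_path))
    for i, h in enumerate(h_path):
        dh = h - h_path[i - 1] if i > 0 else 0.0
        delta = 1.0 if dh >= 0.0 else -1.0          # delta = sign(dH/dt)
        he = h + alpha * m                          # effective field
        man = langevin(he, a, ms)                   # Eq. (4)
        m_irr = (m - c * man) / (1.0 - c)           # invert Eq. (3)
        denom = k * delta - alpha * (man - m_irr)
        dmirr = (man - m_irr) / denom if abs(denom) > 1e-9 else 0.0  # Eq. (5)
        if delta * (man - m) < 0.0:                 # usual guard on dips
            dmirr = 0.0
        dman = (langevin(he + 1e-3, a, ms) - langevin(he - 1e-3, a, ms)) / 2e-3
        m = m + ((1.0 - c) * dmirr + c * dman) * dh  # dM = dM_irr + dM_rev
        out[i] = m
    return out

def loop_error(params, h_path, m_experimental):
    """Fitness: mean squared error between the pseudo-experimental loop and
    the loop generated by a candidate set (a, c, Ms, k, alpha)."""
    return float(np.mean((ja_loop(params, h_path) - m_experimental) ** 2))

h = np.concatenate([np.linspace(0.0, 5e3, 200),
                    np.linspace(5e3, -5e3, 400),
                    np.linspace(-5e3, 5e3, 400)])
params = (1100.0, 0.2, 1.7e6, 400.0, 1.6e-3)   # placeholder parameter set
m_exp = ja_loop(params, h)                      # pseudo-experimental loop
print(loop_error(params, h, m_exp))             # 0.0 for the true parameters
```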

The tests consist of 50 different launches for MeTEO as well as for PSO, FSO and BCA working alone. In particular, the best result obtained by each algorithm is listed in Table 4. A further statistical analysis of the 50 launches is reported in Table 5, by using two indicators: the mean percentage error (MPE) and the determination coefficient R². As can be seen, MeTEO shows the lowest MPE in comparison with any single heuristic FSO, PSO or BCA working alone; the corresponding reconstructed loops are compared in Figure 6.
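
The two indicators can be computed as below; the exact MPE definition used by the authors is not given in the text, so this is a standard reading.

```python
import numpy as np

def mpe(y_true, y_pred):
    """Mean percentage error between measured and reconstructed samples."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def r2(y_true, y_pred):
    """Determination coefficient R^2 of the reconstruction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(mpe([1.0, 2.0, 4.0], [1.1, 1.9, 4.2]))   # toy samples
print(r2([1.0, 2.0, 4.0], [1.1, 1.9, 4.2]))
```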

Figure 6. Comparison of the simulation results obtained by MeTEO and FSO, PSO and BCA when each one works alone.

Table 4. Best results among 50 launches of MeTEO and its components for the J–A inverse problem.

Table 5. Statistical analysis of MeTEO and its components for the J–A inverse problem.

4.2.2. Amid proton replacement with protein folder inverse problem

A further classical inverse problem has been taken into consideration with the aim of validating MeTEO: the Amid proton replacement with protein folder inverse problem. In particular, we have taken into account the model called Amid_pro, proposed by Schittkowski in Citation14 among a list of 295 fit problems. The problem under analysis is described by the following set of differential equations: (6)

The performed test is based on the use of a given set of parameters Citation14 inserted into Equation (6) to obtain a pseudo-experimental data set by integration. Also in this validation test, the performances of the heuristics have been computed over 50 launches with random selection of the guess parameters, and the maximum number of 'suspected regions' has been set to 80. In this test as well, we have observed that MeTEO returned the best solution in comparison with those of each single heuristic working alone. The best results obtained among the 50 launches per heuristic are listed in Table 6, and the corresponding curves obtained by integrating (6) are plotted in Figure 7. A statistical analysis of the performance of the several heuristics is shown in Table 7. Also in this case, it is evident that MeTEO returns the best performance in statistical terms.

Figure 7. Comparison of the simulation results obtained by MeTEO and FSO, PSO and BCA when each one works alone on the Amid_pro inverse problem.

Table 6. Best results among 50 launches of MeTEO and its components for the Amid_pro inverse problem.

Table 7. Statistical analysis of MeTEO and its components for the Amid_pro inverse problem.

5. Conclusions

A hybrid optimization algorithm called MeTEO, based on evolutionary computation, has been proposed. The aim of MeTEO is to assure convergence without sacrificing exploration capability. For this reason, MeTEO uses three different heuristics: two coming from swarm intelligence and the third from the simulation of the behaviour of bacteria in a habitat.

Moreover, MeTEO has been designed (and implemented) on a parallel Master–Slaves computing system. The presented validation tests, applied both to optimization benchmarks and to two inverse problems, indicate that MeTEO shows competitive performance compared with the single heuristics PSO, FSO and BCA when each one works alone.

References

  • Fulginei, FR, and Salvini, A, 2007. Comparative analysis between modern heuristics and hybrid algorithms, COMPEL 26(2), pp. 264–273.
  • Liew, CW, and Lahiri, M, 2005. Exploration or convergence? Another meta-control mechanism for GAs, Proceedings of the 18th International Florida AI Research Society Conference (FLAIRS), Miami Beach, FL, USA, 2005.
  • Fulginei, FR, and Salvini, A, 2009. Hysteresis model identification by the flock-of-starlings optimization, Int. J. Appl. Electromagnet. Mech. 30(3–4), pp. 321–331.
  • Fulginei, FR, and Salvini, A, 2011. Influence of topological rules into the collective behavior of swarm intelligence: The flock of starlings optimization, Stud. Comput. Intell. 327, pp. 129–145.
  • Ballerini, M, Cabibbo, N, Candelier, R, Cavagna, A, Cisbani, E, Giardina, I, Lecomte, V, Orlandi, A, Parisi, G, Procaccini, A, Viale, M, and Zdravkovic, V, 2008. Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study, Proc. Natl Acad. Sci. 105, pp. 1232–1237.
  • Muller, SD, Marchetto, J, Airaghi, S, and Koumoutsakos, P, 2002. Optimization based on bacterial chemotaxis, IEEE Trans. Evol. Comput. 6(1), pp. 16–29.
  • Kennedy, J, and Eberhart, R, 1995. Particle swarm optimization, Proceedings of the IEEE International Conference on Neural Networks, Vol. IV, Perth, Australia, IEEE Service Center, pp. 1942–1948.
  • Reynolds, CW, 1987. Flocks, herds and schools: A distributed behavioral model, Comput. Graph. 21(4), pp. 25–34.
  • Heppner, F, and Grenander, U, 1990. A stochastic nonlinear model for coordinated bird flocks. In: Krasner, S, ed. The Ubiquity of Chaos. Washington, DC: AAAS Publications, pp. 233–238.
  • Engelbrecht, AP, 2002. Computational Intelligence: An Introduction. New York: Wiley.
  • Ali, MM, and Kaelo, P, 2008. Improved particle swarm algorithms for global optimization, Appl. Math. Comput. 196, pp. 578–593.
  • Clerc, M, and Kennedy, J, 2002. The particle swarm: Explosion, stability and convergence in a multi-dimensional complex space, IEEE Trans. Evol. Comput. 6(1), pp. 58–73.
  • Glover, F, 1987. Tabu search methods in artificial intelligence and operations research, ORSA Artif. Intell. 1(2), p. 6.
  • Schittkowski, K, 2004. Report, Department of Computer Science, University of Bayreuth. Available at http://www.ai7.uni-bayreuth.de/mc_ode.htm.
  • Fulginei, FR, and Salvini, A, 2005. Soft computing for identification of Jiles–Atherton model parameters, IEEE Trans. Magn. 41(3), pp. 1100–1108.
