
An interactive surrogate-based method for computationally expensive multiobjective optimisation

Pages 898-914 | Received 04 Oct 2016, Accepted 20 Apr 2018, Published online: 20 May 2018

Abstract

Many disciplines involve computationally expensive multiobjective optimisation problems. Surrogate-based methods are commonly used in the literature to alleviate the computational cost. In this paper, we develop an interactive surrogate-based method called SURROGATE-ASF to solve computationally expensive multiobjective optimisation problems. This method employs preference information provided by a decision-maker. Numerical results demonstrate that SURROGATE-ASF efficiently provides preferred solutions for a decision-maker. It can handle different types of problems involving, for example, multimodal objective functions and nonconvex and/or disconnected Pareto frontiers.

1. Introduction

Multiobjective optimisation problems (MOPs) arising, e.g., in engineering applications often involve multiple incommensurable, highly nonlinear, black-box and/or multimodal objective functions. Function evaluations in such problems may be conducted through time-consuming experiments and/or simulators. In the literature, these problems are called computationally expensive multiobjective optimisation problems. MOPs typically have several (in many cases infinitely many) optimal solutions, known as Pareto optimal solutions. The set of all Pareto optimal solutions in the objective space (called the Pareto frontier) can be nonconvex and/or disconnected. From a mathematical point of view, and without any preference considerations, all Pareto optimal solutions of an MOP are equally acceptable. Therefore, when solving an MOP, a decision-maker (DM) is required to provide preference information and to choose his/her preferred solutions.

Multiobjective optimisation methods are often categorised according to the decision-maker's role in the solution process into non-interactive and interactive methods (Miettinen, 1999). According to Miettinen (2008), in non-interactive methods the DM either is not involved or provides preference information before or after the actual solution process. In interactive methods, the DM plays an essential role and the intention is to support him/her in the search for the most preferred solution. In such methods, the steps of an iterative solution algorithm are repeated and the DM progressively specifies preference information so that the most preferred solution can be found. Examples of ways of specifying preference information are reference points and classifications of objective functions. Notably, the DM can determine and alter his/her preferences between iterations and in the meantime learn about the interdependencies in the problem as well as about his/her own preferences. This is a significant advantage of interactive methods, since becoming acquainted with the problem, its possibilities and its limitations is often very valuable for the DM. See Miettinen (2008) and Miettinen, Ruiz, and Wierzbicki (2008) for details of non-interactive and interactive methods.

Surrogate-based methods are commonly used in the literature to alleviate computational cost (Jones, 2001; Regis & Shoemaker, 2012; Audet, 2014; Tan, 2015; Dengiz et al., 2009; Chih, 2013; Reis dos Santos & Reis dos Santos, 2011; Hurrion, 2000; Kleijnen & van Beers, 2013; Van Beers & Kleijnen, 2003; Yang & Tseng, 2002; Mehdad & Kleijnen, 2015; Park, Yum, Hung, Jeong, & Jeong, 2016). The basic idea in such methods is to introduce a computationally less expensive problem, called a surrogate problem, and to replace the original problem with the surrogate one. In the literature, methods have been developed in which surrogate problems are built independently of the type of optimisation algorithm employed with them. In Messac and Mullur (2008), a non-interactive surrogate-based method is developed; it does not consider the role of a DM and cannot handle multimodal functions or disconnected Pareto frontiers. The surrogate-based methods proposed in Yun, Yoon, and Nakayama (2009) and Kitayama, Srirat, Arakawa, and Yamazaki (2013) consider the role of a DM in the solution process. However, they are not interactive methods: if the DM wishes to provide new preferences, the entire method must be run again, so the DM has to wait a long time to obtain new solutions corresponding to his/her preferences. In Hartikainen, Miettinen, and Wiecek (2012), Hartikainen and Lovison (2015) and Ruiz, Sindhya, Miettinen, Ruiz, and Luque (2015), methods have been developed with which the DM can find preferred solutions quickly. These methods require a set of approximated solutions generated a priori by some other surrogate-based method.

In Tabatabaei, Hakanen, Hartikainen, Miettinen, and Sindhya (2015), we surveyed surrogate-based methods and observed some shortcomings, including the inability to (1) involve a DM in the solution process, (2) deal with multimodal functions and (3) capture a nonconvex and disconnected Pareto frontier. To overcome these shortcomings, in this paper we develop an interactive surrogate-based method. In our method, a DM provides his/her preferences in the form of a reference point containing aspiration levels, which represent desirable values for the objective functions. According to Larichev (1992), preference information in the form of a reference point is regarded as understandable for a DM.

The proposed method is called SURROGATE-ASF and involves two phases, i.e., an initialisation and a decision-making phase. In the initialisation phase, a set of non-dominated solutions in the decision and objective spaces is generated. Using these solutions, hyper-boxes are formed in the decision space. Corresponding to each hyper-box, a single objective surrogate function is built by approximating an achievement scalarising function (ASF) (Wierzbicki, 1986). In the literature, this function is used to compute a (weakly) Pareto optimal solution for a given reference point. In the decision-making phase, the single objective surrogate functions built in the initialisation phase are employed to generate solutions reflecting the preferences of the DM. One should note that the DM is only involved in the latter phase. Based on the comparison in Muller and Shoemaker (2014), a cubic radial basis function (RBF) with a linear tail is employed as the metamodeling technique in this paper, but SURROGATE-ASF is not limited to RBFs.

SURROGATE-ASF has been developed for computationally expensive MOPs with box constraints. Its novelty can be summarised as follows: (1) the DM does not need to wait a long time to obtain solutions corresponding to his/her preferences, (2) SURROGATE-ASF utilises reference points given by the DM as input, (3) the DM can explore different regions, a particularly interesting region or the entire Pareto frontier, and (4) it provides approximated solutions in both the decision and objective spaces simultaneously.

The rest of this paper is organised as follows. In Section 2, the basic concepts used in this paper are addressed. The SURROGATE-ASF method is presented in Section 3. Numerical results of evaluating the performance of SURROGATE-ASF on a practical shape optimisation problem of designing an airfoil as well as some benchmark problems are presented in Section 4. Finally, in Section 5, we draw our conclusions and discuss paths for future research.

2. Basic concepts

In this section, we introduce the concepts and notations used in this paper. We consider multiobjective optimisation problems of the form

$$\underset{x \in S}{\text{minimise}} \ \{f_1(x), \ldots, f_k(x)\}, \qquad (1)$$

where $f_i : S \to \mathbb{R}$ are $k$ ($\ge 2$) conflicting, computationally expensive objective functions and $S = \{x \in \mathbb{R}^n : x_c^l \le x_c \le x_c^u,\ c = 1, \ldots, n\}$ is a nonempty feasible decision set, a subset of the decision space $\mathbb{R}^n$. A solution $x = (x_1, \ldots, x_n)^T \in S$ is called a feasible decision (variable) vector, where $x_c$, $c = 1, \ldots, n$, are decision variables and $x_c^l$ and $x_c^u$ are the lower and upper bounds of $x_c$, respectively. The image of $x$ in the objective space $\mathbb{R}^k$ is called a feasible objective vector, denoted by $f(x)$. The image of $S$ in the objective space is called the feasible objective set, denoted by $Z\,(= f(S))$.

A feasible solution $x \in S$ and the corresponding $f(x) \in Z$ are termed weakly Pareto optimal for problem (1) if there does not exist another feasible solution $x' \in S$ such that $f_i(x') < f_i(x)$ for all $i = 1, \ldots, k$. Correspondingly, they are Pareto optimal for problem (1) if there does not exist another feasible solution $x' \in S$ such that $f_i(x') \le f_i(x)$ for all $i = 1, \ldots, k$ and $f_j(x') < f_j(x)$ for at least one index $j \in \{1, \ldots, k\}$. The set of all Pareto optimal solutions in the objective space is called a Pareto frontier. Let the set $X^d = \{x^1, \ldots, x^d\}$ be an arbitrary subset of feasible solutions in $S$, and $F^d = \{f(x^1), \ldots, f(x^d)\}$ the corresponding objective vectors in $Z$. A solution $x^i$ (or $f(x^i)$), $i = 1, \ldots, d$, that satisfies the definition of Pareto optimality with respect to all solutions in $X^d$ (or $F^d$) is called a non-dominated solution in $X^d$ (or $F^d$). A solution $x$ (or $f(x)$) is called a locally non-dominated solution if there exists a non-empty set $S' \subset S$ such that $x$ (or $f(x)$) satisfies the definition of Pareto optimality with respect to all points in $S'$ (or $f(S')$). A Pareto optimal solution is a non-dominated solution, but a non-dominated one is not necessarily Pareto optimal.
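The dominance relations above translate directly into code. A minimal sketch (minimisation assumed throughout; the function names are ours, not from the paper):

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimisation):
    fa <= fb in every component and fa < fb in at least one."""
    fa, fb = np.asarray(fa, float), np.asarray(fb, float)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def nondominated(F):
    """Indices of the non-dominated vectors within the finite set F."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]

# toy example: the first two vectors are mutually non-dominated,
# the last two are each dominated by (1, 2)
idx = nondominated([[1, 2], [2, 1], [2, 2], [3, 3]])
```

For a set of evaluated sample points, `nondominated` returns exactly the solutions that are non-dominated in $F^d$ in the sense defined above.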

We also define a feasible solution $x^{e_i} \in \arg\min_{x \in S} f_i(x)$ for $i = 1, \ldots, k$. The $i$th extreme point (solution), for $i = 1, \ldots, k$, is defined as $z^{e_i} = f(x^{e_i})$. These extreme points in the objective space are also extremes of the Pareto frontier. Based on the extreme points, a vector of lower bounds of the objective function values in the Pareto frontier is defined as the ideal (objective) vector and denoted by $z^{ideal} = (z_1^{ideal}, \ldots, z_k^{ideal})^T$, where $z_i^{ideal} = f_i(x^{e_i})$ for $i = 1, \ldots, k$. The utopian (objective) vector $z^{utp}$ is a vector whose components are calculated by subtracting some small positive scalar (e.g., $10^{-6}$) from the components of $z^{ideal}$. A vector of upper bounds of the objective function values in the Pareto frontier is defined as the nadir (objective) vector and denoted by $z^{nadir} = (z_1^{nadir}, \ldots, z_k^{nadir})^T$. The components of the nadir vector can be approximated by, e.g., a pay-off table using the extreme points. More information on the ideal, utopian and nadir vectors is given, e.g., in Miettinen (1999). We define the difference operator $a - b = (a_1 - b_1, \ldots, a_k - b_k)$, where $a = (a_1, \ldots, a_k), b = (b_1, \ldots, b_k) \in \mathbb{R}^k$.
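The pay-off table estimate of the ideal and nadir vectors can be sketched as follows, assuming the $k$ extreme solutions $x^{e_i}$ are already available (the function name is ours):

```python
import numpy as np

def payoff_table(objectives, extreme_solutions):
    """Estimate the ideal, nadir and utopian vectors from a pay-off table.
    objectives: list of k callables f_i(x); extreme_solutions: list of k
    decision vectors x^{e_i}, the i-th minimising the i-th objective."""
    P = np.array([[f(x) for f in objectives] for x in extreme_solutions])
    z_ideal = P.diagonal().copy()   # row i minimises f_i, so P[i, i] = z_i^ideal
    z_nadir = P.max(axis=0)         # column-wise worst value over the table
    z_utp = z_ideal - 1e-6          # small positive shift, as in the text
    return z_ideal, z_nadir, z_utp

# toy bi-objective example on S = [0, 1]: f1(x) = x, f2(x) = 1 - x
objs = [lambda x: x[0], lambda x: 1.0 - x[0]]
zi, zn, zu = payoff_table(objs, [[0.0], [1.0]])
```

Note that the pay-off table only approximates the nadir vector; for more than two objectives it may over- or underestimate the true componentwise upper bounds of the Pareto frontier.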

In SURROGATE-ASF, the DM provides his/her preferences in the form of a reference point $\bar z = (\bar z_1, \ldots, \bar z_k)^T$, where $\bar z_i$ is an aspiration level representing a desirable value for the objective function $f_i$. One can find a preferred solution for a reference point given by a DM by applying an appropriate scalarisation. Scalarising problem (1) means formulating a single objective optimisation problem such that its (globally) optimal solution is a Pareto optimal solution for (1). In this paper, we consider the following widely used achievement scalarising function (ASF) (Wierzbicki, 1986) as an important element of SURROGATE-ASF:

$$\mathrm{ASF}: S \times \mathbb{R}^k \to \mathbb{R}, \quad (x, \bar z) \mapsto \max_{i=1,\ldots,k} \big( w_i (f_i(x) - \bar z_i) \big), \qquad (2)$$

where $w_i \ge 0$, for $i = 1, \ldots, k$, are non-negative fixed weights which set the direction along which $\bar z$ is projected onto the Pareto frontier. In this paper we set $w_i = 1/(z_i^{nadir} - z_i^{utp})$, for $i = 1, \ldots, k$, which is a widely used choice (Wierzbicki, 1986). A new (but still computationally expensive) single objective optimisation problem is formulated as

$$\underset{x \in S}{\text{minimise}} \ \mathrm{ASF}(x, \bar z). \qquad (3)$$

The reference point in problem (3) can be feasible or infeasible, i.e., inside or outside of the feasible objective set. As proved in Wierzbicki (1986), by solving problem (3) a (weakly) Pareto optimal solution corresponding to the reference point is obtained, regardless of the feasibility or infeasibility of the reference point. This does not hold for all other scalarising approaches (Miettinen, 1999). Different (weakly) Pareto optimal solutions can be obtained by changing the reference point in problem (3). One can add an augmentation term to (2) to avoid weakly Pareto optimal solutions, as discussed and proved in Miettinen (1999) and Wierzbicki (1986).

Numerically, in cases with a limited number of function evaluations, the optimal solution of problem (3) is a locally non-dominated solution. In what follows, an optimal solution of problem (3) corresponding to a reference point given by a DM is called a preferred solution, and one corresponding to a predetermined reference point a reference solution. Moreover, problem (1) is referred to as the original problem and its computationally expensive functions as original functions.
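As a concrete illustration, the ASF (2) is straightforward to evaluate once the weights are fixed. A sketch (the function name is ours), assuming the nadir and utopian vectors are already available:

```python
import numpy as np

def asf(x, z_ref, objectives, z_nadir, z_utp):
    """Achievement scalarising function (2) with the weights
    w_i = 1 / (z_i^nadir - z_i^utp)."""
    f = np.array([fi(x) for fi in objectives])
    w = 1.0 / (np.asarray(z_nadir, float) - np.asarray(z_utp, float))
    return float(np.max(w * (f - np.asarray(z_ref, float))))

# toy bi-objective example: f1(x) = x, f2(x) = 1 - x on S = [0, 1],
# with z_nadir = (1, 1) and z_utp = (0, 0), so w = (1, 1)
objs = [lambda x: x[0], lambda x: 1.0 - x[0]]
val = asf([0.5], [0.0, 0.0], objs, [1.0, 1.0], [0.0, 0.0])
```

Problem (3) is then the minimisation of `asf` over $x \in S$ with any suitable single objective optimiser.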

3. The SURROGATE-ASF method

In the SURROGATE-ASF method, a DM can provide preferred ranges of the objective functions. These are then treated as the utopian and nadir vectors corresponding to the region of the objective space that interests the DM. Alternatively, estimates of the utopian and nadir vectors can be incorporated into the method. In fact, if the utopian and nadir vectors are given, the DM can explore the entire Pareto frontier by providing different reference points through multiple interactions in the decision-making phase, to be discussed in Section 3.3. SURROGATE-ASF involves two phases, i.e., initialisation and decision-making phases. In the initialisation phase, a set of non-dominated solutions within the preferred ranges given by the DM is generated. Then, by calling Algorithm 1, to be discussed in the following subsection, a finite number of hyper-boxes is formed using the non-dominated solutions in the decision space. For each individual hyper-box, a single objective surrogate function is built by approximating the ASF (2), where the decision variables of problem (1) and the aspiration levels appearing in the ASF are treated as variables of the surrogate function. In the decision-making phase, to be discussed in Section 3.3, these computationally inexpensive surrogate functions are utilised to interact with the DM.

Figure 1. Projection of a given reference point onto the convex hull.

Figure 2. (a): A convex hull with 5 points for a bi-objective problem with q=4 divisions. (b): A convex hull with 15 points for a three-objective problem with q=4 divisions.

3.1. Surrogate building module

If the DM provides the preferred ranges, the extreme points of the corresponding region in the objective space must be calculated. To be more specific, the corresponding solutions in the decision space are found by solving problem (3) using the extreme points as reference points. One should note that if the extreme points are formed based on the estimates of the utopian and nadir vectors, then their corresponding solutions in the decision space are already available. The approximation of the ASF requires sample points for the aspiration levels and the decision variables. However, before discussing the procedure of selecting sample points, we first look at an interesting property of problem (3). This property helps identify the regions of the decision and objective spaces in which to select sample points.

Suppose that $H$ is the convex hull of all individual extreme points, constructed as the set of convex combinations of the extreme points in the objective space. Moreover, let $\bar z$ be a reference point given by a DM, and $x^*$ the corresponding optimal solution of problem (3). Thus, $x^* \in \arg\min_{x \in S} \big[\max_{i=1,\ldots,k} \big( w_i (f_i(x) - \bar z_i) \big)\big]$. On the other hand, $x^* \in \arg\min_{x \in S} \big[\max_{i=1,\ldots,k} \big( w_i (f_i(x) - \bar z_i) \big) + t\big]$ for all $t \in \mathbb{R}$. Therefore, we have

$$\begin{aligned} x^* &\in \arg\min_{x \in S} \Big[ \max_{i=1,\ldots,k} \big( w_i (f_i(x) - \bar z_i) \big) + t \Big] \\ &= \arg\min_{x \in S} \max_{i=1,\ldots,k} \big( w_i (f_i(x) - \bar z_i) + t \big) \\ &= \arg\min_{x \in S} \max_{i=1,\ldots,k} \Big( w_i \big( f_i(x) - \bar z_i + \tfrac{t}{w_i} \big) \Big) \\ &= \arg\min_{x \in S} \max_{i=1,\ldots,k} \Big( w_i \big( f_i(x) - (\bar z_i - \tfrac{t}{w_i}) \big) \Big) \\ &= \arg\min_{x \in S} \max_{i=1,\ldots,k} \big( w_i (f_i(x) - \bar z'_i) \big), \end{aligned}$$

where $\bar z'$ is treated as an arbitrary reference point on the ray passing through $\bar z$ in parallel with $w = (w_1, \ldots, w_k)^T$. It means that the optimal solution of problem (3) for a reference point $\bar z$ given by a DM is also the optimal solution of problem (3) for any reference point on the ray passing through $\bar z$ in parallel with $w$ (see Figure 1). The following problem gives the projected reference point $\bar z'$ in $H$:

$$\bar z' \in \arg\min_{h \in H} \| h - \bar z \|, \qquad (4)$$

where $\|\cdot\|$ is the Euclidean norm. Problem (4) applies if and only if there exists a real number $t \in \mathbb{R}$ such that $\bar z = h + t w$ for some $h \in H$.
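A quick numerical sanity check of the chain of equalities above (ours, not from the paper): shifting the reference point to $\bar z_i - t/w_i$ changes the ASF value by exactly $t$, so the minimiser over $x$ is unchanged for every $t$.

```python
import numpy as np

def asf_val(f, z, w):
    """ASF (2) evaluated at a precomputed objective vector f."""
    return float(np.max(w * (f - z)))

f = np.array([0.7, 0.3])   # objective vector of some fixed x
w = np.array([1.0, 2.0])   # fixed positive weights
z = np.array([0.2, 0.1])   # reference point

for t in (-0.5, 0.0, 1.3):
    shifted = z - t / w            # reference point moved along the ray
    lhs = asf_val(f, z, w) + t     # ASF(x, z) + t
    rhs = asf_val(f, shifted, w)   # ASF(x, z - t/w)
    assert abs(lhs - rhs) < 1e-12
```

Since the identity holds for every objective vector $f$, any two reference points on the same ray yield the same argmin in problem (3).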

Since any reference point within the preferred ranges of objective functions given by the DM can be projected onto the convex hull H (i.e., finding the closest point on H along the direction w), sample points for the aspiration levels as reference sample points are selected on this convex hull. To do this, first, a set of evenly distributed predetermined reference points including the extreme points is generated on the convex hull H and denoted by Z¯P. The reference solutions (non-dominated solutions) in the decision space corresponding to these reference points are obtained by solving problem (Equation3) for each individual predetermined reference point. This is conducted by employing a surrogate-based single objective optimisation method. The choice of this method has an impact on the performance of SURROGATE-ASF. One can use an appropriate state-of-the-art method to solve problem (Equation3) efficiently. These reference solutions are evaluated with the original functions.

In order to generate the predetermined reference points (without involving any DM), we use the method presented in Das and Dennis (1998), which places points on the convex hull of all individual extreme points (a $(k-1)$-dimensional simplex). With $q$ divisions along each objective coordinate axis in the objective space of a $k$-objective problem, the number $h_z$ of predetermined reference points (including the extreme points in the objective space) is

$$h_z = \binom{k+q-1}{q}. \qquad (5)$$

Figures 2(a) and 2(b) depict two convex hulls for bi- and three-objective problems with $q=4$ and $h_z = \binom{2+4-1}{4} = 5$ and $h_z = \binom{3+4-1}{4} = 15$ predetermined reference points (black circles), respectively.
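The lattice behind (5) can be generated in a few lines. The sketch below (not the authors' implementation) produces the barycentric weight vectors of the evenly spaced points; applying each weight vector to the extreme points gives the corresponding predetermined reference point on $H$. The count matches the binomial coefficient in (5):

```python
from itertools import combinations_with_replacement
from math import comb

def das_dennis(k, q):
    """Evenly distributed barycentric weights on the (k-1)-simplex with
    q divisions per axis (in the spirit of Das & Dennis, 1998)."""
    pts = set()
    for c in combinations_with_replacement(range(k), q):
        counts = [c.count(i) for i in range(k)]        # q units split over k axes
        pts.add(tuple(ci / q for ci in counts))        # coordinates are multiples of 1/q
    return sorted(pts)

# the number of generated points matches (5): h_z = C(k+q-1, q)
assert len(das_dennis(2, 4)) == comb(2 + 4 - 1, 4) == 5
assert len(das_dennis(3, 4)) == comb(3 + 4 - 1, 4) == 15
```

Each weight vector sums to one, so the induced reference points are convex combinations of the extreme points and therefore lie on $H$.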

The sets of reference solutions in the decision and the objective spaces are denoted by $X^P$ and $F^P$, respectively. By making use of the set of predetermined reference points $\bar Z^P$ and reference solutions $X^P$, the convex hull $H$ and the decision space are decomposed into a finite number of sub-regions and hyper-boxes, respectively. In what follows, $a = 1, \ldots, r$ represents the index of the $a$th hyper-box or sub-region, where $r$ is the number of hyper-boxes (sub-regions). For $k = 2$ and $3$, we have $r = q$ and $r = q^2$, respectively.

Algorithm 1 starts by constructing sub-regions on $H$ using the predetermined reference points in $\bar Z^P$. Each sub-region is formed by selecting the $k$ nearest neighbouring points on $H$, starting from one of the extreme points. Figures 2(a) and 2(b) represent one possible way of forming the sub-regions and numbering them for $k=2$ and $k=3$, respectively. Then, reference sample points (the stars in Figures 2(a) and 2(b)) are selected within these sub-regions. To select reference sample points, the predetermined reference points corresponding to the $a$th sub-region (i.e., $\bar Z^{P,a} = \{\bar z^{P,a,1}, \ldots, \bar z^{P,a,k}\}$) are considered as the extreme points of this sub-region. Then, by using the method presented in Das and Dennis (1998), a set of evenly distributed reference sample points within this sub-region on the convex hull is generated. The larger the number of generated reference sample points, the higher the accuracy of the surrogate function $\widetilde{\mathrm{ASF}}^a$ corresponding to the $a$th sub-region. These reference sample points, along with the reference points in $\bar Z^{P,a}$, are considered as sample points corresponding to this sub-region and denoted by $\bar Z^a$.

To decompose the decision space, hyper-boxes are built corresponding to the sub-regions on the convex hull $H$. Considering the $a$th sub-region on $H$ and the set of predetermined reference points forming it, i.e., $\bar Z^{P,a} = \{\bar z^{P,a,1}, \ldots, \bar z^{P,a,k}\}$, the sets of non-dominated solutions corresponding to these predetermined reference points in the decision and the objective spaces are denoted by $X^{P,a} = \{x^{P,a,1}, \ldots, x^{P,a,k}\}$ and $F^{P,a} = \{f(x^{P,a,1}), \ldots, f(x^{P,a,k})\}$, respectively. Then, the lower and upper bounds of the decision variable $x_c$, for $c = 1, \ldots, n$, within the $a$th hyper-box $S^a$ corresponding to this sub-region are calculated as

$$\min\{x_c^{P,a,1}, \ldots, x_c^{P,a,k}\} \le x_c \le \max\{x_c^{P,a,1}, \ldots, x_c^{P,a,k}\}, \quad \text{for all } c = 1, \ldots, n. \qquad (6)$$

Figure 3 shows a simple example of hyper-boxes for a problem with two decision variables. Once $S^a$ is formed, a small number of initial sample points within this hyper-box is selected using some sampling technique, such as Latin hypercube sampling (LHS) (Helton, Johnson, Sallaberry, & Storlie, 2006), and evaluated with the original functions. These evaluated points, along with the non-dominated solutions in $X^{P,a}$, are considered as the initial sample points in the decision space corresponding to $S^a$ and denoted by $\bar X^a$.
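The bounds (6) are simply the componentwise minima and maxima over the $k$ reference solutions of a sub-region. A minimal sketch (function name ours):

```python
import numpy as np

def hyper_box(reference_solutions):
    """Lower and upper bounds (6) of the hyper-box S^a spanned by the k
    reference solutions x^{P,a,1}, ..., x^{P,a,k} (each an n-vector)."""
    X = np.asarray(reference_solutions, dtype=float)
    return X.min(axis=0), X.max(axis=0)

# two reference solutions of a sub-region for n = 2 decision variables
lo, hi = hyper_box([[0.1, 0.8], [0.4, 0.2]])
```

LHS sampling within the box then amounts to drawing points componentwise between `lo` and `hi`.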

Figure 3. Hyper-boxes in the decision space corresponding to the sub-regions on the convex hull in Figure 2(a). For example, reference solutions $x^{P,1,1}$ and $x^{P,1,2}$ form $S^1$.

To build the surrogate function $\widetilde{\mathrm{ASF}}^a$ corresponding to the hyper-box $S^a$, a cubic RBF with a linear tail is employed to approximate the ASF (its description is given, e.g., in Muller, Shoemaker, and Piche (2013)). To do this, the following Cartesian product of the sets $\bar X^a$ and $\bar Z^a$ is formed as input data, denoted by $T^a$, for the RBF:

$$T^a = \bar X^a \times \bar Z^a = \{(\bar x, \bar z) \mid \bar x \in \bar X^a \text{ and } \bar z \in \bar Z^a\}. \qquad (7)$$

The ASF values of all elements in $T^a$ are calculated. One should note that the objective function values of all sample points in $\bar X^a$ are available. Then, $T^a$ is divided into validation and training sets using leave-one-out cross-validation (LOOCV) or the $k$-fold cross-validation method discussed in Muller and Shoemaker (2014) (note that this $k$ differs from the index $k$ of the $k$th objective function). Let $|T^a|$ be the cardinality of $T^a$. According to Muller and Shoemaker (2014), if $|T^a| \le 50$, we use LOOCV. If $|T^a| > 50$, we use the $k$-fold cross-validation method with $k=10$ for $50 < |T^a| \le 100$, $k=20$ for $100 < |T^a| \le 150$ and $k=30$ for $150 < |T^a| \le 200$. Having the elements in the training set and their corresponding ASF values as input data, the RBF is employed to build $\widetilde{\mathrm{ASF}}^a$. When using either LOOCV or the $k$-fold cross-validation method, the prediction accuracy of $\widetilde{\mathrm{ASF}}^a$ is evaluated using standard error measures such as the root mean squared error (RMSE) (Giunta, Watson, & Koehler, 1998) and/or $R^2$ (Jin, Chen, & Simpson, 2001).
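The size-dependent validation scheme and the Cartesian product (7) can be encoded directly. A sketch with hypothetical function names (the RBF fitting itself is omitted):

```python
from itertools import product

def training_data(X_a, Z_a):
    """Cartesian product (7) of decision sample points and reference sample points."""
    return list(product(X_a, Z_a))

def cv_folds(n):
    """Number of cross-validation folds as described in the text:
    LOOCV (n folds) up to 50 points, otherwise k-fold with k = 10, 20 or 30."""
    if n <= 50:
        return n            # leave-one-out: one fold per point
    if n <= 100:
        return 10
    if n <= 150:
        return 20
    if n <= 200:
        return 30
    raise ValueError("the text does not specify folds beyond 200 points")
```

Each pair in `training_data` is one input $(\bar x, \bar z)$ of the surrogate; its target value is $\mathrm{ASF}(\bar x, \bar z)$ computed from the already available objective values.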

The steps of the decomposition procedure of the decision space into a finite number of hyper-boxes and of building the corresponding surrogate functions are shown in Algorithm 1. Note that the prediction accuracy of $\widetilde{\mathrm{ASF}}^a$ may not be satisfactory after steps 1-10 of Algorithm 1 and, in that case, it needs to be improved. In the following subsection, we discuss an update strategy, called Algorithm 2, by which new sample points within the hyper-box $S^a$ can be selected. The process of updating $\widetilde{\mathrm{ASF}}^a$ is repeated until a desired accuracy is achieved.

The non-dominated solutions used to form hyper-boxes correspond to the predetermined reference points on the convex hull. It may happen that at least two predetermined reference points have the same corresponding non-dominated solution. In such cases, to form sub-regions and hyper-boxes, out of all predetermined reference points with the same non-dominated solution (i.e., the same objective values but different decision variable values), only one reference point (selected arbitrarily) and its corresponding non-dominated solution are considered along with other different predetermined reference points and non-dominated solutions. Then, sub-regions and hyper-boxes are formed. It may also occur that hyper-boxes overlap each other. In these cases, some sample points selected fall into more than one hyper-box. However, sub-regions and hyper-boxes are still formed as mentioned in Algorithm 1.

In practice, obtaining a desired accuracy may not be possible. In such a case, depending on the computational budget, one can consider a maximum number of function evaluations for each individual hyper-box. Then, either achieving a desired accuracy or reaching the maximum number of function evaluations can be considered as a stopping criterion.

3.2. Accuracy improvement module

To improve the prediction accuracy of $\widetilde{\mathrm{ASF}}^a$ corresponding to $S^a$, we discuss Algorithm 2. When updating $\widetilde{\mathrm{ASF}}^a$, two matters are considered, i.e., sampling non-dominated solutions within $S^a$ and maintaining good diversity among the sample points in $\bar X^a$.

In order to sample non-dominated solutions, we employ the sampling function discussed in Kitayama et al. (2013) to generate new sample points within $S^a$. The authors of Kitayama et al. (2013) modify the Pareto fitness function (Schaumann, Balling, & Day, 1998) and introduce a sampling function using RBF such that, by minimising this function, the optimal solution can be a non-dominated solution. See Kitayama et al. (2013) for details of building this sampling function. For the sake of simplicity, we refer to this sampling function as the modified Pareto fitness function (MPF).

By minimising the MPF iteratively, non-dominated solutions within $S^a$ can be generated. Nevertheless, the diversity of the points in $\bar X^a$, needed to cover the hyper-box, must also be considered. Thus, in some iterations of updating $\widetilde{\mathrm{ASF}}^a$, a solution obtained by minimising the MPF is not evaluated with the original functions. This requires a criterion to assess whether the optimal solution of the MPF should be evaluated with the original functions and added into $\bar X^a$ or not. We discuss this criterion here.

Since some points are not evaluated with the original functions, we introduce $\bar X_P^a$ and $\bar F_P^a$ as the sets of archived points generated during the updating process in the decision and the objective spaces, respectively. These sets include all points in $\bar X^a$ and $\bar F^a$ (the points evaluated with the original functions) as well as the sample points which are not evaluated with the original functions. In what follows, a procedure is discussed to assign objective function values to the points not evaluated with the original functions.

Building the MPF is based on the points in $\bar X_P^a$ and $\bar F_P^a$. To do this, all points in $\bar X_P^a$ and their corresponding MPF values are considered as input data to train a cubic RBF with a linear tail. Once the MPF is built, it is minimised and the optimal solution $\bar x^{cand}$ within $S^a$ is obtained. Then, as depicted in Figure 4, the closest point in $\bar X^a$ to $\bar x^{cand}$ in terms of the Euclidean distance is found. This point is denoted by $\bar x^{cand}_{close}$, the corresponding objective vector by $\bar f^{cand}_{close}$ and the Euclidean distance between $\bar x^{cand}$ and $\bar x^{cand}_{close}$ by $d^{cand}_{close}$. The closest point in $\bar X^a \setminus \{\bar x^{cand}_{close}\}$ to $\bar x^{cand}_{close}$ is also found. We denote this point by $\bar x^{close}_{close}$ and the Euclidean distance between $\bar x^{close}_{close}$ and $\bar x^{cand}_{close}$ by $d^{close}_{close}$.

Figure 4. Selecting sample points within $S^a$ to update $\widetilde{\mathrm{ASF}}^a$.

To ensure diversity among the sample points in $\bar X^a$, the criterion $d^{cand}_{close} > d^{close}_{close}/2$ is checked. This means that if $\bar x^{cand}$ is outside the circle centred on $\bar x^{cand}_{close}$ with radius $d^{close}_{close}/2$ (see Figure 4), then $\bar x^{cand}$ is evaluated with the original functions and the objective vector $\bar f^{cand}$ is calculated. Before adding $\bar x^{cand}$ and $\bar f^{cand}$ into $\bar X^a$ and $\bar F^a$, respectively, the prediction accuracy of $\widetilde{\mathrm{ASF}}^a$ is assessed. To accomplish this, a test set as defined in (7) is formed, where $\bar x^{cand}$ and all reference sample points in $\bar Z^a$ are utilised to form the Cartesian product set. All elements in the test set are evaluated with $\widetilde{\mathrm{ASF}}^a$ and the ASF, and the prediction accuracy of $\widetilde{\mathrm{ASF}}^a$ is checked. If the accuracy is not acceptable, $\bar x^{cand}$ is added into $\bar X^a$ and $\bar X_P^a$, and $\bar f^{cand}$ into $\bar F^a$ and $\bar F_P^a$, respectively. Then, the points in $\bar X^a$ are utilised to update $\widetilde{\mathrm{ASF}}^a$ and the points in $\bar X_P^a$ to update the MPF. The process of minimising the MPF and checking the diversity criterion is repeated. If the prediction accuracy is acceptable, then the process of updating $\widetilde{\mathrm{ASF}}^a$ is terminated.

If $d^{cand}_{close} \le d^{close}_{close}/2$, i.e., $\bar x^{cand}$ is inside the circle centred on $\bar x^{cand}_{close}$ with radius $d^{close}_{close}/2$, then $\bar x^{cand}$ is not evaluated with the original functions and is not added into $\bar X^a$. However, to generate new sample points with the MPF, an objective vector corresponding to $\bar x^{cand}$ is required. In this case, to avoid generating $\bar x^{cand}$ again, the objective vector corresponding to this point is artificially treated as a point in the objective space dominated by the objective vector $\bar f^{cand}_{close}$. To do this, an artificial objective vector, denoted by $\bar f^{cand}_{artif}$, is constructed for this point by adding a random vector with positive components to the objective vector $\bar f^{cand}_{close}$. This random vector is a fraction of $z^{nadir} - \bar f^{cand}_{close}$, e.g., $(z^{nadir} - \bar f^{cand}_{close})/10$. By this choice, $\bar f^{cand}_{close}$ is non-dominated with respect to $\bar f^{cand}_{artif}$, and $\bar f^{cand}_{artif}$ lies inside the feasible objective space. Then, $\bar x^{cand}$ and $\bar f^{cand}_{artif}$ are added into $\bar X_P^a$ and $\bar F_P^a$. Note that these points are not added into $\bar X^a$ and $\bar F^a$. Then, the MPF values of all points in $\bar X_P^a$ are calculated and the MPF is updated. The updating process is repeated until the stopping criterion regarding the accuracy of $\widetilde{\mathrm{ASF}}^a$ is met.

3.3. SURROGATE-ASF and decision-making

In this subsection, we present both the decision-making phase and the SURROGATE-ASF algorithm. As discussed in Section 3.1, in the initialisation phase, surrogate functions are built by calling Algorithm 1. These surrogate functions are employed to interact with the DM.

In the decision-making phase, if the DM did not provide any preferred ranges, the utopian and nadir vectors are shown to him/her, and the DM is supposed to specify reference points within these ranges. In any case, the DM provides a reference point $\bar z$. The interaction with the DM in SURROGATE-ASF corresponds to that of the reference point method given in Wierzbicki (1982). The reference point $\bar z$ is projected onto the convex hull $H$ by solving problem (4), as discussed in Section 3.1. In practice, to find the projected reference point, denoted by $\bar z'$, we do not solve problem (4). A simple approach is to generate a large number of uniformly distributed points on $H$ and then select the point that minimises the objective function of problem (4). Next, the sub-region and hyper-box corresponding to this projected reference point are identified, their index is denoted by $\tilde a$, and the following surrogate problem is solved by using any appropriate single objective optimisation method:

$$\underset{x \in S^{\tilde a}}{\text{minimise}} \ \widetilde{\mathrm{ASF}}^{\tilde a}(x, \bar z), \qquad (8)$$

where $S^{\tilde a}$ is the $\tilde a$th hyper-box. The optimal solution obtained is an approximation of the preferred solution in the decision space corresponding to the reference point $\bar z$ given by the DM. This solution is evaluated with the original functions and shown to the DM. The process of asking the DM for a reference point, projecting it onto the convex hull and solving the corresponding surrogate problem is repeated until the most preferred solution, satisfying the DM, is obtained. An overview of the SURROGATE-ASF algorithm, including the initialisation and decision-making phases, is given in Algorithm 3.
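The practical projection step described above can be sketched as follows. Sampling $H$ with a Dirichlet distribution over convex weights is our choice, not prescribed by the paper, and the off-ray distance stands in for the objective of problem (4); the subsequent surrogate minimisation over $S^{\tilde a}$ is omitted:

```python
import numpy as np

def project_reference_point(z_ref, w, extreme_points, rng, n_samples=10000):
    """Approximate projection of z_ref onto the convex hull H of the extreme
    points: sample H as convex combinations of the extreme points and return
    the sample closest to the ray through z_ref in direction w (measured
    after removing the ray component of the difference)."""
    Z = np.asarray(extreme_points, dtype=float)  # one extreme point per row
    z_ref = np.asarray(z_ref, dtype=float)
    w = np.asarray(w, dtype=float)
    k = Z.shape[0]
    lam = rng.dirichlet(np.ones(k), size=n_samples)  # convex weights
    H = lam @ Z                                      # samples on the hull
    diff = H - z_ref
    t = diff @ w / (w @ w)                           # component along the ray
    resid = diff - np.outer(t, w)                    # distance off the ray
    return H[np.argmin(np.linalg.norm(resid, axis=1))]

# bi-objective example: H is the segment between (0, 1) and (1, 0);
# the ray from (0.2, 0.2) in direction (1, 1) hits it near (0.5, 0.5)
rng = np.random.default_rng(1)
proj = project_reference_point([0.2, 0.2], [1.0, 1.0], [[0.0, 1.0], [1.0, 0.0]], rng)
```

Once `proj` is known, the sub-region containing it identifies the index $\tilde a$, and problem (8) is solved only within the corresponding hyper-box.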

4. Numerical results

In this section, we demonstrate the performance of SURROGATE-ASF through solving a shape optimisation problem of designing an airfoil. Since this problem contains objective functions with a practical meaning, it is easier to describe the decision-making process; therefore, we discuss the decision-making phase comprehensively for this problem. In addition to this practical problem, we consider academic benchmark problems exhibiting different characteristics such as non-convex Pareto frontiers, connected or disconnected sets of Pareto optimal solutions and multimodal functions. Results involving the benchmark problems are given in Section 4.2. The decision-making phase for these academic benchmark problems is similar to the one in the shape optimisation problem.

We compare the performance of SURROGATE-ASF with the method of Yun et al. (Citation2009) (henceforth called the Yun method) and the PAINT method (Hartikainen, Miettinen, & Wiecek, Citation2012). According to the findings presented in Tabatabaei et al. (Citation2015), the Yun method and PAINT are the methods most similar to SURROGATE-ASF because in all of them the DM provides preference information in the form of a reference point and, thus, these three methods are comparable.

The Yun method is a non-interactive method in which μ-ν-SVR (Yun et al., Citation2009) is applied as the metamodelling technique. Because it is not interactive by nature, the entire method must be rerun for each individual reference point given by the DM to enable interaction. In this method, each objective function is first approximated and a surrogate multiobjective optimisation problem is formulated. With the objective functions of the surrogate problem, problem (Equation3) is formulated as a computationally inexpensive single objective problem and solved. The solution of this problem, as well as some sample points selected based on the Lagrangian coefficients in μ-ν-SVR, are utilised to improve the accuracy of the surrogate problem. By comparing the performance of the Yun method and SURROGATE-ASF, we show the advantages of approximating the achievement scalarising function (Equation2) as a computationally expensive objective function and converting it to a computationally inexpensive one, over first approximating each expensive function individually and then forming the achievement scalarising function. We implemented the Yun method in MATLAB 2015b.

Similar to SURROGATE-ASF, the PAINT method requires a set of non-dominated solutions to be available to build a surrogate problem. In PAINT, a linear mixed integer multiobjective optimisation problem is introduced as a surrogate of the original problem. Then, this problem is scalarised using the achievement scalarising function including a reference point given by the DM. This scalarised problem is solved, and an approximated solution in the objective space is obtained. PAINT can only provide solutions in the objective space, i.e., decision variable values of the approximated solutions cannot be obtained. The approximated solution is shown to the DM. If the DM is not satisfied, (s)he provides another reference point. The interaction with the DM is conducted until the approximated solution obtained is acceptable for the DM. Then, this solution is considered as a reference point. Using this reference point and the original, computationally expensive functions, problem (Equation3) is formulated as a computationally expensive single objective optimisation problem. This problem is solved by a (surrogate-based) single objective optimisation method. The optimal solution is the most preferred (approximated) solution and is provided to the DM. One should note that, for any approximated solution generated in the intermediate iterations, it is possible to obtain the corresponding decision variable values by solving an expensive single objective optimisation problem. In this paper, we follow the steps given in Hartikainen et al. (Citation2012) and do not calculate approximated solutions in the decision space until the last iteration.

At the end of the solution process by an interactive method, the most preferred solution (which is usually a single solution) is obtained. In this respect, to the best of our knowledge, there is no established quality measure in the literature suitable for a reference point based interactive method. Therefore, we compare the closeness of the approximated solutions in the objective space obtained by SURROGATE-ASF, the Yun method and PAINT with their corresponding solutions obtained by solving problem (Equation3) with the original functions. To do so, problem (Equation3) (for the academic benchmark problems) is solved including the reference points given by the DM. In what follows, we refer to these solutions as original solutions. The closeness of the approximated solution f̃^j and the original solution f^j for j = 1, …, l (where l is the number of reference points given by the DM) in the objective space is assessed based on the Euclidean distance. To do this, first the normalised approximated solutions f̃_norm^j = (f̃^j − z^ideal)/(z^nadir − z^ideal) and the normalised original solutions f_norm^j = (f^j − z^ideal)/(z^nadir − z^ideal) are calculated componentwise for j = 1, …, l. Then, the Euclidean distance between the normalised approximated and original solutions in the objective space is calculated as follows:

(9) E_j = ‖f̃_norm^j − f_norm^j‖,

where ‖·‖ is the Euclidean norm.

We also define

(10a) μ_c = (1/l) Σ_{j=1}^{l} E_j,

(10b) σ_c = √( (1/(l−1)) Σ_{j=1}^{l} (E_j − μ_c)² )

as the mean and standard deviation of the E_j values, respectively, to be used for comparing the E_j values obtained by the different methods.
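The distance (9) and statistics (10a)-(10b) can be computed in a few lines of NumPy; the componentwise normalisation and the 1/(l−1) factor follow the definitions above:

```python
import numpy as np

def closeness_stats(F_approx, F_orig, z_ideal, z_nadir):
    """Distances (9) and summary statistics (10a)-(10b): rows of F_approx
    and F_orig are the approximated and original objective vectors for the
    l reference points; normalisation is componentwise."""
    F_approx, F_orig = np.asarray(F_approx, float), np.asarray(F_orig, float)
    scale = np.asarray(z_nadir, float) - np.asarray(z_ideal, float)
    Fn_approx = (F_approx - z_ideal) / scale
    Fn_orig = (F_orig - z_ideal) / scale
    E = np.linalg.norm(Fn_approx - Fn_orig, axis=1)   # E_j of (9)
    return E, E.mean(), E.std(ddof=1)                 # mu_c (10a), sigma_c (10b)
```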

4.1. Solving a shape optimisation problem

In this subsection, we demonstrate the performance of SURROGATE-ASF by solving the bi-objective airfoil shape optimisation problem

(11) minimise_{x ∈ S} ( Cd(x)/Cl(x), Cm²(x) ),

where Cd, Cl and Cm are the drag, lift and pitching moment coefficients, respectively, and

S = {x ∈ R¹² : 0.0085 ≤ x₁ ≤ 0.0126, 0.0020 ≤ x₂ ≤ 0.0040, 7.0000 ≤ x₃ ≤ 10.0000, 10.0000 ≤ x₄ ≤ 14.0000, −0.0060 ≤ x₅ ≤ −0.0030, 0.0025 ≤ x₆ ≤ 0.0050, 0.4100 ≤ x₇ ≤ 0.4600, 0.1100 ≤ x₈ ≤ 0.1300, −0.9000 ≤ x₉ ≤ −0.7000, 0.2000 ≤ x₁₀ ≤ 0.2600, −0.0230 ≤ x₁₁ ≤ −0.0150, 0.0500 ≤ x₁₂ ≤ 0.2000}

is the feasible decision set. The angle of attack, Reynolds number and Mach number were 4.0, 2.0×10⁶ and 0.1, respectively. The CFD solver adopted in this problem was XFOIL (Drela, Citation1989). Problem (Equation11), termed ASO-MOP2, was investigated with a non-interactive method in Arias-Montano, Coello Coello, and Mezura-Montes (Citation2012) with a budget of 2000 function evaluations; in our paper, we used the same budget to solve the problem. The description of the decision variables and the geometry of the corresponding airfoil are provided in Arias-Montano, Coello Coello, and Mezura-Montes (Citation2012) (see Figure 3 therein). In this problem, the DM was interested in exploring the objective space within the ranges 0.0043 ≤ f₁ ≤ 0.0064 and 0.0076 ≤ f₂ ≤ 0.034.
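For reproducibility, the box constraints of S can be written down directly; the bounds below simply transcribe the set above (the variable semantics follow the airfoil parameterisation cited), with a minimal feasibility check:

```python
import numpy as np

# Lower/upper bounds of the hyper-box S in problem (11)
LB = np.array([0.0085, 0.0020,  7.0,  10.0, -0.0060, 0.0025,
               0.4100, 0.1100, -0.9000, 0.2000, -0.0230, 0.0500])
UB = np.array([0.0126, 0.0040, 10.0,  14.0, -0.0030, 0.0050,
               0.4600, 0.1300, -0.7000, 0.2600, -0.0150, 0.2000])

def is_feasible(x):
    """True if x lies in the hyper-box S of problem (11)."""
    x = np.asarray(x, float)
    return x.shape == (12,) and bool(np.all(LB <= x) and np.all(x <= UB))
```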

As noted earlier, the surrogate-based single objective optimisation method used to solve problem (Equation3) as a computationally expensive problem has an impact on the performance of SURROGATE-ASF in terms of reducing the computational cost. Muller and Shoemaker (Citation2014) developed a MATLAB toolbox called MATSuMoTo (which can be downloaded from https://courses.cit.cornell.edu/jmueller/) to solve computationally expensive global optimisation problems. We applied MATSuMoTo to solve problem (Equation3).

In the initialisation phase of SURROGATE-ASF, according to equation (Equation5), hz = 4 (q = 3) predetermined reference points were considered for the bi-objective problem. For each predetermined reference point, 350 function evaluations were used to find the corresponding reference solutions by solving problem (Equation3). In Algorithm 1, the criteria used to check the accuracy of ASF̃_a corresponding to the a-th hyper-box (sub-region), a = 1, …, r, were R² = 0.95 and RMSE = 0.005, values commonly used in the literature. Since the accuracy of the initial surrogate functions was not satisfactory with these choices, we applied Algorithm 2 to update the surrogate functions with a maximum of 100 function evaluations within each hyper-box. Tables A.1 and A.2 in Appendix A summarise the parameter settings of SURROGATE-ASF and MATSuMoTo, respectively. In Table A.1, the number of reference sample points (i.e., hz) within each sub-region was chosen heuristically according to (Equation5). In the initialisation phase, we set 1400 function evaluations for MATSuMoTo to obtain the reference solutions XP and FP corresponding to the points in Z̄P. The number of function evaluations needed by SURROGATE-ASF to build all surrogate functions was 284. Therefore, the total number of function evaluations in the initialisation phase of SURROGATE-ASF using MATSuMoTo was 1684.

In the decision-making phase, reference points given by the DM were projected onto the convex hull H according to (Equation4). Then, the corresponding surrogate problem (Equation8) was solved including the projected reference points. While the computationally inexpensive surrogate problem (Equation8) can be solved by any appropriate method, we employed a single objective genetic algorithm with the parameter settings given in Goldberg (Citation1989). As shown in Table 1, in the first iteration, the DM provided the first reference point z̄1 within his preferred ranges. The corresponding surrogate problem was solved and the preferred solution was obtained. This solution was evaluated with the original functions and the corresponding objective vector f1 was shown to the DM. For f1, the DM was interested in improving the ratio of the drag to lift coefficient while keeping the pitching moment coefficient the same. Then, the next reference point z̄2 was given by the DM and the preferred solution f2 was presented to him. Comparing the ratio of the drag to lift coefficient in z̄2, f1 and f2, the DM realised that with this pitching moment coefficient, the improvement in the ratio of the drag to lift coefficient is very small. Thus, the DM sacrificed the pitching moment coefficient to improve the ratio of the drag to lift coefficient, and provided the reference point z̄3. The preferred solution f3 was shown to the DM. The change in the pitching moment coefficient from f2 to f3 was acceptable for the DM. Based on the pitching moment coefficient in f3, the DM provided z̄4 to improve the ratio of the drag to lift coefficient. The corresponding preferred solution f4 was presented to the DM.

Table 1. Reference points given by the DM and corresponding preferred solutions in the objective space.

Table 2. The Euclidean distance between approximated and original solutions in the objective space (biobjective problems, SUR. and PAINT with MAT.).

Table 3. The Euclidean distance between approximated and original solutions in the objective space (biobjective problems, SUR. and PAINT with DIR.).

Table 4. The Euclidean distance between approximated and original solutions in the objective space (3-objective problems, SUR. and PAINT with MAT.).

Figure 5. Geometric representation of the airfoils corresponding to each iteration.


Table 5. The Euclidean distance between approximated and original solutions in the objective space (3-objective problems, SUR. and PAINT with DIR.).

Table 6. The Euclidean distance between approximated and original solutions in the objective space (DTLZ2 problem, SUR. and PAINT with MAT.).

Table 7. The Euclidean distance between approximated and original solutions in the objective space (DTLZ2 problem, SUR. and PAINT with DIR.).

When comparing z̄4 and f4, the DM figured out that the ratio of the drag to lift coefficient was very close to his preferred value given in z̄4. Since the pitching moment coefficient in f4 was also acceptable, f4 satisfied him and the solution process was terminated. Table 1 summarises the reference points given and the corresponding preferred solutions in the objective space obtained by SURROGATE-ASF (SUR) with MATSuMoTo. Figure 5 depicts the geometric shape of the airfoils corresponding to the preferred solutions obtained in each iteration. In the decision-making phase, only four computationally expensive function evaluations were conducted, that is, one for each reference point. Because of this, the DM was able to find the most preferred solution fast. One should note that the total number of function evaluations in the initialisation and decision-making phases of SURROGATE-ASF using MATSuMoTo was 1688. We used this number for the comparison with the other methods.

For the reference points given by the DM, we also obtained the corresponding preferred solutions by the Yun method and PAINT. Since the Yun method is non-interactive, we ran it separately for each individual reference point, with 1688/4 = 422 function evaluations per reference point. Regarding PAINT, to build a surrogate problem we employed the same reference solutions in FP used in SURROGATE-ASF. During the decision-making phase, we only obtained the approximated preferred solutions in the objective space. At the end of the decision-making by PAINT, we solved problem (Equation3) for the last reference point given by the DM. The maximum number of function evaluations in this step was 288, since 1400 evaluations had already been used in obtaining the reference solutions FP.

Since the Pareto optimal set of this problem is not known, we cannot use the Euclidean distance (Equation9) and, instead, compare the solutions obtained in terms of dominance. As can be seen in Table , all solutions obtained by SURROGATE-ASF with MATSuMoTo dominate the corresponding solutions obtained by the Yun method and PAINT.

In terms of computational burden, SURROGATE-ASF with MATSuMoTo required 66 minutes to build the surrogate functions in the initialisation phase. In the decision-making phase, the DM waited 2 seconds for each reference point to see the preferred solution. In PAINT, the surrogate problem was built in 57 minutes. Then, for all reference points except the last one, the DM was able to see the preferred solutions after 2 seconds; for the preferred solution corresponding to the last iteration, however, the DM had to wait 9 minutes. In the Yun method, for each reference point, the DM had to wait 27 minutes to see the corresponding solution. To summarise, the last preferred solution was obtained with SURROGATE-ASF, PAINT and the Yun method after 8 seconds, 546 seconds and 108 minutes, respectively. This comparison highlights the advantage of SURROGATE-ASF, with which the DM was able to find the preferred solutions quickly. All experiments were run on a Dell desktop with an Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz and 8.00 GB RAM.

4.2. Solving benchmark problems

In this subsection, we evaluate the performance of SURROGATE-ASF through solving ZDT (Zitzler, Deb, & Thiele, Citation2000) and DTLZ (Deb, Thiele, Laumanns and Zitzler, Citation2002) benchmark problems. The Pareto frontiers of ZDT1, ZDT4 and DTLZ1 are convex, of ZDT2, ZDT6, DTLZ2, DTLZ3, DTLZ4, DTLZ5 and DTLZ6 are nonconvex and of ZDT3 and DTLZ7 are nonconvex and disconnected. Moreover, ZDT4, ZDT6, DTLZ2, DTLZ3, DTLZ4 and DTLZ6 have many locally nondominated solutions. These benchmark problems also contain multimodal objective functions. For all benchmark problems, we considered n=7 decision variables. The number of objective functions in ZDT and DTLZ problems was k=2 and k=3, respectively. Additionally, to evaluate the performance of SURROGATE-ASF on problems with more than 3 objective functions, we considered DTLZ2 (with multimodal objective functions) with 4 and 5 objective functions. Although objective functions involved in the benchmark problems are known, we treat them as black-box functions. In order to demonstrate the performance of SURROGATE-ASF in computationally expensive problems where solution times may be limited, we set the maximum number of function evaluations for each benchmark problem as 300.

For the problems considered here, along with MATSuMoTo discussed in Section 4.1, we applied the derivative-free DIRECT method (Jones, Perttunen, & Stuckman, Citation1993) (can be downloaded from http://www4.ncsu.edu/~ctk/Finkel_Direct/) as the second method to solve problem (Equation3) in the initialisation phase. Although DIRECT has not been developed for computationally expensive problems, we show how these different types of methods affect the performance of SURROGATE-ASF. Table A.3 in Appendix A summarises the parameter settings of DIRECT.

For each benchmark problem, we first applied MATSuMoTo and then DIRECT to solve problem (Equation3) taking into account the preferences of the DM. In the initialisation phase of SURROGATE-ASF, according to equation (Equation5), hz=4 (q=3) predetermined reference points were considered for biobjective problems. For three-, four- and five-objective problems we set hz=6 (q=2), hz=10 (q=2) and hz=15 (q=2), respectively.
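The hz values reported above match the point count of a simplex-lattice design with q divisions per objective, hz = C(q+k−1, k−1). Since equation (5) is not reproduced in this section, treat this closed form as an inference from the (k, q, hz) pairs given rather than a quotation of the paper's formula:

```python
from math import comb

def n_predetermined_refpoints(k, q):
    """Number of points of a q-division simplex lattice in k objectives,
    h_z = C(q + k - 1, k - 1); inferred from the (k, q, h_z) pairs
    reported in the text."""
    return comb(q + k - 1, k - 1)
```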

By increasing the number of objective functions, the number of predetermined reference points is also increased. On the other hand, the computational budget was fixed. Therefore, in this phase, for two-, three-, four- and five-objective problems, we considered 230, 230, 200 and 150 function evaluations, respectively, to find the reference solutions (non-dominated solutions) corresponding to the predetermined reference points by solving problem (Equation3). In two-, three-, four- and five-objective problems, we considered l=3,4,5 and 5 reference points given by the DM. This means that in the decision-making phase, only l function evaluations were required. The remaining number of function evaluations was used in the initialisation phase to build a surrogate function for each sub-region. In the decision-making phase, while the computationally inexpensive surrogate problem (Equation8) can be solved by any appropriate global optimisation method, we employed DIRECT.

Similar to Section 4.1, we compare the performance of SURROGATE-ASF (with both MATSuMoTo and DIRECT), taking into account the preferences of a DM, to that of the Yun method and PAINT. In this comparison, we find solutions corresponding to reference points given by the DM within the preferred ranges of the objective functions provided by the DM. These preferred ranges for all benchmark problems were set as [-50, 50]^k, demonstrating that the method is not too sensitive to the choice of the ranges. As before, the Yun method was run for each individual reference point given by the DM. The number of function evaluations used for each reference point in all benchmark problems was set to 300/l, where l is the number of reference points given by the DM in the decision-making phase. Regarding PAINT, to build a surrogate problem we employed the same non-dominated solutions used in SURROGATE-ASF. During the decision-making phase, we only obtained the approximated preferred solutions in the objective space. At the end of the decision-making by PAINT, we solved problem (Equation3) for the last reference point given by the DM. The maximum number of function evaluations in this step was the remaining budget out of the 300 function evaluations.

Table 2 summarises the Euclidean distances E_j, j = 1, …, l, between the approximated solutions obtained with SURROGATE-ASF (with MATSuMoTo), PAINT (with MATSuMoTo) and the Yun method and the original solutions for the biobjective problems. Similarly, these results for SURROGATE-ASF (with DIRECT), PAINT (with DIRECT) and the Yun method are summarised in Table 3. The rightmost column reports the mean and standard deviation defined in (Equation10a) and (Equation10b) of E_j, j = 1, …, l, for the methods mentioned above. Corresponding results for the three-objective problems with MATSuMoTo and DIRECT are summarised in Tables 4 and 5, and for the four- and five-objective problems in Tables 6 and 7, respectively. Bold figures show the smallest distance between the approximated solutions obtained and the original solutions for each reference point. In all tables, SUR, MAT and DIR stand for SURROGATE-ASF, MATSuMoTo and DIRECT, respectively.

As far as applying MATSuMoTo to solve problem (Equation3) to obtain the reference solutions corresponding to the predetermined reference points is concerned, SURROGATE-ASF outperformed PAINT and the Yun method on ZDT1, ZDT2, ZDT3 and ZDT6 (highlighted distances in Table 2), on DTLZ1, DTLZ4, DTLZ5 and DTLZ7 (highlighted distances in Table 4) and on DTLZ2 with 4 and 5 objective functions (highlighted distances in Table 6).

Regarding problems with disconnected Pareto frontiers, SURROGATE-ASF with MATSuMoTo outperformed PAINT and the Yun method on ZDT3 and DTLZ7. As can be seen in Tables 2, 3, 4 and 5, the Euclidean distances corresponding to the solutions obtained by PAINT are larger than those obtained by SURROGATE-ASF. This confirms that PAINT does not perform well on problems with disconnected Pareto frontiers, as mentioned in Hartikainen et al. (Citation2012).

As can be seen in Table 4, on DTLZ2 with 3 objective functions, SURROGATE-ASF performed better than the other methods except for the last reference point. Due to the limited function evaluation budget for each hyper-box (sub-problem), Algorithm 2 stopped updating the surrogate function corresponding to this reference point before reaching the desired accuracy R² = 0.95 and RMSE = 0.005. As a result, for this reference point, the approximated solution obtained by PAINT was slightly closer to the original one. On DTLZ6, the Yun method performed slightly better than SURROGATE-ASF with MATSuMoTo on some reference points; for the other reference points, though, the approximated solutions obtained by SURROGATE-ASF were closer to the original ones.

On ZDT4 (Table 3) and DTLZ3 (Table 5), the reference solutions obtained by DIRECT were closer to the Pareto frontier than those obtained by MATSuMoTo. As a result, SURROGATE-ASF with DIRECT performed better than PAINT and the Yun method (as shown in Tables 3 and 5); in particular, SURROGATE-ASF with DIRECT surpassed the Yun method significantly. On the other hand, the Yun method outperformed SURROGATE-ASF with MATSuMoTo on these problems.

As can be seen in Tables 4 and 5, both MATSuMoTo and DIRECT solved problem (Equation3) for DTLZ1 and DTLZ2. On DTLZ1, SURROGATE-ASF with DIRECT (Table 5) outperformed not only PAINT and the Yun method but also SURROGATE-ASF with MATSuMoTo. For DTLZ2, SURROGATE-ASF with MATSuMoTo (Table 4) performed better than SURROGATE-ASF with DIRECT, PAINT and the Yun method.

Tables 6 and 7 summarise the results for the problems with four and five objective functions. As the number of objective functions increases, the number of sub-problems also increases. With a limited number of function evaluations for each hyper-box (sub-problem), Algorithm 2 terminated the updating process based on the allocated computational budget rather than on reaching the desired accuracy R² = 0.95 and RMSE = 0.005. Nevertheless, SURROGATE-ASF with both MATSuMoTo and DIRECT surpassed PAINT and the Yun method. Moreover, SURROGATE-ASF with MATSuMoTo performed better than SURROGATE-ASF with DIRECT.

Overall, considering μ_c and σ_c given in Tables 2-7, SURROGATE-ASF (based on both MATSuMoTo and DIRECT) performed very well in the numerical tests when compared to the two other methods; the Yun method outperformed SURROGATE-ASF based on DIRECT in only one of the tables. In addition, SURROGATE-ASF based on MATSuMoTo performed better than SURROGATE-ASF based on DIRECT on the benchmark problems considered.

In terms of computational burden, we report only the maximum CPU time taken by each method over all benchmark problems. SURROGATE-ASF with MATSuMoTo and DIRECT required at most 22 and 18 seconds, respectively, to build the surrogate functions in the initialisation phase. In the decision-making phase, the DM found the preferred solution corresponding to each reference point in less than 1 second. In PAINT with MATSuMoTo and DIRECT, the surrogate problem was built in 183 and 169 seconds, respectively. Then, the DM was able to find the preferred solutions in less than 1 second for all reference points except the last one; in the last iteration, the DM waited 4 seconds to see the corresponding preferred solution. In the Yun method, the DM saw the preferred solution for each reference point after 255 seconds. In summary, SURROGATE-ASF with MATSuMoTo and DIRECT outperformed the other methods in terms of the CPU time needed to provide preferred solutions for the DM.

5. Conclusions and future research directions

In this paper, we developed an interactive surrogate-based method called SURROGATE-ASF to solve computationally expensive MOPs. This method consists of two phases: initialisation and decision-making phases. In the initialisation phase, the decision space is decomposed into a finite number of hyper-boxes. For each hyper-box, a single objective surrogate of the achievement scalarising function is built by using e.g., a cubic RBF with a linear tail. In the decision-making phase, iterations with a decision-maker are conducted in an interactive fashion. In this phase, at each iteration a reference point is specified by the decision-maker. Then, a surrogate single objective optimisation problem is formulated and solved by any appropriate single objective optimisation method. The optimal solution is an approximation of the preferred solution in the decision space. This solution is evaluated with the original functions and shown to the decision-maker. The interaction with the decision-maker is repeated until the most preferred solution for him/her is found.

In SURROGATE-ASF, a computationally inexpensive surrogate of the achievement scalarising function is built, instead of the typical approach of building a surrogate of each individual computationally expensive objective function and then forming the achievement scalarising function, as discussed in Yun et al. (Citation2009). Therefore, when interacting with the DM, only a single objective optimisation problem is solved. The surrogate assists the DM to find his/her preferred solutions in both the decision and objective spaces quickly. Numerical results confirmed that SURROGATE-ASF performed very well, in particular on problems with multimodal objective functions and non-convex and/or disconnected Pareto frontiers, and it fills a gap in the selection of methods available for computationally expensive problems. Solving a real-world computationally expensive airfoil optimisation problem demonstrated that SURROGATE-ASF reduced the computational burden significantly in comparison with the typical approach, in which the DM had to wait for a long time to find the most preferred solution. The speed can be further improved by applying parallel computing when building the single objective surrogate functions.

As the number of objective functions increases, so does the number of sub-problems. Therefore, with a fixed budget of function evaluations, the number of function evaluations for each sub-problem decreases, which may sacrifice the accuracy of the sub-problems. A possible idea is to merge hyper-boxes and sub-regions to reduce the number of sub-problems; this is a future research direction when SURROGATE-ASF is to be applied to problems with a large number of objective functions. SURROGATE-ASF in its current form, along with an appropriate single objective surrogate-based method, can be employed for problems with a high-dimensional decision space. However, the accuracy of the surrogate functions for each hyper-box may then not be satisfactory. A possible approach is to apply a dimensionality reduction method within each hyper-box to obtain a low-dimensional hyper-box, after which SURROGATE-ASF in its current form can be applied to each new hyper-box. This is another future research direction.

6. Abbreviations and Notations

MOP=

Multiobjective Optimization Problem

DM=

Decision-Maker

ASF=

Achievement Scalarising Function

RBF=

Radial Basis Function

SVR=

Support Vector Regression

LOOCV=

Leave-One-Out Cross-Validation

MPF=

Modified Pareto Fitness

k=

Number of (computationally expensive) objective functions

n=

Number of decision variables

fi=

ith computationally expensive objective function

S=

Feasible decision set

x=

Decision (variable) vector

xc=

cth decision variable

xcl,xcu=

Lower and upper bounds of xc

Rk=

Objective space

f(x)=

Objective vector

Z=f(S)=

Feasible objective set

zei=

ith extreme point

zideal=

Ideal (objective) vector

zutp=

Utopian (objective) vector

znadir=

Nadir (objective) vector

z¯=

Reference point given by the decision-maker

H=

Convex hull of all individual extreme points

z̄′=

Projected reference point on H corresponding to z̄

Z¯P=

Set of predetermined reference points generated on H

XP=

Set of reference solutions in the decision space

FP=

Set of reference solutions in the objective space

q=

Number of divisions along each objective coordinate axis

r=

Number of sub-regions (or hyper-boxes)

Sa=

ath hyper-box

z¯P,a,i=

ith predetermined reference point forming the ath sub-region on H

Z¯a=

Set of reference sample points corresponding to the ath sub-region

XP,a=

Set of non-dominated solutions in the decision space corresponding to the predetermined reference points in Z¯P

FP,a=

Set of non-dominated solutions in the objective space corresponding to the predetermined reference points in Z¯P

xP,a,i=

ith of the non-dominated solutions defining Sa

X¯a=

Set of sample points in the decision space corresponding to Sa

ASF~a=

Surrogate function corresponding to Sa

Ta=

Cartesian product of sets X¯a and Z¯a

XP¯a=

Set of archived points in the decision space generated during the updating process

FP¯a=

Set of archived points in the objective space generated during the updating process

x¯cand=

Minimiser of MPF

x¯closecand=

Closest point in X¯a to x¯cand

x¯closeclose=

Closest point in X¯a\{x¯closecand} to x¯closecand

dclosecand=

Euclidean distance between x¯cand and x¯closecand

dcloseclose=

Euclidean distance between x¯closecand and x¯closeclose

f¯closecand=

Objective vector corresponding to x¯closecand

f¯artifcand=

Artificial objective vector

Acknowledgements

This work was partly funded by the COMAS Doctoral Program at the University of Jyvaskyla, the Academy of Finland [project No. 287496], Early Career Scheme (ECS) sponsored by the Research Grants Council of Hong Kong [project No. 21201414 (Dr. Matthias Hwai Yong Tan)] and the KAUTE Foundation. Mohammad Tabatabaei thanks Dr. Alfredo Arias Montano for providing the airfoil optimisation problem code and Prof. Raino Mäkinen for fruitful discussions on this problem.

References

  • Arias-Montano, A., Coello Coello, C.A. & Mezura-Montes, E. (2012). Multi-objective airfoil shape optimization using a multiple-surrogate approach. Proceedings of the 2012 IEEE Congress on Evolutionary Computation (pp. 1–8). IEEE.
  • Audet, C. (2014). A survey on direct search methods for blackbox optimization and their applications. In P. M. Pardalos & T. M. Rassias (Eds.), Mathematics without boundaries (pp. 31–56). New York, NY: Springer.
  • Chih, M. (2013). A more accurate second-order polynomial metamodel using a pseudorandom number assignment strategy. Journal of the Operational Research Society, 64(2), 198–207.
  • Das, I., & Dennis, J. (1998). Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM Journal on Optimization, 8(3), 631–657.
  • Deb, K., Thiele, L., Laumanns, M., & Zitzler, E. (2002). Scalable multi-objective optimization test problems. In Proceedings of the 2002 IEEE Congress on Evolutionary Computation (Vol. 1, pp. 825–830). IEEE.
  • Dengiz, B., Alabas-Uslu, C., & Dengiz, O. (2009). Optimization of manufacturing systems using a neural network metamodel with a new training approach. Journal of the Operational Research Society, 60(9), 1191–1197.
  • Drela, M. (1989). XFOIL: An analysis and design system for low Reynolds number aerodynamics.
  • Giunta, A., Watson, L. T., & Koehler, J. (1998). A comparison of approximation modeling techniques: Polynomial versus interpolating models. In Proceedings of 7th AIAA/USAF/NASA/ISSMO symposium on Multidisciplinary Analysis and Optimization (Vol. 1, pp. 392–404). St. Louis, MO.
  • Goldberg, D. E. (1989). Genetic algorithms in search, optimization and machine learning. Boston: Addison-Wesley Longman Publishing Co., Inc.
  • Hartikainen, M., & Lovison, A. (2015). PAINT-SiCon: Constructing consistent parametric representations of Pareto sets in nonconvex multiobjective optimization. Journal of Global Optimization, 62(2), 243–261.
  • Hartikainen, M., Miettinen, K., & Wiecek, M. M. (2012). PAINT: Pareto front interpolation for nonlinear multiobjective optimization. Computational Optimization and Applications, 52(3), 845–867.
  • Helton, J. C., Johnson, J. D., Sallaberry, C. J., & Storlie, C. B. (2006). Survey of sampling based methods for uncertainty and sensitivity analysis. Reliability Engineering & System Safety, 91(10–11), 1175–1209.
  • Hurrion, D. R. (2000). A sequential method for the development of visual interactive meta-simulation models using neural networks. Journal of the Operational Research Society, 51(6), 712–719.
  • Jin, R., Chen, W., & Simpson, T. W. (2001). Comparative studies of metamodelling techniques under multiple modelling criteria. Structural and Multidisciplinary Optimization, 23(1), 1–13.
  • Jones, D. R. (2001). A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4), 345–383.
  • Jones, D. R., Perttunen, C. D., & Stuckman, B. E. (1993). Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications, 79(1), 157–181.
  • Kitayama, S., Srirat, J., Arakawa, M., & Yamazaki, K. (2013). Sequential approximate multi-objective optimization using radial basis function network. Structural and Multidisciplinary Optimization, 48(3), 501–515.
  • Kleijnen, C. J. P., & van Beers, M. W. C. (2013). Monotonicity-preserving bootstrapped kriging metamodels for expensive simulations. Journal of the Operational Research Society, 64(5), 708–717.
  • Larichev, O. I. (1992). Cognitive validity in design of decision-aiding techniques. Journal of Multi-Criteria Decision Analysis, 1(3), 127–138.
  • Mehdad, E., & Kleijnen, C. J. P. (2015). Classic Kriging versus Kriging with bootstrapping or conditional simulation: Classic Kriging’s robust confidence intervals and optimization. Journal of the Operational Research Society, 66(11), 1804–1814.
  • Messac, A., & Mullur, A. A. (2008). A computationally efficient metamodeling approach for expensive multiobjective optimization. Optimization and Engineering, 9(1), 37–67.
  • Miettinen, K. (1999). Nonlinear multiobjective optimization. Boston: Kluwer Academic Publishers.
  • Miettinen, K. (2008). Introduction to multiobjective optimization: Noninteractive approaches. In J. Branke, K. Deb, K. Miettinen, & R. Slowinski (Eds.), Multiobjective optimization: Interactive and evolutionary approaches (pp. 1–26). Berlin Heidelberg: Springer-Verlag.
  • Miettinen, K., Ruiz, F., & Wierzbicki, A. P. (2008). Introduction to multiobjective optimization: Interactive approaches. In J. Branke, K. Deb, K. Miettinen, & R. Slowinski (Eds.), Multiobjective optimization: Interactive and evolutionary approaches (pp. 27–57). Berlin Heidelberg: Springer-Verlag.
  • Muller, J., & Shoemaker, C. A. (2014). Influence of ensemble surrogate models and sampling strategy on the solution quality of algorithms for computationally expensive black-box global optimization problems. Journal of Global Optimization, 60(2), 123–144.
  • Muller, J., Shoemaker, C. A., & Piche, R. (2013). SO-MI: A surrogate model algorithm for computationally expensive nonlinear mixed-integer black-box global optimization problems. Computers & Operations Research, 40(5), 1383–1400.
  • Park, T., Yum, B., Hung, Y., Jeong, Y.-S., & Jeong, K. M. (2016). Robust kriging models in computer experiments. Journal of the Operational Research Society, 67(4), 644–653.
  • Regis, R. G., & Shoemaker, C. A. (2012). Combining radial basis function surrogates and dynamic coordinate search in high-dimensional expensive black-box optimization. Engineering Optimization, 45(5), 529–555.
  • Reis dos Santos, I. M., & Reis dos Santos, M. P. (2011). Construction and validation of distribution-based regression simulation metamodels. Journal of the Operational Research Society, 62(7), 1376–1384.
  • Ruiz, A. B., Sindhya, K., Miettinen, K., Ruiz, F., & Luque, M. (2015). E-NAUTILUS: A decision support system for complex multiobjective optimization problems based on the nautilus method. European Journal of Operational Research, 246(1), 218–231.
  • Schaumann, E., Balling, R., & Day, K. (1998). Genetic algorithms with multiple objectives. In Proceedings of 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization (Vol. 3, pp. 2114–2123). St. Louis, MO.
  • Tabatabaei, M., Hakanen, J., Hartikainen, M., Miettinen, K., & Sindhya, K. (2015). A survey on handling computationally expensive multiobjective optimization problems using surrogates: non-nature inspired methods. Structural and Multidisciplinary Optimization, 52(1), 1–25.
  • Tan, M. (2015). Sequential Bayesian polynomial chaos model selection for estimation of sensitivity indices. SIAM/ASA Journal on Uncertainty Quantification, 3, 146–168.
  • Van Beers, M. W. C., & Kleijnen, C. J. P. (2003). Kriging for interpolation in random simulation. Journal of the Operational Research Society, 54(3), 255–262.
  • Wierzbicki, A. P. (1982). A mathematical basis for satisficing decision making. Mathematical Modelling, 3(5), 391–405.
  • Wierzbicki, A. P. (1986). On the completeness and constructiveness of parametric characterizations to vector optimization problems. OR Spectrum, 8(2), 73–87.
  • Yang, T., & Tseng, L. (2002). Solving a multi-objective simulation model using a hybrid response surface method and lexicographical goal programming approach-a case study on integrated circuit ink-marking machines. Journal of the Operational Research Society, 53(2), 211–221.
  • Yun, Y., Yoon, M., & Nakayama, H. (2009). Multi-objective optimization based on metamodeling by using support vector regression. Optimization and Engineering, 10(2), 167–181.
  • Zitzler, E., Deb, K., & Thiele, L. (2000). Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation, 8(2), 173–195.

Appendix 1

Table A1. Parameter settings for SURROGATE-ASF.

Table A2. Parameter settings for MATSuMoTo.

Table A3. Parameter settings for DIRECT.