Research Article

Microservice combination optimisation based on improved gray wolf algorithm

Article: 2175791 | Received 21 Oct 2022, Accepted 28 Jan 2023, Published online: 22 Feb 2023

Abstract

Microservice architecture is a new paradigm for application development. Optimising the performance of a microservice architecture from a non-functional perspective is a typical Nondeterministic Polynomial (NP)-hard problem. To quantify the non-functional requirements of microservice systems while reducing the latency of computing the service combination with the maximum QoS objective function value, this paper proposes a microservice combination approach based on a QoS model, together with a CGWO algorithm that performs the optimisation computation over this model. The experimental results show that the error rate of the method on the non-functional combination optimisation problem is only 0.528%, that the computational efficiency of the algorithm increases by 97.29% as the complexity of the problem search space grows, and that CGWO improves the accuracy of optimisation by 65.97% and 81.25%, respectively, compared with its prototype (GWO), while exhibiting stable optimisation performance. This demonstrates that the proposed approach has a clear advantage in automatically searching for the best QoS in the microservice combination problem.

1. Introduction

Microservices are small services that together constitute a single application (Cerny et al., Citation2018). To accomplish more complex tasks, multiple interoperable microservices need to be combined, making such combinations a hot research topic. Improving efficiency is a necessity, given that a set of microservices executes within a single task while the environment and requirements change dynamically (Ai et al., Citation2021; Barkat et al., Citation2021). In cloud-service-based industrial manufacturing, manually scheduling microservices to generate combinations is gradually becoming impossible due to the rapid growth in the number of microservices. Many existing studies contribute methods for automated scheduling and combination of microservices based on functional attributes (Mendonça et al., Citation2019; Potdar et al., Citation2020), but they leave the quality of the non-functional attributes in disarray (Brady et al., Citation2020). The approach provided in this paper compensates for this gap and enables microservice systems to complete user request tasks with high performance (Brasser et al., Citation2022). To reduce the performance overhead, the Quality of Service (QoS) model of microservices needs to be optimised.

Intelligent optimisation algorithms are good at performing a large number of efficient computations in a short time (Boussaïd et al., Citation2013; Zhang, You, et al., Citation2019). Research on intelligent optimisation algorithms is relatively mature and extensive, with applications in various fields (Liang et al., Citation2018; Chen & Huang, Citation2021; Zhang, Cui, et al., Citation2022; Hu et al., Citation2021; Naseri & Jafari Navimipour, Citation2019), especially in combination with neural networks and reinforcement learning (Li et al., Citation2021; Wang et al., Citation2020). They are widely used in security and computer vision, such as adversarial attacks (Huang, Zhang, et al., Citation2020; Mo et al., Citation2022; Mo et al., Citation2020) and attack detection (Kuang et al., Citation2019; Li et al., Citation2022; Zhang, Xue, et al., Citation2021), and in both research areas the results are either exciting improvements over previous work or feasible solutions to open problems (Zhang, Xue, et al., Citation2021; Zhang, Zhu, et al., Citation2022; Ren et al., Citation2021).

In recent years, breakthroughs in cloud computing technology have brought new application scenarios for intelligent algorithms, such as the microservice combination problem studied in this paper. The QoS-based microservice combination optimisation problem requires computing the QoS objective function values for all possible service combinations and selecting the combination with the largest objective function value; this task is therefore clearly an NP-hard problem. NP-hard problems are common in the field of scheduling computation, and metaheuristic algorithms have so far proven to be the most effective methods for such problems; such algorithms include genetic algorithms (Katoch et al., Citation2021), particle swarm algorithms (Sengupta et al., Citation2018), grey wolf algorithms, and so on. However, existing algorithms have certain limitations: they are prone to falling into local optima, their execution efficiency is low, and they are not suitable for large-scale service combinations (Han et al., Citation2022; Wang et al., Citation2019; Wu & Li, Citation2021; Yu et al., Citation2021).

This paper makes the following contributions. Firstly, a QoS model is introduced to make the non-functional requirements of the microservice combination problem explicit, transforming the multi-objective optimisation problem over multiple QoS metrics into a single-objective optimisation of the QoS objective function value. Secondly, an intelligent optimisation algorithm for microservice combination, named CGWO, is proposed by combining three common techniques from the intelligence domain: the grey wolf optimiser (GWO), the elite reverse learning strategy, and the vertical and horizontal crossover strategy. CGWO is used to perform the optimisation search for the microservice combination problem based on the QoS model, and experiments demonstrate the effectiveness and stability of the CGWO algorithm in solving the non-functional microservice combination optimisation problem. The paper is organised as follows: Section 2 presents the algorithmic foundations, with particular attention to the grey wolf algorithm; Section 3 describes QoS combination optimisation based on the improved grey wolf algorithm; Section 4 presents the experimental results on case studies; and Section 5 concludes this work.

2. Algorithmic foundations

2.1. Microservice portfolio optimisation model

The overall QoS for each microservice combination is calculated using the QoS objective function for microservice combinations, which reduces the multi-objective optimisation problem to a single-objective one (Zhang, You, et al., Citation2019; Yan et al., Citation2020; Yan et al., Citation2021). The microservice combination with the highest objective function value is then chosen as the optimisation result. The microservice combination optimisation objective function is defined in Eq. (1) (Naseri & Jafari Navimipour, Citation2019):

(1) QoS_sol = w_t × Q_t + w_c × Q_c + w_re × Q_re + w_av × Q_av,

where QoS_sol denotes the overall QoS value of a microservice combination; Q_t is its overall response time; Q_c is its execution cost; and Q_re and Q_av are its reliability and availability metric values, respectively. Note that Q_t, Q_c, Q_re, and Q_av are aggregations of the normalised q_t, q_c, q_re, and q_av values of the individual microservices in the combination (Sefati & Navimipour, Citation2021), and depend on the actual workflow of the microservices. The values w_t, w_c, w_re, and w_av are the weight coefficients of response time, cost, reliability, and availability, respectively. All four weights lie in the interval [0, 1] and sum to 1.
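As a concrete illustration, Eq. (1) can be sketched in a few lines of Python (the paper's implementation is in Java; the equal weights and metric values below are made-up examples, not taken from the paper):

```python
def qos_objective(q_t, q_c, q_re, q_av,
                  w_t=0.25, w_c=0.25, w_re=0.25, w_av=0.25):
    """Eq. (1): weighted sum of aggregated, normalised QoS metrics.

    All metrics are assumed normalised so that larger is better (e.g.
    response time and cost inverted); the weights lie in [0, 1] and
    must sum to 1.
    """
    assert abs((w_t + w_c + w_re + w_av) - 1.0) < 1e-9
    return w_t * q_t + w_c * q_c + w_re * q_re + w_av * q_av

# Example with equal weights and made-up normalised metric values:
score = qos_objective(q_t=0.8, q_c=0.6, q_re=0.9, q_av=0.7)
```

The combination whose instances maximise this weighted sum is the one returned by the optimisation.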

2.2. Grey wolf algorithm

Grey Wolf Optimiser (GWO) is an intelligent optimisation algorithm proposed by Mirjalili et al. in 2014. It divides the social hierarchy of grey wolves into four classes (α, β, δ, and ω, from high to low) and assumes that the α, β, and δ wolves have better knowledge of the prey's location (Mirjalili et al., Citation2014). During each iteration, these three wolves are used to estimate the location of the prey, and the remaining ω wolves update their distances to the α, β, and δ wolves to determine their own positions. At the end of each iteration, the α, β, and δ wolves of the current pack and their positions are updated. Finally, the position of the α wolf is taken as the location of the prey (Xiaofeng & Xiuying, Citation2019).

In Eqs. (2) to (5), D_α, D_β, and D_δ denote the distances between an individual grey wolf and the prey locations marked by the α, β, and δ wolf positions, respectively:

(2) D_α = |C_1 · X_α(t) − X(t)|,
(3) D_β = |C_2 · X_β(t) − X(t)|,
(4) D_δ = |C_3 · X_δ(t) − X(t)|,
(5) C_j = 2 r_j, j = 1, 2, 3,

where t denotes the number of iterations performed so far, X_α(t), X_β(t), and X_δ(t) are the current positions of the α, β, and δ wolves respectively, X(t) is the position of the individual grey wolf, C_1, C_2, and C_3 are random vectors, and r_j is a random vector in the range [0, 1].

Eqs. (6) to (8) define the step length and direction of the individual grey wolf around the α, β, and δ wolves towards the prey, and Eq. (9) gives the updated position of the individual grey wolf:

(6) X_1(t+1) = X_α(t) − A_1 · D_α,
(7) X_2(t+1) = X_β(t) − A_2 · D_β,
(8) X_3(t+1) = X_δ(t) − A_3 · D_δ,
(9) X(t+1) = (X_1 + X_2 + X_3) / 3,

where A_i is a coefficient vector calculated as shown in Eq. (10):

(10) A_i = 2a · r_i − a, i = 1, 2, 3,

where a decreases linearly from 2 to 0 over the iterations and r_i is a random vector in the range [0, 1]. Thus, A is a random value in the interval [−a, a]. When |A| ≥ 1, the individual grey wolf moves away from the prey and the algorithm performs a global search; when |A| < 1, the individual approaches the prey and the algorithm performs a local search.
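The update in Eqs. (2) to (10) for a single wolf can be sketched as follows (a minimal per-dimension Python sketch; the function name and list-based vectors are illustrative, and boundary handling is omitted):

```python
import random

def gwo_step(x, x_alpha, x_beta, x_delta, a):
    """One GWO position update for the individual at position x.

    x, x_alpha, x_beta, x_delta are position vectors (lists of floats);
    a decreases linearly from 2 to 0 over the iterations.
    """
    new_x = []
    for d in range(len(x)):
        estimates = []
        for leader in (x_alpha, x_beta, x_delta):
            A = 2 * a * random.random() - a       # Eq. (10): A in [-a, a]
            C = 2 * random.random()               # Eq. (5)
            D = abs(C * leader[d] - x[d])         # Eqs. (2)-(4)
            estimates.append(leader[d] - A * D)   # Eqs. (6)-(8)
        new_x.append(sum(estimates) / 3.0)        # Eq. (9)
    return new_x
```

When a = 0 every A vanishes, so the wolf jumps to the mean of the three leaders; this is the late-iteration, purely exploitative regime.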

The GWO algorithm has only two adjustable parameters, A and C, and is characterised by its simple structure and easy implementation. At the same time, its two convergence factors, a and A, can be adjusted adaptively, and its information feedback mechanism can balance local and global search; thus, using the grey wolf algorithm as the intelligent optimisation algorithm for microservice combination would yield good solution accuracy and convergence speed. However, the grey wolf algorithm also suffers from the drawbacks listed below (Wang et al., Citation2020; Yang et al., Citation2020):

  1. Poor population diversity: since the initial population of GWO is generated randomly, good population diversity cannot be guaranteed;

  2. Easy to fall into a local optimum: each iteration of the grey wolf algorithm only shares the information of the α, β, and δ wolves with the other grey wolf individuals (the ω wolves). The ω wolves continuously approach the α, β, and δ wolves, but the search dominance of these three wolves does not guarantee finding the globally optimal and sub-optimal individuals; thus, the GWO algorithm easily falls into a local optimum during the solution process.

3. QoS combination optimisation based on improved grey wolf algorithm

In order to improve the response speed of the algorithm and to avoid falling into local optimum, this paper combines the ideas of elite backward learning and vertical and horizontal crossover strategies with the GWO, and proposes an improved grey wolf algorithm, namely the Crossover Grey Wolf Optimiser (CGWO) algorithm to better exploit the advantages of the grey wolf algorithm and solve its limitations.

3.1. Elite reverse learning strategy

The elite reverse learning strategy is used to enhance the diversity of the initial population of grey wolves and to ensure that it is of good quality. The basic idea of reverse learning is that, if a feasible solution to a problem is obtained, the opposite of the feasible solution is first computed, the feasible solution and its opposite are mixed, and the better solution from the mixed solution interval is selected as the next generation of individuals using the evaluation method (Meng et al., Citation2014).

Suppose X_i = [x_{i,1}, x_{i,2}, … , x_{i,dim}] is the position vector of a dim-dimensional grey wolf individual, x_{i,d} ∈ [a, b], where a and b are the search space boundaries in the d-th (1 ≤ d ≤ dim) dimension. Each element of the reverse vector X_j = [x_{j,1}, x_{j,2}, … , x_{j,dim}] of X_i is computed using Eq. (11):

(11) x_{j,d} = (a + b) − x_{i,d}.

If an element of X_j is out of bounds, it is replaced by a random value within the search space boundary [a, b] of that dimension.

The steps to initialise the grey wolf population using the elite reverse learning strategy are as follows:

Step 1: Randomly initialise the position vectors X_i (1 ≤ i ≤ N) of N grey wolf individuals to form the population pop1;

Step 2: For the position vectors of N individual grey wolves, find the inverse vector of each individual grey wolf to form population pop2;

Step 3: Combine pop1 and pop2, calculate the objective function value of each grey wolf individual in the combined population, and select the top N grey wolf individuals in the objective function value to form the initial population of the algorithm.
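Steps 1 to 3 above can be sketched as follows (an illustrative Python sketch; the fitness function and search bounds are placeholders, not values from the paper):

```python
import random

def elite_reverse_init(n, dim, a, b, fitness):
    """Initialise a population of n individuals via elite reverse learning.

    Step 1: random population pop1 in [a, b]^dim.
    Step 2: reverse population pop2 via Eq. (11): x' = (a + b) - x.
    Step 3: keep the n individuals with the largest objective values.
    """
    pop1 = [[random.uniform(a, b) for _ in range(dim)] for _ in range(n)]
    pop2 = [[(a + b) - x for x in ind] for ind in pop1]
    merged = pop1 + pop2
    merged.sort(key=fitness, reverse=True)  # maximisation problem
    return merged[:n]
```

Because the reverse of a point in [a, b] stays in [a, b], no out-of-bounds repair is needed in this one-interval sketch.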

3.2. Vertical and horizontal crossover strategy

The vertical and horizontal crossover strategy (Meng et al., Citation2014) corrects the individual and global optimal solutions of the population, which improves population diversity and prevents the algorithm from falling into a local optimum. This strategy consists of two steps: horizontal crossover and vertical crossover.

3.2.1. Horizontal crossover

Horizontal crossover divides the population into two subpopulations of equal size, as shown in Figure 1. Individuals from the two subpopulations are randomly and non-repeatedly paired, and each pair is crossed in the same dimension. Suppose two parent individuals X(i) and X(j) come from the two subpopulations into which a dim-dimensional population is divided. The i-th parent X(i) from the first subpopulation and the j-th parent X(j) from the second subpopulation perform the horizontal crossover operation in the d-th (1 ≤ d ≤ dim) dimension, and their offspring X_hc(i, d) and X_hc(j, d) in the d-th dimension are generated via Eqs. (12) and (13), respectively:

(12) X_hc(i, d) = r_1 × X(i, d) + (1 − r_1) × X(j, d) + c_1 × (X(i, d) − X(j, d)),
(13) X_hc(j, d) = r_2 × X(j, d) + (1 − r_2) × X(i, d) + c_2 × (X(j, d) − X(i, d)),

where r_1, r_2, c_1, and c_2 are uniformly distributed random values in [0, 1], and X_hc(i, d) and X_hc(j, d) are the child solutions of the parents X(i) and X(j) in the d-th dimension. The individuals generated by horizontal crossover compete with their parents: the objective function values are compared and the individual with the higher value is retained as the offspring.

Figure 1. Horizontal crossover operation.


3.2.2. Vertical crossover

Vertical crossover is an arithmetic crossover performed on all individuals in the population between two different dimensions. Two dimensions, d_1 and d_2, are randomly selected for the vertical crossover operation, as shown in Figure 2. The d_1-th and d_2-th dimension elements of all individuals are extracted, and the offspring X_vc(i, d_1) in the d_1-th dimension of the i-th individual is generated by Eq. (14):

(14) X_vc(i, d_1) = r × X(i, d_1) + (1 − r) × X(i, d_2),

where r is a uniformly distributed random value in [0, 1], and X_vc(i, d_1) is the offspring of X(i, d_1) and X(i, d_2). The offspring obtained from vertical crossover also compete with their parents, keeping the individual with the higher objective function value.
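Both crossover operations can be sketched as follows (illustrative Python; the greedy parent-versus-child competition described above is left to the caller):

```python
import random

def horizontal_crossover(xi, xj):
    """Eqs. (12)-(13): arithmetic crossover of two parents, per dimension."""
    child_i, child_j = [], []
    for d in range(len(xi)):
        r1, r2, c1, c2 = (random.random() for _ in range(4))
        child_i.append(r1 * xi[d] + (1 - r1) * xj[d] + c1 * (xi[d] - xj[d]))
        child_j.append(r2 * xj[d] + (1 - r2) * xi[d] + c2 * (xj[d] - xi[d]))
    return child_i, child_j

def vertical_crossover(x, d1, d2):
    """Eq. (14): arithmetic crossover between dimensions d1 and d2 of x."""
    r = random.random()
    child = list(x)
    child[d1] = r * x[d1] + (1 - r) * x[d2]
    return child
```

Note that when both parents coincide, horizontal crossover returns children equal to the parents, since the difference term in Eqs. (12) and (13) vanishes.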

Figure 2. Vertical crossover operation.


3.3. CGWO-based microservice combination optimisation

3.3.1. Improved grey wolf algorithm: Crossover Grey Wolf Optimiser (CGWO)

The execution process of the CGWO algorithm is as follows:

Step 1: Initialise the position vectors X_i (i = 1, 2, … , n) of n grey wolf individuals using the elite reverse learning strategy. Also initialise the parameters A and C, the maximum number of iterations maxIter, and the probability p1 with which the horizontal and vertical crossover operations are performed on the population.

Step 2: Calculate the objective function value of each grey wolf individual, and select the three individuals with the largest objective function values as the α, β, and δ wolves.

Step 3: Update the parameters C and A and the position vector of each grey wolf individual in the population using Eqs. (5), (9), and (10).

Step 4: Randomly generate a probability p ∈ [0, 1]. When p > p1, neither horizontal nor vertical crossover is performed, and the process moves to Step 5. When p ≤ p1, horizontal crossover is performed on the individuals of the population; the objective function values of children and parents are compared, and the individuals with the larger values are inserted into the population. Then vertical crossover is performed on the individuals of the population and, similarly, the individuals with the higher objective function values among offspring and parents are added to the population.

Step 5: Use Eq. (1) to calculate the objective function values of all grey wolf individuals and update the objective function values and position vectors X_α(t), X_β(t), and X_δ(t) of the α, β, and δ wolves.

Step 6: When the maximum number of iterations is reached, the algorithm ends and outputs the position vector X_α(t) and the objective function value of the α wolf. Otherwise, return to Step 3.
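Steps 1 to 6 can be combined into a compact sketch (illustrative Python, not the authors' Java implementation; the boundary clamping and the adjacent-pair pairing scheme in the horizontal crossover are assumptions):

```python
import random

def cgwo(fitness, dim, lo, hi, n=20, max_iter=50, p1=0.8):
    """Sketch of the CGWO loop (Steps 1-6) for a maximisation problem."""
    # Step 1: elite reverse learning initialisation.
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    pop += [[(lo + hi) - x for x in ind] for ind in pop]
    pop = sorted(pop, key=fitness, reverse=True)[:n]
    for t in range(max_iter):
        # Step 2: the three fittest wolves lead the pack.
        pop.sort(key=fitness, reverse=True)
        alpha, beta, delta = pop[0], pop[1], pop[2]
        a = 2.0 * (1 - t / max_iter)  # a decreases linearly from 2 to 0
        # Step 3: GWO position update (Eqs. (2)-(10)).
        for i, x in enumerate(pop):
            new_x = []
            for d in range(dim):
                est = []
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a
                    C = 2 * random.random()
                    est.append(leader[d] - A * abs(C * leader[d] - x[d]))
                new_x.append(min(hi, max(lo, sum(est) / 3.0)))
            pop[i] = new_x
        # Step 4: crossover with probability p1, greedy parent-vs-child.
        if random.random() <= p1:
            for i in range(0, n - 1, 2):          # horizontal crossover
                xi, xj = pop[i], pop[i + 1]
                r, c = random.random(), random.random()
                child = [min(hi, max(lo,
                         r * xi[d] + (1 - r) * xj[d] + c * (xi[d] - xj[d])))
                         for d in range(dim)]
                if fitness(child) > fitness(xi):
                    pop[i] = child
            for i in range(n):                    # vertical crossover
                if dim < 2:
                    break
                d1, d2 = random.sample(range(dim), 2)
                r = random.random()
                child = list(pop[i])
                child[d1] = r * pop[i][d1] + (1 - r) * pop[i][d2]
                if fitness(child) > fitness(pop[i]):
                    pop[i] = child
    # Steps 5-6: return the alpha wolf of the final population.
    return max(pop, key=fitness)
```

Because the crossover offspring only replace parents when their objective value is higher, the crossover phase can never degrade the population.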

3.3.2. CGWO-based microservice combination optimisation

Each microservice in the combination corresponds to a set of microservice instances with the same function but different QoS metric values. Each candidate microservice instance is represented as a 2-tuple, i.e. ms_{i,j} = <id, qos>, where id is the number of the microservice instance within the set of candidate instances (encoded as a decimal integer) and qos = <q_t, q_c, q_av, q_re> denotes the normalised QoS metric values of the instance.
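The 2-tuple representation can be sketched as a small record type (a Python sketch; the field names and example values are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MicroserviceInstance:
    """A candidate instance ms_{i,j} = <id, qos> with normalised metrics."""
    id: int        # decimal-encoded number within the candidate set
    q_t: float     # normalised response time
    q_c: float     # normalised cost
    q_av: float    # normalised availability
    q_re: float    # normalised reliability

# A candidate set for one abstract microservice: same function, different QoS.
candidates = [
    MicroserviceInstance(id=0, q_t=0.8, q_c=0.6, q_av=0.9, q_re=0.7),
    MicroserviceInstance(id=1, q_t=0.5, q_c=0.9, q_av=0.8, q_re=0.6),
]
```

A grey wolf position vector then only needs to store the `id` of the chosen instance per abstract microservice, as Eq. (16) below states.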

Optimisation of the microservice combination is performed using the CGWO algorithm. The position vector X_i of a grey wolf individual in CGWO represents a microservice instance combination solution, as shown in Eq. (15). Each element x_{i,j} of X_i corresponds to the number ms_{i,j}.id of a microservice instance ms_{i,j}, as shown in Eq. (16); x_{i,j} indicates that, for the j-th microservice of the combination, the instance numbered x_{i,j} in its instance set is selected to join the service.

(15) X_i = [x_{i,1}, x_{i,2}, … , x_{i,j}, … , x_{i,m}], i ∈ {1, 2, … , n}, j ∈ {1, 2, … , m},
(16) x_{i,j} = ms_{i,j}.id.

Algorithm 1 gives the procedure for solving the microservice combination optimisation problem using CGWO. The number of abstract microservices in the combination is dim, meaning that the dimension of the grey wolf position vector X_i is dim; the maximum number of iterations is maxIter; the crossover probability is p1, meaning that the horizontal and vertical crossover operations are performed on the individuals of the current population with probability p1 during each iteration; and the population size is popNum, with the vectors X_i (1 ≤ i ≤ popNum) initialised before the optimisation process starts. The position vector X_i of each grey wolf individual is a feasible solution in the solution space; the feasible solution with the largest objective function value (considered the most suitable solution) is denoted X_α, and the next best solutions are X_β and X_δ.

where:
  • initialise(X) denotes the initialisation of n solutions using the elite reverse learning strategy;

  • t is the current number of iterations;

  • calculateFitness(X) denotes the calculation of the objective function value of each grey wolf individual in the population, and the three individuals with the highest current objective function values are selected and their position vectors are assigned to Xα, Xβ, and Xδ.

During each iteration, CGWO continuously selects the three individuals with the highest objective function values in the population to guide the other individuals in updating their positions, while performing the horizontal and vertical crossover operations on the position vectors of the population with probability p1. The optimisation stops when the number of iterations reaches the maximum value maxIter; the position vector X_α with the highest objective function value in the population is then returned as the result of the microservice combination optimisation. Each element of X_α is thus the code of the microservice instance selected to form the combination.

4. Experimental analysis

The experiments evaluate the performance of the microservice combination optimisation approach using CGWO with two evaluation objectives:

  1. The first objective is to assess the performance of the microservice instance portfolio obtained using the CGWO optimisation algorithm in terms of optimality and execution time;

  2. The second objective is to assess the degree of improvement of the CGWO algorithm over the grey wolf algorithm in the solution selection process and in the selection results.

The experiments were executed on an Intel(R) Core(TM) i5-6500 CPU @ 3.20 GHz with 16 GB RAM, running 64-bit Windows 10, and the code was developed in the IntelliJ IDEA environment using the Java language.

4.1. Performance of the CGWO algorithm in finding the optimal solution

For the first objective – the performance of the CGWO algorithm in finding the optimal solution – two experiments are conducted. In the first experiment, the solution found using the CGWO algorithm is compared with the optimal solution found using the exhaustive method in terms of objective function value and execution time to verify that CGWO greatly improves computational efficiency with less loss of accuracy. As the convergence of CGWO to the optimal solution is influenced by a tunable parameter p1 specific to the algorithm, the steps are as follows:

  • Step 1: an exhaustive search is performed on the set of instances corresponding to each microservice in the microservice portfolio to obtain the objective function value and the execution time for the actual best combination of that microservice portfolio;

  • Step 2: iteratively fine-tune the adjustable parameter p1 ∈ [0, 1] in CGWO and compute the optimisation result of the CGWO algorithm after 400 iterations for different p1 values; the optimisation result includes the maximum objective function value and the execution time;

  • Step 3: compare the maximum objective function value and execution time returned using CGWO and using the exhaustive search and calculate the deviation of the objective function values between both methods.

The second experiment compares the difference between the average fitness value and the best fitness value of the CGWO algorithm for different numbers of iterations to verify the stability of the CGWO algorithm. The specific implementation steps are:
  • Step 1: record the best configuration parameter p1 obtained from evaluation experiment 1;

  • Step 2: optimise the microservice combination using the CGWO algorithm under the same parameter p1 configuration and record the maximum fitness value obtained for different iterations;

  • Step 3: repeat Step 2 50 times and calculate the maximum fitness value and its average for the different iterations over the 50 experiments;

  • Step 4: compare the average and the maximum fitness value under different iterations with the CGWO algorithm.

4.1.1. Experiment 1: verification of algorithm accuracy and computational efficiency

In order to illustrate the accuracy and computational efficiency of the CGWO algorithm at different problem complexities, this paper sets up two search spaces of different sizes and compares the experimental results with the exhaustive method, which has the highest accuracy. Table 1 shows the experimental results of the exhaustive enumeration method for microservice combinations A and B, which represent two different configurations in terms of the number of microservices in the combination and the number of instances corresponding to each microservice. For example, microservice combination A consists of six microservices with 20, 15, 10, 10, 5, and 5 instances, respectively. The search space complexity of a microservice combination is given by the number of all executable combination solutions. Similarly, microservice combination B consists of eight microservices with 15, 15, 5, 5, 5, 5, 5, and 5 instances.

Table 1. Experimental results of the exhaustive enumeration method for microservice combination A and microservice combination B.

The microservice instance QoS metrics include service response time q_t, service cost q_c, service availability q_av, and service reliability q_re, where q_t takes values in [0, 300], q_c in [0, 30], q_av in [0.7, 1], and q_re in [0.5, 1].
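A sketch of drawing raw metric values from these ranges and normalising them to [0, 1] follows (min-max normalisation with time and cost inverted is an assumption on our part; the paper does not spell out its normalisation scheme):

```python
import random

RANGES = {"q_t": (0.0, 300.0), "q_c": (0.0, 30.0),
          "q_av": (0.7, 1.0), "q_re": (0.5, 1.0)}

def random_raw_qos():
    """Draw raw QoS values uniformly from the experimental ranges."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}

def normalise(raw):
    """Min-max normalise each metric to [0, 1], inverting time and cost
    so that larger normalised values are always better."""
    out = {}
    for k, (lo, hi) in RANGES.items():
        v = (raw[k] - lo) / (hi - lo)
        out[k] = 1.0 - v if k in ("q_t", "q_c") else v
    return out
```

After normalisation all four metrics point in the same direction, so they can be aggregated directly by the weighted sum of Eq. (1).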

Tables 2 and 3 show the experimental results obtained when varying the tunable parameter p1 for microservice combinations A and B, respectively; p1 was set to 0.4, 0.6, 0.8, and 1. The comparison results include the average best fitness, the average execution time, and the average deviation. Each row represents the average of the optimisation results obtained when the CGWO algorithm was run 50 times with the same configuration of the adjustable parameter p1, with an initial population size of 200 and 400 iterations per run. The rows highlighted in bold are the best trade-off between fitness and time, indicating the best configuration of the adjustable parameter p1.

Table 2. Experimental results of CGWO optimisation for microservice portfolio A.

Table 3. Experimental results of CGWO optimisation for microservice portfolio B.

By comparing the experimental results of the CGWO algorithm and the exhaustive method, it can be concluded that:

  • The lowest deviation between the maximum objective function value found by the CGWO algorithm and the actual objective function value derived by the exhaustive method reached 0.0033, an error rate of 0.528%, across the different search space sizes; moreover, the higher the complexity of the problem search space, the smaller the deviation of the results and the higher the accuracy.

  • When the problem search space complexity increases (microservice combination B), the minimum execution time of the CGWO algorithm is 1139.36 ms, close to the minimum time of 1136.74 ms at low search space complexity and far smaller than the 42009 ms execution time of the exhaustive method, a 97.29% improvement in computational efficiency.

Based on these two conclusions, it is clear that the CGWO algorithm can accurately explore the solution space in a relatively short time. In addition, by varying the adjustable parameter p1 and weighing the best fitness value against the execution time, the best parameter configuration is found at p1 = 0.8.

4.1.2. Experiment 2: algorithm stability evaluation

In order to verify the stability of the CGWO optimisation performance, 50 optimisation experiments were conducted with the CGWO algorithm for each of microservice combination A (low search space complexity) and microservice combination B (high search space complexity), recording the average and best optimisation function values against the number of iterations; the results are shown in Figures 3 and 4. This experiment sets the adjustable parameter p1 = 0.8, an initial population size of 200, and a maximum of 500 iterations. The CGWO algorithm was run 50 times for each combination, and the difference between the average and maximum values of the objective function at different iterations was recorded and compared across the 50 experiments.

Figure 3. Diagram of microservice portfolio A objective function.


Figure 4. Diagram of the microservice portfolio B objective function.


Figures 3 and 4 show that, as the number of iterations changes, the difference between the average and maximum values of the optimisation function of the microservice combination is small and both follow a consistent trend. As the number of iterations increases, the average value of the optimisation function over the repeated experiments approaches its best value, indicating that the performance of the algorithm stabilises after convergence and thus that the CGWO optimisation algorithm proposed in this paper has high stability.

4.2. Degree of improvement of the CGWO algorithm

Experiment 3: To examine the degree of improvement of the CGWO algorithm relative to the grey wolf algorithm (the second objective), the two algorithms were compared using the average optimal objective function value and the average deviation as evaluation metrics. The CGWO and GWO algorithms were used to optimise microservice combinations A and B. The same initial population was used for each optimisation, with an initial population size of 200, 500 iterations, and the CGWO parameter p1 set to 0.8; a total of 50 experiments were performed, and the average optimal objective function value and average deviation of the two algorithms over the 50 experiments were compared. The results are shown in Tables 4 and 5 for microservice combinations A and B, respectively. Meanwhile, the average fitness values of the two optimisation algorithms for microservice combinations A and B at different numbers of iterations were calculated, and the optimisation effects of the two algorithms were compared, as shown in Figures 5 and 6.

Figure 5. Adaptation values of the optimised microservice combination A for different iterations of the two algorithms.


Figure 6. Adaptation values of the optimised microservice combination B for different iterations of the two algorithms.


Table 4. Comparison of experiments for optimisation of microservice portfolio A.

Table 5. Experimental comparison of microservice portfolio B optimisation.

The average optimal objective function values for optimising microservice combinations A and B using the CGWO algorithm are 0.6175 and 0.6217, respectively, higher than those of the GWO algorithm (0.6049 and 0.6074, respectively), indicating that CGWO is superior. Meanwhile, the average deviations between the optimal objective function values found by CGWO and the actual optimal values are 0.0065 and 0.0033, respectively, smaller than those of the GWO algorithm (0.0191 and 0.0176, respectively), indicating that CGWO improves the accuracy of finding the optimal value by 65.97% and 81.25%, respectively. This shows that the improvement strategies of the CGWO algorithm help it move beyond the current local optimum and find the global optimal solution during iterative optimisation. Also, as can be seen in Figures 5 and 6, the CGWO algorithm is consistently better than the GWO algorithm at different iterations. In summary, the CGWO algorithm is more effective than the GWO algorithm in finding the optimal solution for the microservice combination.

5. Conclusion

An improved grey wolf optimisation algorithm, CGWO, is proposed to find the combination of microservice instances with the largest objective function value. CGWO builds on the standard grey wolf algorithm (Yang et al., Citation2019): an elite opposition-based learning strategy initialises the grey wolf population to ensure population diversity at the first iteration, and vertical and horizontal crossover strategies are applied during the iterative optimisation. The two crossover operations prevent premature convergence and keep the final solution from being trapped in a local optimum: the vertical crossover lets dimensions whose optimisation has stalled jump out of local maxima, while the horizontal crossover expands the exploration space around the optimal solution. Finally, the proposed CGWO algorithm is experimentally validated on microservice combinatorial optimisation at two different problem sizes. First, CGWO is compared with the exhaustive method, which is exact, to verify its accuracy and computational efficiency in finding the optimal solution; then CGWO is compared with GWO to verify its superiority and stability in searching for the optimal solution. In future work, the increased time complexity introduced by the algorithmic improvements will be further optimised. Some research has suggested that incorporating neural networks may bring better results (Huang, Chen, et al., Citation2020), but this has not yet been proven effective.
Other impacts of the actual deployment and operating environment of microservices on the non-functionality of the microservice portfolio will also be considered, such as the reliability and security of containers and network transmission latency (Attaoui et al., Citation2022; Zhang, Zhu, et al., Citation2022; Zhang, Zhu, et al., Citation2021; Zhang, Wang, et al., Citation2019).
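The three operators summarised in the conclusion can be sketched as below. This is a hedged illustration over real-valued vectors, assuming the standard forms of elite opposition-based initialisation and the crisscross operators (Meng et al., Citation2014); the paper's discrete microservice-instance encoding, exact operator formulas, and parameter handling may differ.

```python
import random

def elite_opposition_init(pop, lower, upper, fitness):
    """For each wolf x, form the opposite point lower+upper-x within the
    search bounds, then keep the fittest half of the merged population,
    improving diversity at the first iteration (maximisation assumed)."""
    opposed = [[lower[d] + upper[d] - x[d] for d in range(len(x))] for x in pop]
    merged = pop + opposed
    merged.sort(key=fitness, reverse=True)
    return merged[:len(pop)]

def horizontal_crossover(x1, x2):
    """Arithmetic crossover between two different wolves in every dimension,
    expanding the exploration space around the pair."""
    c1, c2 = [], []
    for a, b in zip(x1, x2):
        r1, r2 = random.random(), random.random()
        e1, e2 = random.uniform(-1, 1), random.uniform(-1, 1)
        c1.append(r1 * a + (1 - r1) * b + e1 * (a - b))
        c2.append(r2 * b + (1 - r2) * a + e2 * (b - a))
    return c1, c2

def vertical_crossover(x, d1, d2):
    """Crossover between two dimensions d1, d2 of the same wolf, which can
    pull a stalled dimension out of a local optimum without disturbing the
    other dimensions."""
    r = random.random()
    child = list(x)
    child[d1] = r * x[d1] + (1 - r) * x[d2]
    return child
```

In a full CGWO loop, offspring from either crossover would replace their parents only when they improve the objective value, so neither operator can degrade the best solution found so far.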

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Key R&D Program of China (Grant No. 2020YFB1709503).

References

  • Ai, S., Hong, S., Zheng, X., Wang, Y., & Liu, X. (2021). CSRT rumor spreading model based on complex network. International Journal of Intelligent Systems, 36(5), 1903–1913. https://doi.org/10.1002/int.22365
  • Attaoui, W., Sabir, E., Elbiaze, H., & Guizani, M. (2022). VNF and Container Placement: Recent Advances and Future Trends. Computer Science, arXiv preprint arXiv:2204.00178. https://doi.org/10.48550/arXiv.2204.00178
  • Barkat, A., Kazar, O., & Seddiki, I. (2021). Framework for web service composition based on QoS in the multi cloud environment. International Journal of Information Technology, 13(2), 459–467. https://doi.org/10.1007/s41870-020-00564-z
  • Boussaïd, I., Lepagnot, J., & Siarry, P. (2013). A survey on optimization metaheuristics. Information Sciences, 237, 82–117. https://doi.org/10.1016/j.ins.2013.02.041
  • Brady, K., Moon, S., Nguyen, T., & Coffman, J. (2020, January 6–8). Docker container security in cloud computing. In 10th Annual Computing and Communication Workshop and Conference, CCWC 2020, Las Vegas, NV, USA, 0975–0980. https://doi.org/10.1109/CCWC47524.2020.9031195
  • Brasser, F., Jauernig, P., Pustelnik, F., Sadeghi, A. R., & Stapf, E. (2022). Trusted Container Extensions for Container-based Confidential Computing. Cryptography and Security, arXiv:2205.05747v1. https://doi.org/10.48550/arXiv.2205.05747
  • Cerny, T., Donahoo, M. J., & Trnka, M. (2018). Contextual understanding of microservice architecture. ACM SIGAPP Applied Computing Review, 17(4), 29–45. https://doi.org/10.1145/3183628.3183631
  • Chen, C., & Huang, T. (2021). Camdar-adv: Generating adversarial patches on 3D object. International Journal of Intelligent Systems, 36(3), 1441–1453. https://doi.org/10.1002/int.22349
  • Han, T., Zhang, L., & Jia, S. (2022). Bin similarity-based domain adaptation for fine-grained image classification. International Journal of Intelligent Systems, 37(3), 2319–2334. https://doi.org/10.1002/int.22775
  • Hu, L., Yan, H., Li, L., Pan, Z., Liu, X., & Zhang, Z. (2021). MHAT: An efficient model-heterogenous aggregation training scheme for federated learning. Information Sciences, 560, 493–503. https://doi.org/10.1016/j.ins.2021.01.046
  • Huang, T., Chen, Y., Yao, B., Yang, B., Wang, X., & Li, Y. (2020). Adversarial attacks on deep-learning-based radar range profile target recognition. Information Sciences, 531, 159–176. https://doi.org/10.1016/j.ins.2020.03.066
  • Huang, T., Zhang, Q., Liu, J., Hou, R., Wang, X., & Li, Y. (2020). Adversarial attacks on deep-learning-based SAR image target recognition. Journal of Network and Computer Applications, 162, 102632. https://doi.org/10.1016/j.jnca.2020.102632
  • Katoch, S., Chauhan, S. S., & Kumar, V. (2021). A review on genetic algorithm: Past, present, and future. Multimedia Tools and Applications, 80(5), 8091–8126. https://doi.org/10.1007/s11042-020-10139-6
  • Kuang, X., Zhang, M., Li, H., Zhao, G., Cao, H., Wu, Z., & Wang, X. (2019, December). DeepWAF: Detecting web attacks based on CNN and LSTM models. In J. Vaidya, X. Zhang, & J. Li (Eds.), Cyberspace safety and security (pp. 121–136). Springer. https://doi.org/10.1007/978-3-030-37352-8_11.
  • Li, C., Tang, Y., Tang, Z., Cao, J., & Zhang, Y. (2022). Motif-based embedding label propagation algorithm for community detection. International Journal of Intelligent Systems, 37(3), 1880–1902. https://doi.org/10.1002/int.22759
  • Li, Y., Yao, S., Zhang, R., & Yang, C. (2021). Analyzing host security using D-S evidence theory and multisource information fusion. International Journal of Intelligent Systems, 36(2), 1053–1068. https://doi.org/10.1002/int.22330
  • Liang, C., Wang, X., Zhang, X., Zhang, Y., Sharif, K., & Tan, Y. A. (2018). A payload-dependent packet rearranging covert channel for mobile VoIP traffic. Information Sciences, 465, 162–173. https://doi.org/10.1016/j.ins.2018.07.011
  • Mendonça, N. C., Jamshidi, P., Garlan, D., & Pahl, C. (2019). Developing self-adaptive microservice systems: Challenges and directions. IEEE Software, 38(2), 70–79. https://doi.org/10.1109/MS.2019.2955937
  • Meng, A. B., Chen, Y. C., Yin, H., & Chen, S. Z. (2014). Crisscross optimization algorithm and its application. Knowledge-Based Systems, 67, 218–229. https://doi.org/10.1016/j.knosys.2014.05.004
  • Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007
  • Mo, K., Huang, T., & Xiang, X. (2020, October). Querying little is enough: Model inversion attack via latent information. In X. Chen, H. Yan, Q. Yan, & X. Zhang (Eds.), Machine learning for cyber security (pp. 681–690). Springer. https://doi.org/10.1002/int.22315.
  • Mo, K., Tang, W., Li, J., & Yuan, X. (2022). Attacking deep reinforcement learning with decoupled adversarial policy. IEEE Transactions on Dependable and Secure Computing, https://doi.org/10.1109/TDSC.2022.3143566
  • Naseri, A., & Jafari Navimipour, N. (2019). A new agent-based method for QoS-aware cloud service composition using particle swarm optimization algorithm. Journal of Ambient Intelligence and Humanized Computing, 10(5), 1851–1864. https://doi.org/10.1007/s12652-018-0773-8
  • Potdar, A. M., Narayan, D. G., Kengond, S., & Mulla, M. M. (2020). Performance evaluation of docker container and virtual machine. Procedia Computer Science, 171, 1419–1428. https://doi.org/10.1016/j.procs.2020.04.152
  • Ren, H., Huang, T., & Yan, H. (2021). Adversarial examples: Attacks and defenses in the physical world. International Journal of Machine Learning and Cybernetics, 12(11), 3325–3336. https://doi.org/10.1007/s13042-020-01242-z
  • Sefati, S., & Navimipour, N. J. (2021). A QoS-aware service composition mechanism in the Internet of Things using a hidden-Markov-model-based optimization algorithm. IEEE Internet of Things Journal, 8(20), 15620–15627. https://doi.org/10.1109/JIOT.2021.3074499
  • Sengupta, S., Basak, S., & Peters, R. A. (2018). Particle swarm optimization: A survey of historical and recent developments with hybridization perspectives. Machine Learning and Knowledge Extraction, 1(1), 157–191. https://doi.org/10.3390/make1010010
  • Wang, X., Li, J., Kuang, X., Tan, Y. A., & Li, J. (2019). The security of machine learning in an adversarial setting: A survey. Journal of Parallel and Distributed Computing, 130, 12–23. https://doi.org/10.1016/j.jpdc.2019.03.003
  • Wang, Y., Yang, G., Li, T., Zhang, L., Wang, Y., Ke, L., Dou, Y., Li, S., & Yu, X. (2020). Optimal mixed block withholding attacks based on reinforcement learning. International Journal of Intelligent Systems, 35(12), 2032–2048. https://doi.org/10.1002/int.22282
  • Wu, C., & Li, W. (2021). Enhancing intrusion detection with feature selection and neural network. International Journal of Intelligent Systems, 36(7), 3087–3105. https://doi.org/10.1002/int.22397
  • Xiaofeng, Z., & Xiuying, W. (2019). Comprehensive review of grey wolf optimization algorithm. Computer Science, 46(03), 30–38. https://doi.org/10.11896/j.issn.1002-137X.2019.03.004
  • Yan, H., Chen, M., Hu, L., & Jia, C. (2020). Secure video retrieval using image query on an untrusted cloud. Applied Soft Computing, 97, 106782. https://doi.org/10.1016/j.asoc.2020.106782
  • Yan, H., Hu, L., Xiang, X., Liu, Z., & Yuan, X. (2021). PPCL: Privacy-preserving collaborative learning for mitigating indirect information leakage. Information Sciences, 548, 423–437. https://doi.org/10.1016/j.ins.2020.09.064
  • Yang, Y., Yang, B., Wang, S., Jin, T., & Li, S. (2020). An enhanced multi-objective grey wolf optimizer for service composition in cloud manufacturing. Applied Soft Computing, 87, 106003. https://doi.org/10.1016/j.asoc.2019.106003
  • Yang, Y., Yang, B., Wang, S., Liu, W., & Jin, T. (2019). An improved grey wolf optimizer algorithm for energy-aware service composition in cloud manufacturing. The International Journal of Advanced Manufacturing Technology, 105(7), 3079–3091. https://doi.org/10.1007/s00170-019-04449-9
  • Yu, X., Wang, Z., Wang, Y., Li, F., Li, T., Chen, Y., Tian, Y., & Yu, X. (2021). Impsuic: A quality updating rule in mixing coins with maximum utilities. International Journal of Intelligent Systems, 36(3), 1182–1198. https://doi.org/10.1002/int.22337
  • Zhang, C., Cui, X., Lian, S., Xiao, R., Qiao, H., Li, S., Lou, Y., Feng, Y., Zhuang, L., Du, J., & Liu, X. (2022). Intelligent algorithm for dynamic functional brain network complexity from CN to AD. International Journal of Intelligent Systems, 37(8), 4715–4746. https://doi.org/10.1002/int.22737
  • Zhang, D., You, X., Liu, S., & Yang, K. (2019). Multi-colony ant colony optimization based on generalized jaccard similarity recommendation strategy. IEEE Access, 7, 157303–157317. https://doi.org/10.1109/ACCESS.2019.2949860
  • Zhang, N., Xue, J., Ma, Y., Zhang, R., Liang, T., & Tan, Y. A. (2021). Hybrid sequence-based android malware detection using natural language processing. International Journal of Intelligent Systems, 36(10), 5770–5784. https://doi.org/10.1002/int.22529
  • Zhang, Q., Wang, X., Yuan, J., Liu, L., Wang, R., Huang, H., & Li, Y. (2019). A hierarchical group key agreement protocol using orientable attributes for cloud computing. Information Sciences, 480, 55–69. https://doi.org/10.1016/j.ins.2018.12.023
  • Zhang, Q., Zhu, L., Li, Y., Ma, Z., Yuan, J., Zheng, J., & Ai, S. (2022). A group key agreement protocol for intelligent internet of things system. International Journal of Intelligent Systems, 37(1), 699–722. https://doi.org/10.1002/int.22644
  • Zhang, Q., Zhu, L., Wang, R., Li, J., Yuan, J., Liang, T., & Zheng, J. (2021). Group key agreement protocol among terminals of the intelligent information system for mobile edge computing. International Journal of Intelligent Systems, https://doi.org/10.1002/int.22544