Artificial Intelligence Applications in Healthcare Supply Chain Networks under Disaster Conditions

Emergency medical supplies scheduling during public health emergencies: algorithm design based on AI techniques

Received 08 Mar 2023, Accepted 25 Sep 2023, Published online: 01 Nov 2023

Figures & data

Table 1. Emergency supplies dispatch table.

Table 2. The parameter list.

Figure 1. Reinforcement learning mechanism.

Pheromone communication is carried out between population A and population B, and the Markov dynamic decision-making of reinforcement learning is used to reward or punish population A or population B.
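The reward/punish step of this mechanism can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scaling factors and the function name are assumptions, and `deposit_a`/`deposit_b` stand for the pheromone-deposit intensities of the two ant populations.

```python
# Sketch of the reinforcement step between two ant populations
# (population A and population B in Figure 1). After each generation
# the population with the better best-so-far cost is rewarded (its
# pheromone deposit is amplified) and the other is punished (damped).
# REWARD and PUNISH are illustrative values, not from the paper.
REWARD, PUNISH = 1.1, 0.9

def exchange_and_reinforce(best_cost_a, best_cost_b, deposit_a, deposit_b):
    """Compare the populations' current bests and scale their deposits."""
    if best_cost_a < best_cost_b:          # A performed better
        return deposit_a * REWARD, deposit_b * PUNISH
    if best_cost_b < best_cost_a:          # B performed better
        return deposit_a * PUNISH, deposit_b * REWARD
    return deposit_a, deposit_b            # tie: leave both unchanged
```

The scaled deposits then feed into each population's ordinary pheromone update, so the better-performing colony exerts more influence on subsequent tours.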

Figure 2. Location distribution diagram of the example: (a) Example R101; (b) Example C101.

Figures (a) and (b) show the location distribution of the 24 suppliers' demand points and 3 distribution centres in the R101 and C101 examples, respectively.

Table 3. R101 example demand point information table.

Table 4. R101 example distribution centre information table.

Table 5. R101 example distribution vehicle information table.

Table 6. C101 example demand point information table.

Table 7. C101 example distribution centre information table.

Table 8. C101 example distribution vehicle information table.

Figure 3. Iteration diagram of fairness index optimisation: (a) R101 iterative graph of fairness index optimisation; (b) C101 iterative graph of fairness index optimisation.

In Figure (a), the fairness index of example R101 converges after about 100 iterations, to a value of 0.083; in Figure (b), the fairness index of example C101 converges after about 230 iterations, to a value of 0.054.
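The paper's exact fairness index is not reproduced on this page; as a stand-in, the sketch below uses a common dispersion-style measure, the coefficient of variation of the demand-satisfaction rates across demand points. The function name and the choice of measure are assumptions; it matches the "smaller is better" behaviour seen in Figure 3, where a value of 0 means perfectly even allocation.

```python
from statistics import mean, pstdev

def fairness_index(allocated, demanded):
    """Dispersion of satisfaction rates across demand points.

    Illustrative stand-in for the paper's fairness index: the
    coefficient of variation of allocated/demanded ratios.
    Lower is fairer; 0 means every point is served at the same rate.
    """
    rates = [a / d for a, d in zip(allocated, demanded)]
    m = mean(rates)
    return pstdev(rates) / m if m else 0.0

# Even allocation across two demand points gives index 0.0:
print(fairness_index([10, 10], [20, 20]))  # → 0.0
```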

Figure 4. Ant colony parameter tuning: (a) ACS algorithm α and β parameters; (b) MMAS algorithm α and β parameters; (c) ACS algorithm ρ parameter selection; (d) MMAS algorithm ρ parameter selection; (e) ant quantity.

In Figure (a), the ACS algorithm is used to obtain the optimal α and β factors, giving α = 2 and β = 5.5; in Figure (b), the MMAS algorithm is used, giving α = 2.5 and β = 5; in Figure (c), the ACS algorithm screens ρ, with the optimum at ρ = 0.2; in Figure (d), the MMAS algorithm screens ρ, with the optimum at ρ = 0.8; Figure (e) compares ant counts of 12, 16, 24, 30, and 36: 24 ants gives the best objective value and the fastest convergence.
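The tuning procedure behind Figure 4 amounts to a grid search over the colony parameters. The sketch below is an assumed outline, not the paper's code: `run_aco` is a placeholder for one full ACS or MMAS run, and the candidate grids simply bracket the values reported in the text.

```python
from itertools import product

# Illustrative candidate grids; the optima reported in the text
# (alpha = 2 or 2.5, beta = 5 or 5.5, rho = 0.2 or 0.8, 24 ants)
# fall inside these ranges.
ALPHAS = [1.5, 2.0, 2.5]
BETAS = [4.5, 5.0, 5.5]
RHOS = [0.2, 0.5, 0.8]
ANTS = [12, 16, 24, 30, 36]

def tune(run_aco):
    """Grid-search the ACO parameters, keeping the lowest-cost setting."""
    best_cost, best_cfg = float("inf"), None
    for alpha, beta, rho, n in product(ALPHAS, BETAS, RHOS, ANTS):
        cost = run_aco(alpha=alpha, beta=beta, rho=rho, n_ants=n)
        if cost < best_cost:
            best_cost, best_cfg = cost, (alpha, beta, rho, n)
    return best_cfg, best_cost
```

In practice each `run_aco` call would itself be averaged over several random seeds, since a single ACO run is stochastic.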

Figure 5. Roadmap of vehicle distribution: (a) R101 vehicle distribution roadmap; (b) C101 vehicle distribution roadmap.

Figure (a) shows the optimal vehicle distribution routes for the R101 example, listed in Table 9; Figure (b) shows the optimal vehicle distribution routes for the C101 example, listed in Table 10.

Table 9. R101 example vehicle distribution routes.

Table 10. C101 example vehicle distribution routes.

Table 11. Comparison of the results of two allocation schemes.

Table 12. R101 example algorithm comparison results.

Table 13. C101 example algorithm comparison results.

Figure 6. Comparison of algorithm convergence: (a) comparison of algorithm convergence in the R101 example; (b) comparison of algorithm convergence in the C101 example.

Figures (a) and (b) compare the ACS-MMAS, ACS, and MMAS algorithms on the R101 and C101 examples, respectively. The results show that the Z2 value obtained by the ACS-MMAS algorithm is smaller than those obtained by the ACS and MMAS algorithms.

Figure 7. Pareto frontier comparison of algorithms: (a) R101 example Pareto frontier; (b) C101 example Pareto frontier.

Figures (a) and (b) show that, for the R101 and C101 examples respectively, the ACS-MMAS algorithm obtains more Pareto solutions on the Pareto frontier, and its solution space is larger than that of the ACS and MMAS algorithms.

Figure 8. The shaded area is the HV value of the solution set S′.

a1, a2, and a3 are the three Pareto solutions; the hypervolume (HV) value is the sum of the volumes of the hypercubes they dominate with respect to the reference point.
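Figure 8's shaded-area construction can be sketched for the two-objective (minimisation) case as follows. The points and reference point below are illustrative, not the paper's data, and the function assumes a mutually non-dominated solution set.

```python
def hypervolume_2d(solutions, ref):
    """Hypervolume of a 2-objective (minimisation) Pareto set.

    Area dominated by the solutions and bounded above by the
    reference point `ref`; a larger HV means a better front.
    Assumes `solutions` is a non-dominated set of (f1, f2) points.
    """
    pts = sorted(solutions)        # ascending f1 → descending f2
    hv, prev_f1 = 0.0, ref[0]
    for f1, f2 in reversed(pts):   # sweep from worst f1 to best
        hv += (prev_f1 - f1) * (ref[1] - f2)  # exclusive strip area
        prev_f1 = f1
    return hv

# Three illustrative Pareto solutions a1, a2, a3 and a reference point:
front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 5.0)))  # → 8.0
```

Each term of the sweep adds one non-overlapping rectangle, so the total is exactly the shaded union of the dominated boxes, which is what Table 14 compares across the algorithms.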

Table 14. Algorithm HV value comparison.

Data availability statement

Data available on request from the authors.