Computers and Computing

Integrating Two-Level Reinforcement Learning Process for Enhancing Task Scheduling Efficiency in a Complex Problem-Solving Environment

