Intelligent scheduling of discrete automated production line via deep reinforcement learning

Pages 3362–3380 | Received 06 Apr 2019, Accepted 09 Jan 2020, Published online: 27 Jan 2020

References

  • Bellman, R. E., and S. E. Dreyfus. 1962. Applied Dynamic Programming. Princeton, NJ: Princeton University Press.
  • Gabel, T., and M. Riedmiller. 2007. “Scaling Adaptive Agent-Based Reactive Job-Shop Scheduling to Large-Scale Problems.” 2007 IEEE Symposium on Computational Intelligence in Scheduling, Honolulu, HI, USA, 259–266.
  • Kara, A., and I. Dogan. 2018. “Reinforcement Learning Approaches for Specifying Ordering Policies of Perishable Inventory Systems.” Expert Systems With Applications 91: 150–158. doi: 10.1016/j.eswa.2017.08.046
  • Kuhnle, A., N. Röhrig, and G. Lanza. 2019. “Autonomous Order Dispatching in the Semiconductor Industry Using Reinforcement Learning.” Procedia CIRP 79: 391–396. doi: 10.1016/j.procir.2019.02.101
  • Li, Lin-ying, Rui Lu, and Jie Zang. 2016. “Scheduling Model of Cluster Tools for Concurrent Processing of Multiple Wafer Types.” Mathematics in Practice and Theory (16): 152–161.
  • Li, X., J. Wang, and R. Sawhney. 2012. “Reinforcement Learning for Joint Pricing, Lead-Time and Scheduling Decisions in Make-to-Order Systems.” European Journal of Operational Research 221 (1): 99–109. doi: 10.1016/j.ejor.2012.03.020
  • Lin, Zhongwei, and Yiping Yao. 2015. “Load Balancing for Parallel Discrete Event Simulation of Stochastic Reaction and Diffusion.” 2015 IEEE International Conference on Smart City/SocialCom/SustainCom (SmartCity), Chengdu, China, 609–614.
  • Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, et al. 2015. “Human-level Control Through Deep Reinforcement Learning.” Nature 518 (7540): 529–533. doi: 10.1038/nature14236
  • Palombarini, J., and E. Martínez. 2012. “SmartGantt – An Intelligent System for Real Time Rescheduling Based on Relational Reinforcement Learning.” Expert Systems With Applications 39 (11): 10251–10268. doi: 10.1016/j.eswa.2012.02.176
  • Paternina-Arboleda, C. D., and T. K. Das. 2005. “A Multi-Agent Reinforcement Learning Approach to Obtaining Dynamic Control Policies for Stochastic Lot Scheduling Problem.” Simulation Modelling Practice & Theory 13 (5): 389–406. doi: 10.1016/j.simpat.2004.12.003
  • Riedmiller, S. C., and M. A. Riedmiller. 1999. “A Neural Reinforcement Learning Approach to Learn Local Dispatching Policies in Production Scheduling.” Sixteenth International Joint Conference on Artificial Intelligence, Stockholm, Sweden.
  • Shahrabi, J., M. A. Adibi, and M. Mahootchi. 2017. “A Reinforcement Learning Approach to Parameter Estimation in Dynamic Job Shop Scheduling.” Computers & Industrial Engineering 110: 75–82. doi: 10.1016/j.cie.2017.05.026
  • Shin, M., K. Ryu, and M. Jung. 2012. “Reinforcement Learning Approach to Goal-Regulation in a Self-Evolutionary Manufacturing System.” Expert Systems With Applications 39 (10): 8736–8743. doi: 10.1016/j.eswa.2012.01.207
  • Shiue, Y. R., K. C. Lee, and C. T. Su. 2018. “Real-time Scheduling for a Smart Factory Using a Reinforcement Learning Approach.” Computers & Industrial Engineering 125: 604–614. doi: 10.1016/j.cie.2018.03.039
  • Stricker, N., A. Kuhnle, R. Sturm, and S. Friess. 2018. “Reinforcement Learning for Adaptive Order Dispatching in the Semiconductor Industry.” CIRP Annals – Manufacturing Technology 67 (1): 511–514. doi: 10.1016/j.cirp.2018.04.041
  • Szepesvári, Csaba. 2010. Algorithms for Reinforcement Learning. Synthesis Digital Library of Engineering and Computer Science. San Rafael, CA: Morgan & Claypool.
  • Waschneck, B., A. Reichstaller, L. Belzner, T. Altenmüller, T. Bauernhansl, T. Knapp, and A. Kyek. 2018a. “Deep Reinforcement Learning for Semiconductor Production Scheduling.” 2018 29th Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC), 301–306.
  • Waschneck, B., A. Reichstaller, L. Belzner, T. Altenmüller, T. Bauernhansl, T. Knapp, and A. Kyek. 2018b. “Optimization of Global Production Scheduling with Deep Reinforcement Learning.” Procedia CIRP 72: 1264–1269. doi: 10.1016/j.procir.2018.03.212
  • Zhang, W., and T. G. Dietterich. 1995. “A Reinforcement Learning Approach to Job-Shop Scheduling.” International Joint Conference on Artificial Intelligence. Montréal: Morgan Kaufmann Publishers.
  • Zhang, W., and T. G. Dietterich. 1996. “High-Performance Job-Shop Scheduling with a Time-Delay TD(λ) Network.” Advances in Neural Information Processing Systems 1996: 1024–1030.
  • Zhang, Z., L. Zheng, N. Li, W. Wang, S. Zhong, and K. Hu. 2012. “Minimizing Mean Weighted Tardiness in Unrelated Parallel Machine Scheduling with Reinforcement Learning.” Computers & Operations Research 39 (7): 1315–1324. doi: 10.1016/j.cor.2011.07.019
  • Zweben, M., E. Davis, B. Daun, and M. J. Deale. 1993. “Scheduling and Rescheduling with Iterative Repair.” IEEE Transactions on Systems, Man and Cybernetics 23 (6): 1588–1596. doi: 10.1109/21.257756
