ABSTRACT
Cranes are used extensively in manufacturing workshops to move jobs, but their high complexity and dynamic behaviour make workshop production scheduling difficult. To address this issue, this article proposes a deep reinforcement learning method combined with discrete-event simulation to minimize the makespan of the double-deck traversable crane flexible job-shop scheduling problem (DTCFJSP). The problem is first formulated as a finite Markov decision process by defining a state representation, an action space and a reward function. A new double deep Q-network is then used to learn a strategy for selecting optimal actions in different states. Experimental results show that the average efficiency of the double-deck traversable crane is approximately 12% higher than that of regular cranes, and that applying deep reinforcement learning to crane scheduling is feasible and effective.
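The core mechanism the abstract refers to, double Q-learning, decouples action selection from action evaluation to reduce the overestimation bias of standard Q-learning. The following is a minimal tabular sketch of that decoupling on a hypothetical toy environment (a five-state line world), not the authors' deep network or scheduling problem; all names and parameters here are illustrative assumptions.

```python
import random

# Hypothetical toy line world: states 0..4; action 0 = left, 1 = right.
# Reward 1 for reaching the terminal state 4, otherwise 0.
N_STATES, TERMINAL = 5, 4

def step(s, a):
    """Deterministic transition; returns (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(TERMINAL, s + 1)
    return s2, (1.0 if s2 == TERMINAL else 0.0), s2 == TERMINAL

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    QA = [[0.0, 0.0] for _ in range(N_STATES)]
    QB = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy on the sum of both estimates.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: QA[s][x] + QB[s][x])
            s2, r, done = step(s, a)
            # Double Q-learning update: one table selects the next
            # action, the other evaluates it.
            if rng.random() < 0.5:
                a_star = max((0, 1), key=lambda x: QA[s2][x])
                target = r + (0.0 if done else gamma * QB[s2][a_star])
                QA[s][a] += alpha * (target - QA[s][a])
            else:
                a_star = max((0, 1), key=lambda x: QB[s2][x])
                target = r + (0.0 if done else gamma * QA[s2][a_star])
                QB[s][a] += alpha * (target - QB[s][a])
            s = s2
    return QA, QB

QA, QB = train()
# Greedy policy from the combined estimates: move right toward the goal.
policy = [max((0, 1), key=lambda a: QA[s][a] + QB[s][a]) for s in range(N_STATES)]
```

In the deep variant used in work of this kind, the two tables become an online network and a target network, and states are the learned feature representation of the shop floor rather than integer indices.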
Disclosure statement
No potential conflict of interest was reported by the authors.
Data availability statement
The authors confirm that the data supporting the findings of this study are available within the article.