Abstract
Industry 5.0 emphasises collaborative work between humans, advanced technologies, and artificial-intelligence-enabled robots, focusing on human-centric principles and integrating flexibility and sustainability to enhance workflows. Advances in human-robot interaction services can significantly improve the operational efficiency of digital twin manufacturing cells. Motivated by this background, a gesture-driven interaction architecture for digital twin manufacturing cells is proposed, comprising a data acquisition layer, a data processing layer, and an application service layer. Deep learning algorithms are then employed to recognise predefined gestures: an improved YOLOv5 algorithm addresses the low accuracy of static gesture recognition, while a 3D-CNN-based multimodal data fusion algorithm addresses the continuity, diversity, and dimensionality challenges of dynamic gesture recognition. Finally, a prototype system is developed using Kinect 2.0 and Unity 3D, linking gesture recognition to the digital twin model and the digital twin model to the physical manufacturing cells. This study is expected to provide theoretical and practical insights that empower human-robot interaction technology in manufacturing cells.
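The abstract's distinction between static and dynamic gesture recognition rests on the 3D convolution operation, which slides a kernel over time as well as space so that motion across consecutive frames contributes to each output value. As background only (this is a naive NumPy sketch of the generic operation, not the paper's network or its multimodal fusion), a single-channel, valid-mode 3D convolution over a clip of frames can be written as:

```python
import numpy as np

def conv3d_single(volume, kernel):
    """Naive valid-mode 3D convolution (cross-correlation) of a
    single-channel spatiotemporal volume with one kernel.

    volume: (T, H, W) array, a clip of T grayscale frames.
    kernel: (kt, kh, kw) array spanning kt frames in time.
    Returns an array of shape (T-kt+1, H-kh+1, W-kw+1).
    """
    kt, kh, kw = kernel.shape
    T, H, W = volume.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):          # slide over time
        for i in range(out.shape[1]):      # slide over height
            for j in range(out.shape[2]):  # slide over width
                # Each output value mixes kt frames, so temporal
                # motion patterns (continuity) are captured directly.
                out[t, i, j] = np.sum(
                    volume[t:t + kt, i:i + kh, j:j + kw] * kernel)
    return out
```

A 2D convolution applied frame-by-frame would see each frame in isolation; extending the kernel along the time axis is what lets a 3D-CNN model the continuity of dynamic gestures that the abstract highlights.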
Acknowledgements
This work was supported in part by the National Key R&D Program of China (No. 2021YFB3301702), and the Major Special Science and Technology Project of Shaanxi Province, China (No. 2018zdzx01-01-01).
Disclosure statement
No potential conflict of interest was reported by the author(s).