Research Article

Deep reinforcement learning on variable stiffness compliant control for programming-free robotic assembly in smart manufacturing

Received 25 Jul 2023, Accepted 01 Feb 2024, Published online: 18 Feb 2024

References

  • Bi, Z. M., and S. Y. T. Lang. 2007. “Automated Robotic Programming for Products with Changes.” International Journal of Production Research 45 (9): 2105–2118. https://doi.org/10.1080/00207540600733634
  • Chai, Hua, Chunpong So, and Philip F. Yuan. 2021. “Manufacturing Double-curved Glulam with Robotic Band Saw Cutting Technique.” Automation in Construction 124:103571. https://doi.org/10.1016/j.autcon.2021.103571
  • Esteso, Ana, David Peidro, Josefa Mula, and Manuel Díaz-Madroñero. 2022. “Reinforcement Learning Applied to Production Planning and Control.” International Journal of Production Research 61: 1–18. https://doi.org/10.1080/00207543.2022.2104180.
  • Guo, Wanjin, Yaguang Zhu, and Xu He. 2020. “A Robotic Grinding Motion Planning Methodology for a Novel Automatic Seam Bead Grinding Robot Manipulator.” IEEE Access 8:75288–75302. https://doi.org/10.1109/ACCESS.2020.2987807
  • Hägele, Martin, Klas Nilsson, and J. Norberto Pires. 2008. “Industrial Robotics.” In Springer Handbook of Robotics, edited by Bruno Siciliano and Oussama Khatib, 963–986. Berlin, Heidelberg: Springer.
  • Hao, Peng, Tao Lu, Shaowei Cui, Junhang Wei, Yinghao Cai, and Shuo Wang. 2022. “Meta-residual Policy Learning: Zero-trial Robot Skill Adaptation Via Knowledge Fusion.” IEEE Robotics and Automation Letters 7 (2): 3656–3663. https://doi.org/10.1109/LRA.2022.3146916
  • Hogan, N. 1985. “Impedance Control – An Approach to Manipulation. I – Theory. II – Implementation. III – Applications.” Journal of Dynamic Systems, Measurement, and Control 107. https://doi.org/10.1115/1.3140702.
  • Hou, Zhimin, Zhihu Li, Chenwei Hsu, Kuangen Zhang, and Jing Xu. 2022. “Fuzzy Logic-Driven Variable Time-Scale Prediction-Based Reinforcement Learning for Robotic Multiple Peg-in-Hole Assembly.” IEEE Transactions on Automation Science and Engineering 19 (1): 218–229. https://doi.org/10.1109/TASE.2020.3024725
  • Jaura, Arun, M. O. M. Osman, and Nicholas Krouglicof. 1998. “Hybrid Compliance Control for Intelligent Assembly in a Robot Work Cell.” International Journal of Production Research 36 (9): 2573–2583. https://doi.org/10.1080/002075498192706
  • Jiang, Jingang, Zhiyuan Huang, Zhuming Bi, Xuefeng Ma, and Guang Yu. 2020. “State-of-the-Art Control Strategies for Robotic PiH Assembly.” Robotics and Computer-Integrated Manufacturing 65:101894. https://doi.org/10.1016/j.rcim.2019.101894
  • Kaneko, Takeshi, Masashi Sekiya, Kunihiro Ogata, Sho Sakaino, and Toshiaki Tsuji. 2016. “Force Control of a Jumping Musculoskeletal Robot with Pneumatic Artificial Muscles.” In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5813–5818. IEEE.
  • Khader, Shahbaz Abdul, Hang Yin, Pietro Falco, and Danica Kragic. 2020. “Stability-guaranteed Reinforcement Learning for Contact-rich Manipulation.” IEEE Robotics and Automation Letters 6 (1): 1–8.
  • Kim, Songi, and Keeheon Lee. 2023. “The Paradigm Shift of Mass Customisation Research.” International Journal of Production Research 61 (10): 3350–3376. https://doi.org/10.1080/00207543.2022.2081629
  • Kusiak, Andrew. 2018. “Smart Manufacturing.” International Journal of Production Research 56 (1-2): 508–517. https://doi.org/10.1080/00207543.2017.1351644
  • LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. “Deep Learning.” Nature 521 (7553): 436–444. https://doi.org/10.1038/nature14539
  • Lian, Wenzhao, Tim Kelch, Dirk Holz, Adam Norton, and Stefan Schaal. 2021. “Benchmarking Off-The-Shelf Solutions to Robotic Assembly Tasks.” In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1046–1053. IEEE.
  • Liao, Yongxin, Fernando Deschamps, Eduardo de Freitas Rocha Loures, and Luiz Felipe Pierin Ramos. 2017. “Past, Present and Future of Industry 4.0-a Systematic Literature Review and Research Agenda Proposal.” International Journal of Production Research 55 (12): 3609–3629. https://doi.org/10.1080/00207543.2017.1308576
  • Lillicrap, Timothy P., Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2016. “Continuous Control with Deep Reinforcement Learning.” In 4th International Conference on Learning Representations, ICLR 2016 – Conference Track Proceedings.
  • Liu, Gang, Bitao Yao, Wenjun Xu, and Xuedong Liu. 2022. “Optimizing Non-Diagonal Stiffness Matrix of Compliance Control for Robotic Assembly Using Deep Reinforcement Learning.” In Journal of Physics: Conference Series, Vol. 2402, No. 1, 012013. IOP Publishing.
  • Luo, Jianlan, Eugen Solowjow, Chengtao Wen, Juan Aparicio Ojea, Alice M Agogino, Aviv Tamar, and Pieter Abbeel. 2019. “Reinforcement Learning on Variable Impedance Controller for High-Precision Robotic Assembly.” In 2019 International Conference on Robotics and Automation (ICRA), 3080–3087. IEEE.
  • Mason, Matthew T. 1981. “Compliance and Force Control for Computer Controlled Manipulators.” IEEE Transactions on Systems, Man, and Cybernetics 11 (6): 418–432. https://doi.org/10.1109/TSMC.1981.4308708
  • Merhi, Mohammad I., and Antoine Harfouche. 2023. “Enablers of Artificial Intelligence Adoption and Implementation in Production Systems.” International Journal of Production Research 0 (0): 1–15. https://doi.org/10.1080/00207543.2023.2167014
  • Michel, Youssef, Matteo Saveriano, and Dongheui Lee. 2023. “A Passivity-based Approach for Variable Stiffness Control with Dynamical Systems.” IEEE Transactions on Automation Science and Engineering. https://doi.org/10.1109/TASE.2023.3324141.
  • Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, et al. 2015. “Human-level Control Through Deep Reinforcement Learning.” Nature 518 (7540): 529–533. https://doi.org/10.1038/nature14236
  • Oikawa, Masahide, Tsukasa Kusakabe, Kyo Kutsuzawa, Sho Sakaino, and Toshiaki Tsuji. 2021. “Reinforcement Learning for Robotic Assembly Using Non-Diagonal Stiffness Matrix.” IEEE Robotics and Automation Letters 6 (2): 2737–2744. https://doi.org/10.1109/LRA.2021.3060389
  • Oikawa, Masahide, Kyo Kutsuzawa, Sho Sakaino, and Toshiaki Tsuji. 2020. “Admittance Control Based on a Stiffness Ellipse for Rapid Trajectory Deformation.” In 2020 IEEE 16th International Workshop on Advanced Motion Control (AMC), 23–28. IEEE.
  • Panzer, Marcel, and Benedict Bender. 2022. “Deep Reinforcement Learning in Production Systems: a Systematic Literature Review.” International Journal of Production Research 60 (13): 4316–4341. https://doi.org/10.1080/00207543.2021.1973138
  • Rai, Rahul, Manoj Kumar Tiwari, Dmitry Ivanov, and Alexandre Dolgui. 2021. “Machine Learning in Manufacturing and Industry 4.0 Applications.” International Journal of Production Research 59 (16): 4773–4778. https://doi.org/10.1080/00207543.2021.1956675
  • Raibert, M. H., and J. J. Craig. 1981. “Hybrid Position/force Control of Manipulators.” Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME 103 (2). https://doi.org/10.1115/1.3139652
  • Roveda, Loris, Asad Ali Shahid, Niccolò Iannacci, and Dario Piga. 2021. “Sensorless Optimal Interaction Control Exploiting Environment Stiffness Estimation.” IEEE Transactions on Control Systems Technology 30 (1): 218–233. https://doi.org/10.1109/TCST.2021.3061091
  • Roveda, Loris, Mauro Magni, Martina Cantoni, Dario Piga, and Giuseppe Bucca. 2021. “Human–robot Collaboration in Sensorless Assembly Task Learning Enhanced by Uncertainties Adaptation Via Bayesian Optimization.” Robotics and Autonomous Systems 136:103711. https://doi.org/10.1016/j.robot.2020.103711
  • Roveda, Loris, Jeyhoon Maskani, Paolo Franceschi, Arash Abdi, Francesco Braghin, Lorenzo Molinari Tosatti, and Nicola Pedrocchi. 2020. “Model-based Reinforcement Learning Variable Impedance Control for Human–robot Collaboration.” Journal of Intelligent & Robotic Systems 100 (2): 417–433. https://doi.org/10.1007/s10846-020-01183-3
  • Roveda, Loris, Daniele Riva, Giuseppe Bucca, and Dario Piga. 2021. “Sensorless Optimal Switching Impact/force Controller.” IEEE Access 9:158167–158184. https://doi.org/10.1109/ACCESS.2021.3131390
  • Roveda, Loris, Andrea Testa, Asad Ali Shahid, Francesco Braghin, and Dario Piga. 2022. “Q-Learning-based Model Predictive Variable Impedance Control for Physical Human–robot Collaboration.” Artificial Intelligence 312:103771. https://doi.org/10.1016/j.artint.2022.103771
  • Scherzinger, Stefan. 2021. “Human-Inspired Compliant Controllers for Robotic Assembly.” PhD diss., Karlsruher Institut für Technologie (KIT).
  • Scherzinger, Stefan, Arne Roennau, and Rüdiger Dillmann. 2017. “Forward Dynamics Compliance Control (FDCC): A New Approach to Cartesian Compliance for Robotic Manipulators.” In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4568–4575. IEEE.
  • Scherzinger, Stefan, Arne Roennau, and Rüdiger Dillmann. 2019. “Contact Skill Imitation Learning for Robot-Independent Assembly Programming.” In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4309–4316. IEEE.
  • Scherzinger, Stefan, Arne Roennau, and Rüdiger Dillmann. 2019. “Inverse Kinematics with Forward Dynamics Solvers for Sampled Motion Tracking.” In 2019 19th International Conference on Advanced Robotics (ICAR), 681–687. IEEE.
  • Scherzinger, Stefan, Arne Roennau, and Rüdiger Dillmann. 2020. “Virtual Forward Dynamics Models for Cartesian Robot Control.” arXiv preprint arXiv:2009.11888.
  • Schoettler, Gerrit, Ashvin Nair, Jianlan Luo, Shikhar Bahl, Juan Aparicio Ojea, Eugen Solowjow, and Sergey Levine. 2020. “Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards.” In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE.
  • Schumacher, Marie, Janis Wojtusch, Philipp Beckerle, and Oskar von Stryk. 2019. “An Introductory Review of Active Compliant Control.” Robotics and Autonomous Systems 119:185–200. https://doi.org/10.1016/j.robot.2019.06.009
  • Sharma, Kamal, Varsha Shirwalkar, and Prabir K Pal. 2013. “Intelligent and Environment-Independent Peg-In-Hole Search Strategies.” In 2013 International Conference on Control, Automation, Robotics and Embedded Systems (CARE), 1–6. IEEE.
  • Sutton, Richard S, and Andrew G Barto. 2018. Reinforcement Learning: An Introduction. Cambridge, MA: MIT press.
  • Tsuji, Toshiaki, Chinami Momiki, and Sho Sakaino. 2013. “Stiffness Control of a Pneumatic Rehabilitation Robot for Exercise Therapy with Multiple Stages.” In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1480–1485. IEEE.
  • Tsuji, T., T. Yokoo, Y. Hasegawa, K. Abe, Y. Sakurai, and S. Ishii. 2014. “Development of Rehabilitation Support Robot with Guidance Control Based on Biarticular Muscle Mechanism.” IEEJ Journal of Industry Applications 3 (4): 350–357. https://doi.org/10.1541/ieejjia.3.350
  • Unten, H., S. Sakaino, and T. Tsuji. 2022. “Peg-in-Hole Using Transient Information of Force Response.” IEEE/ASME Transactions on Mechatronics 28 (3): 1674–1682. https://doi.org/10.1109/TMECH.2022.3224907
  • Xu, L. D., E. L. Xu, and L. Li. 2018. “Industry 4.0: State of the Art and Future Trends.” International Journal of Production Research 56 (8): 2941–2962. https://doi.org/10.1080/00207543.2018.1444806
  • Zhang, X., H. Zhou, J. Liu, Z. Ju, Y. Leng, and C. Yang. 2023. “A Practical PID Variable Stiffness Control and Its Enhancement for Compliant Force-tracking Interactions with Unknown Environments.” Science China Technological Sciences 66 (10): 2882–2896. https://doi.org/10.1007/s11431-022-2436-y
