
A survey on vision guided robotic systems with intelligent control strategies for autonomous tasks

Article: 2050020 | Received 04 Aug 2021, Accepted 01 Mar 2022, Published online: 03 Apr 2022

References

  • Adrian, O. D., Serban, O. A., & Niculae, M. (2017). Some proper methods for optimization in robotics. International Journal of Modeling and Optimization, 7(4), 188–44. https://doi.org/10.7763/IJMO.2017.V7.582
  • Aguiar, A. S., Dos Santos, F. N., Cunha, J. B., Sobreira, H., & Sousa, A. J. (2020). Localization and mapping for robots in agriculture and forestry: A survey. Robotics, 9(4), 1–23. https://doi.org/10.3390/robotics9040097
  • Al Mutib, K., Mattar, E., Alsulaiman, M., Ramdane, H., & ALDean, M. (2014). Neural network vision-based visual servoing and navigation for KSU-IMR mobile robot using epipolar geometry. The International Journal of Soft Computing and Software Engineering, 3(3), 1–8. https://doi.org/10.7321/jscse.v3.n3.136
  • Al-Forati, I. S., & Rashid, A. (2019). Design and implementation of an indoor robot localization system using minimum bounded circle algorithm. In 2019 8th International Conference on Modeling Simulation and Applied Optimization (ICMSAO), Apr 15-17, (pp. 1–6). IEEE: Manama, Bahrain.
  • Al-Jarrah, R., Al-Jarrah, M., & Roth, H. (2018). A novel edge detection algorithm for mobile robot path planning. Journal of Robotics, 2018, 1–12. https://doi.org/10.1155/2018/1969834
  • Ali, I., Durmush, A., Suominen, O., Yli-Hietanen, J., Peltonen, S., Collin, J., & Gotchev, A. (2020). FinnForest dataset: A forest landscape for visual SLAM. Robotics and Autonomous Systems, 132, 1–13. https://doi.org/10.1016/j.robot.2020.103610
  • Ali, H., Gong, D., Wang, M., & Dai, X. (2020). Path planning of mobile robot with improved ant colony algorithm and MDP to produce smooth trajectory in grid-based environment. Frontiers in Neurorobotics, 14(44), 1–13. https://doi.org/10.3389/fnbot.2020.00044
  • Almeida, J. S., Marinho, L. B., Souza, J. M., Assis, E. A., & Reboucas Filho, P. P. (2018). Localization system for autonomous mobile robots using machine learning methods and omnidirectional sonar. IEEE Latin America Transactions, 16(2), 368–374. https://doi.org/10.1109/TLA.2018.8327388
  • Alom, M. Z., Taha, T. M., Yakopcic, C., Westberg, S., Sidike, P., Nasrin, M. S., Hasan, M., Van Essen, B. C., Awwal, A. A. S., & Asari, V. K. (2019). A state-of-the-art survey on deep learning theory and architectures. Electronics, 8(3), 1–67. https://doi.org/10.3390/electronics8030292
  • Angani, A., Lee, J. W., Talluri, T., Lee, J. Y., & Shin, K. J. (2020). Human and Robotic Fish Interaction Controlled Using Hand Gesture Image Processing. Sensors and Materials, 32(10), 3479–3490. https://doi.org/10.18494/SAM.2020.2925
  • Aranda, M., Lopez-Nicolas, G., & Sagues, C. (2017). Angle-based navigation using the 1D trifocal tensor. In Aranda, M., Lopez-Nicolas, G., & Sagues, C. (Eds.), Control of Multiple Robots Using Vision Sensors (vol. 34, no. 1, pp. 19–51). Springer. https://doi.org/10.1007/978-3-319-57828-6_2
  • Arhin, S., Manandhar, B., & Baba-Adam, H. (2020). Predicting travel times of bus transit in Washington, DC using artificial neural networks. Civil Engineering Journal, 6(11), 2245–2261. https://doi.org/10.28991/cej-2020-03091615
  • Asvadi, A., Garrote, L., Premebida, C., Peixoto, P., & Nunes, U. J. (2017). Multimodal vehicle detection: Fusing 3D LIDAR and color camera data. Pattern Recognition Letters, 115(1), 9–20. https://doi.org/10.1016/j.patrec.2017.09.038
  • Banlue, T., Sooraksa, P., & Noppanakeepong, S. (2014). A practical position-based visual servo design and implementation for automated fault insertion test. International Journal of Control, Automation and Systems, 12(5), 1090–1101. https://doi.org/10.1007/s12555-013-0128-3
  • Basri, H., Syarif, I., Sukaridhoto, S., & Falah, M. F. (2019). Intelligent system for automatic classification of fruit defect using faster region-based convolutional neural network (Faster R-CNN). Jurnal Ilmiah Kursor, 10(1), 1–11. https://doi.org/10.28961/kursor.v10i1.187
  • Bateux, Q., Marchand, E., Leitner, J., Chaumette, F., & Corke, P. (2017). Visual servoing from deep neural networks. arXiv preprint, arXiv:1705.08940. 1–6. https://doi.org/10.48550/arXiv.1705.08940
  • Bateux, Q., Marchand, E., Leitner, J., Chaumette, F., & Corke, P. (2018). Training deep neural networks for visual servoing. In 2018 IEEE international conference on robotics and automation (ICRA), May 21-25, (pp. 3307–3314). IEEE: Brisbane, Australia. https://doi.org/10.1109/ICRA.2018.8461068.
  • Bebis, G., Egbert, D., & Shah, M. (2003). Review of computer vision education. IEEE Transactions on Education, 46(1), 2–21. https://doi.org/10.1109/TE.2002.808280
  • Becerra, H. M., & Sagues, C. (2011). Dynamic pose-estimation from the epipolar geometry for Visual Servoing of mobile robots. In 2011 IEEE International Conference on Robotics and Automation, May 9-13, (pp. 417–422). IEEE: Shanghai, China. https://doi.org/10.1109/ICRA.2011.5979737.
  • Benzaoui, M., Chekireb, H., Tadjine, M., & Boulkroune, A. (2016). Trajectory tracking with obstacle avoidance of redundant manipulator based on fuzzy inference systems. Neurocomputing, 196, 23–30. https://doi.org/10.1016/j.neucom.2016.02.037
  • Bore, N., Jensfelt, P., & Folkesson, J. (2018). Multiple object detection, tracking and long-term dynamics learning in large 3D maps. arXiv preprint, arXiv:1801.09292, 1–13. https://arxiv.org/abs/1801.09292
  • Bovik, A. C. (2009). The Essential Guide to Image Processing. Academic Press.
  • Byambasuren, B. E., Baasanjav, T., Myagmarjav, T., & Baatar, B. (2020). Application of image processing and industrial robot arm for quality assurance process of production. In 2020 IEEE Region 10 Symposium (TENSYMP), Jun 5-7, (pp. 526–530). IEEE: Dhaka, Bangladesh.
  • Calli, B., & Dollar, A. M. (2018). Robust precision manipulation with simple process models using visual servoing techniques with disturbance rejection. IEEE Transactions on Automation Science and Engineering, 16(1), 406–419. https://doi.org/10.1109/TASE.2018.2819661
  • Campbell, S., Mahony, N. O., Carvalho, A., Krpalkova, L., Riordan, D., & Walsh, J. (2020). Path Planning Techniques for Mobile Robots: A Review. In 2020 6th International Conference on Mechatronics and Robotics Engineering (ICMRE), Feb 12-15, (pp. 12–16). IEEE: Barcelona, Spain. https://doi.org/10.1109/ICMRE49073.2020.9065187.
  • Castelli, F., Michieletto, S., Ghidoni, S., & Pagello, E. (2017). A machine learning-based visual servoing approach for fast robot control in industrial setting. International Journal of Advanced Robotic Systems, 14(6), 1–10. https://doi.org/10.1177/1729881417738884
  • Cervera, E., Del Pobil, A. P., Berry, F., & Martinet, P. (2003). Improving image-based visual servoing with three-dimensional features. The International Journal of Robotics Research, 22(10–11), 821–839. https://doi.org/10.1177/027836490302210003
  • Chaumette, F. (1998). Potential problems of stability and convergence in image-based and position-based visual servoing. In D. J. Kriegman, G. D. Hager, & A. S. Morse (Eds.), The confluence of vision and control (Vol. 237, pp. 66–78). Springer.
  • Chaumette, F., & Hutchinson, S. (2008). Visual Servoing and visual tracking. In B. Siciliano & O. Khatib (Eds.), Springer Handbook of Robotics (pp. 563–583). Springer.
  • Chen, X., Jia, Y., & Matsuno, F. (2014). Tracking control of nonholonomic mobile robots with velocity and acceleration constraints. In 2014 American Control Conference, Jun 4-6, (pp. 880–884). IEEE: Portland, Oregon.
  • Chen, J., Jia, B., & Zhang, K. (2017). Trifocal tensor-based adaptive visual trajectory tracking control of mobile robots. IEEE Transactions on Cybernetics, 47(11), 3784–3798. https://doi.org/10.1109/TCYB.2016.2582210
  • Chen, Y., Li, Z., Kong, H., & Ke, F. (2019). Model predictive tracking control of nonholonomic mobile robots with coupled input constraints and unknown dynamics. IEEE Transactions on Industrial Informatics, 15(6), 3196–3205. https://doi.org/10.1109/TII.2018.2874182
  • Chen, B., & Pan, B. (2020). Camera calibration using synthetic random speckle pattern and digital image correlation. Optics and Lasers in Engineering, 126, 1–9. https://doi.org/10.1016/j.optlaseng.2019.105919
  • Chen, B. X., Sahdev, R., & Tsotsos, J. K. (2017). Integrating Stereo Vision with a CNN Tracker for a Person-Following Robot. In M. Liu, H. Chen, & M. Vincze (Eds.), Computer Vision Systems. ICVS 2017. Lecture Notes in Computer Science (Vol. 10528, pp. 300–313). Springer.
  • Chen, J., Wang, Z., & Li, H. (2018). Real-time object segmentation based on convolutional neural network with saliency optimization for picking. Journal of Systems Engineering and Electronics, 29(6), 1300–1307. https://doi.org/10.21629/JSEE.2018.06.17
  • Chesi, G. (2009). Visual Servoing path planning via homogeneous forms and LMI optimizations. IEEE Transactions on Robotics, 25(2), 281–291.https://doi.org/10.1109/TRO.2009.2014131
  • Chesi, G., Mariottini, G. L., Prattichizzo, D., & Vicino, A. (2006). Epipole-based Visual Servoing for mobile robots. Advanced Robotics, 20(2), 255–280. https://doi.org/10.1163/156855306775525820
  • Chopde, S., Patil, M., Shaikh, A., Chavhan, B., & Deshmukh, M. (2017). Developments in the computer vision system, focusing on its applications in quality inspection of fruits and vegetables-A review. Agricultural Reviews, 38(2), 94–102. https://doi.org/10.18805/ag.v38i02.7940
  • Corke, P. I. (1993). Visual control of robot manipulators–a review. Visual Servoing: Real-Time Control of Robot Manipulators Based on Visual Sensory Feedback, 7, 1–31. https://doi.org/10.1142/9789814503709_0001
  • Cruz, Y. J., Rivas, M., Quiza, R., Beruvides, G., & Haber, R. E. (2020). Computer vision system for welding inspection of liquefied petroleum gas pressure vessels based on combined digital image processing and deep learning techniques. Sensors, 20(16), 1–13. https://doi.org/10.3390/s20164505
  • Cuevas, F., Castillo, O., & Cortes, P. (2020). Towards a control strategy based on type-2 fuzzy logic for an autonomous mobile robot. In O. Castillo & P. Melin (Eds.), Hybrid Intelligent Systems in Control, Pattern Recognition and Medicine. Studies in Computational Intelligence (Vol. 827, pp. 301–314). Springer.
  • Cui, X., Lu, C., & Wang, J. (2020). 3D semantic map construction using improved ORB-SLAM2 for mobile robot in edge computing environment. IEEE Access, 8, 67179–67191. https://doi.org/10.1109/ACCESS.2020.2983488
  • D’Emilia, G., & Gasbarro, D. (2017). Review of techniques for 2D camera calibration suitable for industrial vision systems. Journal of Physics: Conference Series, 841(1), 1–7. https://doi.org/10.1088/1742-6596/841/1/012030
  • Dai, X., Long, S., Zhang, Z., & Gong, D. (2019). Mobile robot path planning based on ant colony algorithm with a* heuristic method. Frontiers in Neurorobotics, 13(15), 131–139. https://doi.org/10.3389/fnbot.2019.00015
  • Dan, Z., & Bin, W. (2017). A review on model reference adaptive control of robotic manipulators. Annual Reviews in Control, 44, 1–11. https://doi.org/10.1016/j.arcontrol.2017.02.002
  • Dewi, T., Risma, P., Oktarina, Y., & Muslimin, S. (2018). Visual servoing design and control for agriculture robot; a review. In 2018 International Conference on Electrical Engineering and Computer Science (ICECOS),Oct. 24-25, (pp. 57–62). IEEE: Jakarta, Indonesia. https://doi.org/10.1109/ICECOS.2018.8605209.
  • Dey, S., Reilly, V., Saleemi, I., & Shah, M. (2012). Detection of independently moving objects in non-planar scenes via multi-frame monocular epipolar constraint. In A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, & C. Schmid (Eds.), Computer Vision – ECCV 2012. Lecture Notes in Computer Science (Vol. 7576, pp. 860–873). Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-642-33715-4_62
  • Dolgov, D., Thrun, S., Montemerlo, M., & Diebel, J. (2008). Practical search techniques in path planning for autonomous driving. The Ann Arbor Journal, 1001(48105), 18–80. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.475.6856
  • Doulamis, A., Doulamis, N., Protopapadakis, E., & Voulodimos, A. (2018) Combined convolutional neural networks and fuzzy spectral clustering for real time crack detection in tunnels, In 2018 25th IEEE International Conference on Image Processing (ICIP), Oct 7-10, (pp. 4153–4157). IEEE: Athens, Greece.
  • Dung, T. D., Hossain, D., Kaneko, S. I., & Capi, G. (2019). Multifeature image indexing for robot localization in textureless environments. Robotics, 8(2), 1–11. https://doi.org/10.3390/robotics8020037
  • Fang, Y., Liu, X., & Zhang, X. (2012). Adaptive active visual servoing of nonholonomic mobile robots. IEEE Transactions on Industrial Electronics, 59(1), 486–497. https://doi.org/10.1109/TIE.2011.2143380
  • Fareh, R., Baziyad, M., Rahman, M. H., Rabie, T., & Bettayeb, M. (2020). Investigating reduced path planning strategy for differential wheeled mobile robot. Robotica, 38(2), 235–255. https://doi.org/10.1017/S0263574719000572
  • Faria, D. R., Vieira, M., Premebida, C., & Nunes, U. (2015). Probabilistic human daily activity recognition towards robot-assisted living. In Proceedings of the IEEE RO-MAN’15, Aug 31 – Sep 4, (pp. 582–587). IEEE: Kobe, Japan. https://doi.org/10.1109/ROMAN.2015.7333644.
  • Fauadi, M. H., Akmal, S., Ali, M. M., Anuar, N. I., Ramlan, S., Noor, A. Z., & Awang, N. (2018). Intelligent vision-based navigation system for mobile robot: A technological review. Periodicals of Engineering and Natural Sciences (PEN), 6(2), 47–57. https://doi.org/10.21533/pen.v6i2.174
  • Fernandes, R., Premebida, C., Peixoto, P., Wolf, D., & Nunes, U. (2014). Road detection using high resolution LIDAR. In IEEE Vehicle Power and Propulsion Conference, Oct. 27-30, (pp.1–6). Coimbra, Portugal.
  • Ferreira, M., Costa, P., Rocha, L., & Moreira, A. P. (2014). Stereo-based real-time 6-DoF work tool tracking for robot programming by demonstration. The International Journal of Advanced Manufacturing Technology, 85(1–4), 1–13. https://doi.org/10.1007/s00170-014-6026-x
  • Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42(3–4), 143–166. https://doi.org/10.1016/S0921-8890(02)00372-X
  • Freda, L., & Oriolo, G. (2007). Vision-based interception of a moving target with a nonholonomic mobile robot. Robotics and Autonomous Systems, 55(6), 419–432. https://doi.org/10.1016/j.robot.2007.02.001
  • Gans, N. R., & Hutchinson, S. A. (2007). Stable Visual Servoing through hybrid switched-system control. IEEE Transactions on Robotics, 23(3), 530–540. https://doi.org/10.1109/TRO.2007.895067
  • Gans, N. R., Hutchinson, S., & Corke, P. I. (2003). Performance tests for visual servo control systems, with application to partitioned approaches to visual servo control. The International Journal of Robotics Research, 22(10–11), 955–984. https://doi.org/10.1177/027836490302210011
  • Gao, J., Ye, W., Guo, J., & Li, Z. (2020). Deep reinforcement learning for indoor mobile robot path planning. Sensors, 20(19), 1–15. https://doi.org/10.3390/s20195493
  • Garibotto, G., Murrieri, P., Capra, A., De Muro, S., Petillo, U., Flammini, F., Esposito, M., Pragliola, C., Di Leo, G., Lengu, R., & Mazzino, N. (2013). White paper on industrial applications of computer vision and pattern recognition. In International Conference on Image Analysis and Processing, Sep 11-13, (pp. 721–730). Springer: Naples, Italy.
  • Garrido, S., Moreno, L., Blanco, D., & Jurewicz, P. (2011). Path planning for mobile robot navigation using Voronoi diagram and fast marching. International Journal of Robotics and Automation, 2(1), 42–64. https://doi.org/10.1109/IROS.2006.282649
  • Gasparetto, A., Boscariol, P., Lanzutti, A., & Vidoni, R. (2015). Path planning and trajectory planning algorithms: A general overview. Mechanisms and Machine Science, 29, 3–27.https://doi.org/10.1007/978-3-319-14705-5_1
  • Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley.
  • Gong, Z., Tao, B., Qiu, C., Yin, Z., & Ding, H. (2020). Trajectory planning with shortest path for modified uncalibrated visual servoing based on projective homography. IEEE Transactions on Automation Science and Engineering, 17(2), 1076–1083. https://doi.org/10.1109/TASE.2019.2954598
  • Gong, L., Yu, X., & Wang, J. (2021). Curve-Localizability-SVM active localization research for mobile robots in outdoor environments. Applied Sciences, 11(10), 1–14. https://doi.org/10.3390/app11104362
  • Gul, F., Mir, I., Abualigah, L., Sumari, P., & Forestiero, A. (2021). A consolidated review of path planning and optimization techniques: technical perspectives and future directions. Electronics, 10(18), 1–38. https://doi.org/10.3390/electronics10182250
  • Gunen, M. A., Besdok, E., Civicioglu, P., & Atasever, U. H. (2020). Camera calibration by using weighted differential evolution algorithm: A comparative study with ABC, PSO, COBIDE, DE, CS, GWO, TLBO, MVMO, FOA, LSHADE, ZHANG, and BOUQUET. Neural Computing and Applications, 32(23), 17681–17701. https://doi.org/10.1007/s00521-020-04944-1
  • Guo, Y., Wang, X., Xu, Q., Liu, S., Liu, S., & Han, J. (2020). Weather impact on passenger flow of rail transit lines. Civil Engineering Journal, 6(2), 276–284. https://doi.org/10.28991/cej-2020-03091470
  • Halme, R.-J., Lanz, M., Kamarainen, J., Pieters, R., Latokartano, J., & Hietanen, A. (2018). Review of vision-based safety systems for human-robot collaboration. Procedia CIRP, 72(51), 111–116. https://doi.org/10.1016/j.procir.2018.03.043
  • Haralick, R. M., & Shapiro, L. G. (1993). Computer and Robot Vision. Prentice Hall.
  • Hartley, R. I. (1997). Self-calibration of stationary camera. International Journal of Computer Vision, 22(1), 5–23. https://doi.org/10.1023/A:1007957826135
  • Haviland, J., Dayoub, F., & Corke, P. (2020). Control of the final-phase of closed-loop visual grasping using image-based visual servoing. arXiv preprint arXiv:2001.05650, 1–7.
  • He, W., Heng, W., Jiahai, H., Bin, Z., & Long, Q. (2019). Smooth point-to-point trajectory planning for industrial robots with kinematical constraints based on high-order polynomial curve. Mechanism and Machine Theory, 139, 284–293. https://doi.org/10.1016/j.mechmachtheory.2019.05.002
  • He, Z., Wu, C., Zhang, S., & Zhao, X. (2018). Moment-based 2.5-D visual servoing for textureless planar part grasping. IEEE Transactions on Industrial Electronics, 66(10), 7821–7830. https://doi.org/10.1109/TIE.2018.2886783
  • Hu, J., Niu, H., Carrasco, J., Lennox, B., & Arvin, F. (2020). Voronoi-based multi-robot autonomous exploration in unknown environments via deep reinforcement learning. IEEE Transactions on Vehicular Technology, 69(12), 14413–14423. https://doi.org/10.1109/TVT.2020.3034800
  • Hua, J., & Zeng, L. (2021). Hand–Eye calibration algorithm based on an optimized neural network. Actuators, 10(4), 1–11. https://doi.org/10.3390/act10040085
  • Huang, W., Liu, H., & Wan, W. (2020). An online initialization and self-calibration method for stereo visual-inertial odometry. IEEE Transactions on Robotics, 36(4), 1153–1170. https://doi.org/10.1109/TRO.2019.2959161
  • Huang, Y., & Su, J. (2019). Visual servoing of nonholonomic mobile robots: a review and a novel perspective. IEEE Access, 7, 134968–134977. https://doi.org/10.1109/ACCESS.2019.2941962
  • Iriondo, A., Lazkano, E., Susperregi, L., Urain, J., Fernandez, A., & Molina, J. (2019). Pick and place operations in logistics using a mobile manipulator controlled with deep reinforcement learning. Applied Sciences, 9(348), 1–19. https://doi.org/10.3390/app9020348
  • Jackson, W. (2016). Digital Video Editing Fundamentals. Apress.
  • Jansen-van Vuuren, R. D., Shahnewaz, A., & Pandey, A. K. (2020). Image and signal sensors for computing and machine vision: Developments to meet future needs. In Oleg Sergiyenko, Wendy Flores-Fuentes, Paolo Mercorelli (Eds.), Machine Vision and Navigation (pp. 3–32). Springer.
  • Jia, B., & Liu, S. (2015). Switched visual servo control of nonholonomic mobile robots with field-of-view constraints based on homography. Control Theory Technology, 13(4), 311–320. https://doi.org/10.1007/s11768-015-4068-8
  • Jiang, Z., Lefeber, E., & Nijmeijer, H. (2001). Saturated stabilization and tracking of a nonholonomic mobile robot. System and Control Letters, 42(5), 327–332. https://doi.org/10.1016/S0167-6911(00)00104-3
  • Jiang, J., & Ma, Y. (2020). Path planning strategies to optimize accuracy, quality, build time and material use in additive manufacturing: A Review. Micromachines, 11(7), 1–18. https://doi.org/10.3390/mi11070633
  • Jin, Z., Wu, J., Liu, A., Zhang, W. A., & Yu, L. (2021). Policy-based deep reinforcement learning for visual servoing control of mobile robots with visibility constraints. IEEE Transactions on Industrial Electronics, 69(2), 1898–1908. https://doi.org/10.1109/TIE.2021.3057005
  • Joniak, P., & Muszyński, R. (2017). Path following for two HOG wheels mobile robot. Journal of Automation Mobile Robotics and Intelligent Systems, 11(2), 75–81. https://doi.org/10.14313/JAMRIS_2-2017/19
  • Jung, J. H., & Lim, D. G. (2020). Industrial robots, employment growth, and labor cost: A simultaneous equation analysis. Technological Forecasting and Social Change, 159, 1–11. https://doi.org/10.1016/j.techfore.2020.120202
  • Kakani, V., Nguyen, V. H., Kumar, B. P., Kim, H., & Pasupuleti, V. R. (2020). A critical review on computer vision and artificial intelligence in food industry. Journal of Agriculture and Food Research, 2, 1–12. https://doi.org/10.1016/j.jafr.2020.100033
  • Kanellakis, C., & Nikolakopoulos, G. (2017). Survey on computer vision for UAVs: Current developments and trends. Journal of Intelligent and Robotic Systems, 87(1), 141–168. https://doi.org/10.1007/s10846-017-0483-z
  • Kang, M., Chen, H., & Dong, J. (2020). Adaptive visual servoing with an uncalibrated camera using extreme learning machine and Q-learning. Neurocomputing, 402, 384–394. https://doi.org/10.1016/j.neucom.2020.03.049
  • Kazemi, M., Gupta, K., & Mehrandezh, M. (2010). Path-planning for visual servoing: A review and issues. Visual Servoing via Advanced Numerical Methods, 401, 189–207. https://doi.org/10.1007/978-1-84996-089-2_11
  • Ke, F., Li, Z., & Yang, C. (2017). Robust tube-based predictive control for visual servoing of constrained differential-drive mobile robots. IEEE Transactions on Industrial Electronics, 65(4), 3437–3446. https://doi.org/10.1109/TIE.2017.2756595
  • Keshmiri, M., & Xie, W. F. (2016). Image-based Visual Servoing using an optimized trajectory planning technique. IEEE/ASME Transactions on Mechatronics, 22(1), 359–370. https://doi.org/10.1109/TMECH.2016.2602325
  • Keshmiri, M., Xie, W. F., & Mohebbi, A. (2014). Augmented image-based visual servoing of a manipulator using acceleration command. IEEE Transactions on Industrial Electronics, 61(10), 5444–5452. https://doi.org/10.1109/TIE.2014.2300048
  • Lai, C. C., & Su, K. L. (2018). Development of an intelligent mobile robot localization system using Kinect RGB-D mapping and neural network. Computers and Electrical Engineering, 67, 620–628. https://doi.org/10.1016/j.compeleceng.2016.04.018
  • Le, A. V., Kyaw, P. T., Veerajagadheswar, P., Muthugala, M. V., Elara, M. R., Kumar, M., & Nhan, N. H. (2021). Reinforcement learning-based optimal complete water-blasting for autonomous ship hull corrosion cleaning system. Ocean Engineering, 220, 1–16. https://doi.org/10.1016/j.oceaneng.2020.108477
  • Lee, J. Y. (2020). Zero-Shot Calibration of Fisheye Cameras. arXiv preprint arXiv:2011.14607.
  • Lee, S., & Chwa, D. (2021). Dynamic image-based visual servoing of monocular camera mounted omnidirectional mobile robots considering actuators and target motion via fuzzy integral sliding mode control. IEEE Transactions on Fuzzy Systems, 29(7), 2068–2076. https://doi.org/10.1109/TFUZZ.2020.2985931
  • Lee, S., Kim, H., Ko, D., Lee, M., Jung, B., & Seok, S. (2019). Trajectory tracking of end effector on mobile robot with multiple onboard cameras. In 2019 16th International Conference on Ubiquitous Robots (UR),Jun 24-27, (pp. 212–218). IEEE: Jeju, Korea.
  • Lee, D.-M., Yang, S.-H., Lee, S.-R., & Lee, Y.-M. (2004). Development of machine vision system and dimensional analysis of the automobile front-chassis-module. KSME International Journal, 18(12), 2209–2215. https://doi.org/10.1007/BF02990225
  • Lee, K. K., Yu, Y. K., & Wong, K. H. (2017). Recovering camera motion from points and lines in stereo images: A recursive model-less approach using trifocal tensors. In 2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), Jun 26-28, (pp. 83–90). IEEE: Kanazawa, Japan. https://doi.org/10.1109/SNPD.2017.8022704.
  • Li, G. C.-H., & Chang, Y.-M. (2019). Automated visual positioning and precision placement of a workpiece using deep learning. The International Journal of Advanced Manufacturing Technology, 104(9), 4527–4538. https://doi.org/10.1007/s00170-019-04293-x
  • Li, J., Chen, Z., Rao, G., & Xu, J. (2019). Structured light-based visual servoing for robotic pipe welding pose optimization. IEEE Access, 7, 138327–138340. https://doi.org/10.1109/ACCESS.2019.2943248
  • Li, B., Fang, Y., & Zhang, X. (2013). 2D trifocal tensor based visual servo regulation of nonholonomic mobile robots. In 32nd IEEE Chinese Control Conference, Jul 26-28, (pp. 5764–5769). IEEE: Xi’an, China.
  • Li, C., Li, B., Wang, R., & Zhang, X. (2021). A survey on visual servoing for wheeled mobile robots. International Journal of Intelligent Robotics and Applications, 5(2), 203–218. https://doi.org/10.1007/s41315-021-00177-0
  • Li, Q., Nevalainen, P., Queralta, J. P., Heikkonen, J., & Westerlund, T. (2020). Localization in Unstructured Environments: Towards Autonomous Robots in Forests with Delaunay Triangulation. Remote Sensing, 12(11), 1–22. https://doi.org/10.3390/rs12111870
  • Li, T., Qiu, Q., & Zhao, C. (2021). Hybrid Visual Servoing Tracking Control of Uncalibrated Robotic Systems for Dynamic Dwarf Culture Orchards Harvest. In 2021 IEEE International Conference on Development and Learning (ICDL), Aug 23-26, (pp. 1–6). IEEE: Beijing, China.
  • Li, W., & Xiong, R. (2019). Dynamical Obstacle Avoidance of Task-Constrained Mobile Manipulation Using Model Predictive Control. IEEE Access, 7, 88301–88311. https://doi.org/10.1109/ACCESS.2019.2925428
  • Lin, H. (2014). A Fast and Unified Method to Find a Minimum-Jerk Robot Joint Trajectory Using Particle Swarm Optimization. Journal of Intelligent and Robotic Systems, 75(3–4), 379–392. https://doi.org/10.1007/s10846-013-9982-8
  • Lin, C.-M., Tsai, C.-Y., Lai, Y.-C., Li, S.-A., & Wong, C.-C. (2018). Visual Object Recognition and Pose Estimation Based on a Deep Semantic Segmentation Network. IEEE Sensors Journal, 18(22), 9370–9381. https://doi.org/10.1109/JSEN.2018.2870957
  • Lippiello, V., Siciliano, B., & Villani, L. (2007). Position-Based Visual Servoing in Industrial Multirobot Cells Using a Hybrid Camera Configuration. IEEE Transactions on Robotics, 23(1), 73–86. https://doi.org/10.1109/TRO.2006.886832
  • Liu, H., Darabi, H., Banerjee, P., & Liu, J. (2007). Survey of wireless indoor positioning techniques and systems. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 37(6), 1067–1080. https://doi.org/10.1109/TSMCC.2007.905750
  • Long, L., & Dongri, S. (2019). Review of Camera Calibration Algorithms. In Sanjiv K. Bhatia, Shailesh Tiwari, Su Ruidan, Munesh Chandra Trivedi, K. K. Mishra (Eds.), Advances in Computer Communication and Computational Sciences (pp. 723–732). Springer.
  • López-Nicolás, G., Gans, N. R., Bhattacharya, S., Sagues, C., Guerrero, J. J., & Hutchinson, S. (2010). Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 40(4), 1115–1127. https://doi.org/10.1109/TSMCB.2009.2034977
  • Luo, L. P., Yuan, C., Yan, R. J., Yuan, Q., Wu, J., Shin, K. S., & Han, C. S. (2015). Trajectory planning for energy minimization of industry robotic manipulators using the Lagrange interpolation method. International Journal of Precision Engineering and Manufacturing, 16(5), 911–917. https://doi.org/10.1007/s12541-015-0119-9
  • Malamas, E., Petrakis, E., Zervakis, M., Petit, L., & Legat, J.-D. (2003). A survey on industrial vision systems, applications, and tools. Image and Vision Computing, 21(2), 171–188. https://doi.org/10.1016/S0262-8856(02)00152-X
  • Malis, E., Chaumette, F., & Boudet, S. (1999). 2 1/2 D Visual Servoing. IEEE Transactions on Robotics and Automation, 15(2), 238–250. https://doi.org/10.1109/70.760345
  • Man, L., Yong, D., Xianli, H., & Maolin, Y. (2020). Image positioning and identification method and system for coal and gangue sorting robot. International Journal of Coal Preparation and Utilization, 14(11), 1–19. https://doi.org/10.1080/19392699.2020.1760855
  • Manduchi, R., Castano, A., Talukder, A., & Matthies, L. (2005). Obstacle detection and terrain classification for autonomous off-road navigation. Autonomous Robots, 18(1), 81–102. https://doi.org/10.1023/B:AURO.0000047286.62481.1d
  • Martinez, E. A., Caron, G., Pegard, C., & Alabazares, D. L. (2020). Photometric path planning for vision-based navigation. In 2020 IEEE International Conference on Robotics and Automation (ICRA), 31 May to 31 August, (pp. 9007–9013). IEEE: Paris, France. https://doi.org/10.1109/ICRA40945.2020
  • Mezouar, Y., & Chaumette, F. (2002). Path planning for robust image-based control. IEEE Transactions on Robotics and Automation, 18(4), 534–549. https://doi.org/10.1109/TRA.2002.802218
  • Michalek, M., & Kozlowski, K. (2010). Vector-field-orientation feedback control method for a differentially driven vehicle. IEEE Transactions on Control Systems Technology, 18(1), 45–65. https://doi.org/10.1109/TCST.2008.2010406
  • Minniti, M. V., Farshidian, F., Grandia, R., & Hutter, M. (2019). Whole-body mpc for a dynamically stable mobile manipulator. IEEE Robotics and Automation Letters, 4(4), 3687–3694. https://doi.org/10.1109/LRA.2019.2927955
  • Miroslav, S., Karel, K., Miroslav, K., Viktor, K., & Libor, P. (2019). Visual Data Simulation for Deep Learning in Robot Manipulation Tasks. In MESAS 2018: Modelling and Simulation for Autonomous Systems (pp. 402–411). Springer.
  • Mittal, R. K., & Nagrath, I. J. (2003). Robotics and Control. Tata McGraw-Hill.
  • Morar, A., Moldoveanu, A., Mocanu, I., Moldoveanu, F., Radoi, I. E., Asavei, V., Gradinaru, A., & Butean, A. (2020). A comprehensive survey of indoor localization methods based on computer vision. Sensors, 20(9), 1–36. https://doi.org/10.3390/s20092641
  • Mouli, C. C., & Raju, K. N. (2013). A review on wireless embedded system for vision guided robot arm for object sorting. International Journal of Scientific & Engineering Research, 4(7), 1–6. https://www.ijser.org/onlineResearchPaperViewer.aspx?A-Review-on-Wireless-Embedded-System-for-Vision-Guided-Robot-Arm-for-Object-Sorting.pdf
  • Mustafa, C. B., & Omur, A. (2020). Performing predefined tasks using the human–robot interaction on speech recognition for an industrial robot. Engineering Applications of Artificial Intelligence, 95, 1–13. https://doi.org/10.1016/j.engappai.2020.103903
  • Nadhir Ab Wahab, M., Nefti-Meziani, S., & Atyabi, A. (2020). A comparative review on mobile robot path planning: Classical or meta-heuristic methods? Annual Reviews in Control, 50, 233–252. https://doi.org/10.1016/j.arcontrol.2020.10.001
  • Nandini, V., Deepak Vishal, R., Arun Prakash, C., & Aishwarya, S. (2016). A Review on Applications of Machine Vision Systems in Industries. Indian Journal of Science and Technology, 9(48), 1–5. https://doi.org/10.17485/ijst/2016/v9i48/108433
  • Nematollahi, E., & Janabi-Sharifi, F. (2009). Generalizations to control laws of image-based visual servoing. International Journal of Optomechatronics, 3(3), 167–186. https://doi.org/10.1080/15599610903144161
  • Neto, A. A., Macharet, D. G., da Silva Campos, V. C., & Campos, M. F. M. (2009). Adaptive complementary filtering algorithm for mobile robot localization. Journal of the Brazilian Computer Society, 15(3), 19–31. https://doi.org/10.1007/BF03194503
  • Nuzzi, C., Pasinetti, S., Lancini, M., Docchio, F., & Sansoni, G. (2019). Deep learning-based hand gesture recognition for collaborative robots. IEEE Instrumentation and Measurement Magazine, 22(2), 44–51. https://doi.org/10.1109/MIM.2019.8674634
  • Palmieri, G., Palpacelli, M., Battistelli, M., & Callegari, M. (2012). A comparison between position-based and image-based dynamic visual servoings in the control of a translating parallel manipulator. Journal of Robotics, 2012, 1–11. https://doi.org/10.1155/2012/103954
  • Pan, X., Wu, J., Li, Z., Yang, J., Zhang, C., Deng, C., Yi, Z., Gao, Q., Yu, M., Zhang, Z., Liu, L., Chi, F., & Bai, P. (2021). Self-calibration for linear structured light 3D measurement system based on quantum genetic algorithm and feature matching. Optik, 225, 1–10. https://doi.org/10.1016/j.ijleo.2020.165749
  • Pandey, A., Pandey, S., & Parhi, D. R. (2017). Mobile Robot Navigation and Obstacle Avoidance Techniques: A Review. International Robotics & Automation Journal, 2(3), 1–12. https://doi.org/10.15406/iratj.2017.02.00023
  • Pandya, H., Gaud, A., Kumar, G., & Krishna, K. M. (2019). Instance invariant visual servoing framework for part‐aware autonomous vehicle inspection using MAVs. Journal of Field Robotics, 36(5), 892–918. https://doi.org/10.1002/rob.21859
  • Parikh, D. (2020, April 27). Computer Vision Tools and Libraries. The Research Nest. https://medium.com/the-research-nest/computer-vision-tools-and-libraries-52bb34023bdf
  • Patel, P., & Bhavsar, B. (2021). Object Detection and Identification. International Journal of Advanced Trends in Computer Science and Engineering, 10(3), 1611–1618. https://doi.org/10.30534/ijatcse/2021/181032021
  • Patil, M. (2017). Robot Manipulator Control Using PLC with Position Based and Image Based Algorithm. International Journal of Swarm Intelligence and Evolutionary Computation, 6(1), 1–8. https://doi.org/10.4172/2090-4908.1000154
  • Patle, B. K., Ganesh Babu, L., Pandey, A., Parhi, D. R. K., & Jagadeesh, A. (2019). A review: On path planning strategies for navigation of mobile robot. Defence Technology, 15(4), 582–606. https://doi.org/10.1016/j.dt.2019.04.011
  • Paya, L., Gil, A., & Reinoso, O. (2017). A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors. Journal of Sensors, 2017, 1–20. https://doi.org/10.1155/2017/3497650
  • Pedersen, O. M., Misimi, E., & Chaumette, F. (2020). Grasping unknown objects by coupling deep reinforcement learning, generative adversarial networks, and visual servoing. In 2020 IEEE International Conference on Robotics and Automation (ICRA), 31 May-31 Aug, (pp. 5655–5662). IEEE: Paris, France. https://doi.org/10.1109/ICRA40945.2020.9197196
  • Peng, Y.-C., Jivani, D., Radke, R. J., & Wen, J. (2020). Comparing Position- and Image-Based Visual Servoing for Robotic Assembly of Large Structures. In 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug 20-21, (pp. 1608–1613). IEEE: Hong Kong, China. https://doi.org/10.1109/CASE48305.2020.9217028
  • Peters, J., Tedrake, R., Roy, N., & Morimoto, J. (2011). Robot learning. In C. Sammut & G. I. Webb (Eds.), Encyclopedia of Machine Learning (pp. 865–869). Springer.
  • Pirahansiah, F., Abdullah, S. N., & Sahran, S. (2015). Camera calibration for multi-modal robot vision based on image quality assessment. In 2015 10th Asian Control Conference (ASCC),May 31-Jun 3, (pp. 1–6). IEEE: Kota Kinabalu, Malaysia.
  • Prasad, V., Das, D., & Bhowmick, B. (2018). Epipolar Geometry based Learning of Multi-view Depth and Ego-Motion from Monocular Sequences. arXiv preprint arXiv:1812.11922, 1–10. https://doi.org/10.1145/3293353.3293427
  • Premebida, C., Ambrus, R., & Marton, Z. C. (2018). Intelligent robotic perception systems. In Efren Gorrostieta Hurtado (Ed.) Applications of Mobile Robots. IntechOpen. https://doi.org/10.5772/intechopen.79742
  • Premebida, C., & Nunes, U. (2013). Fusing LIDAR, camera and semantic information: A context-based approach for pedestrian detection. The International Journal of Robotics Research, 32(3), 371–384. https://doi.org/10.1177/0278364912470012
  • Qingyang, C., Zhenping, S., Daxue, L., Yuqiang, F., & Xiaohui, L. (2012). Local path planning for an unmanned ground vehicle based on SVM. International Journal of Advanced Robotic Systems, 9(6), 1–13. https://doi.org/10.5772/54130
  • Qiu, Z., Hu, S., & Liang, X. (2019). Model predictive control for uncalibrated and constrained image-based visual servoing without joint velocity measurements. IEEE Access, 7, 73540–73554. https://doi.org/10.1109/ACCESS.2019.2920389
  • Qiu, Y., Li, B., Shi, W., & Chen, Y. (2017). Homography-based visual servo tracking control of wheeled mobile robots with simultaneous depth identification. In International Conference on Neural Information Processing, (pp. 324–333). Springer: Guangzhou, China. https://doi.org/10.1007/978-3-319-70136-3_35.
  • Rastegarpanah, A., Aflakian, A., & Stolkin, R. (2021). Optimized hybrid decoupled visual servoing with supervised learning. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 236(2), 338–354. https://doi.org/10.1177/09596518211028379
  • Ray, P. P. (2016). Internet of robotic things: Concept, technologies, and challenges. IEEE Access, 4, 9489–9500. https://doi.org/10.1109/ACCESS.2017.2647747
  • Razafimandimby, C., Loscri, V., & Vegni, A. M. (2018). Towards efficient deployment in Internet of Robotic Things. In Raffaele Gravina, Carlos E. Palau, Marco Manso, Antonio Liotta, Giancarlo Fortino (Eds.), Integration, interconnection, and interoperability of IoT systems (pp. 21–37). Springer.
  • Reis, R., Santos, F. N., & Santos, L. (2019). Forest Robot and Datasets for Biomass Collection. In Manuel F. Silva, Jose Luis Lima, Luis Paulo Reis, Alberto Sanfeliu, Danilo Tardioli (Eds.), Advances in Intelligent Systems and Computing (pp. 152–163). Springer Nature.
  • Ribeiro, E. G., de Queiroz Mendes, R., & Grassi, V., Jr. (2021). Real-time deep learning approach to visual servo control and grasp detection for autonomous robotic manipulation. Robotics and Autonomous Systems, 139, 1–24. https://doi.org/10.1016/j.robot.2021.103757
  • Hartley, R., Gupta, R., & Chang, T. (1992). Stereo from uncalibrated cameras. In Proceedings of Computer Vision and Pattern Recognition, Jun 15-18, (pp. 761–764). IEEE: Champaign, IL, USA.
  • Rigoberto, J.-S., Juan, Z., & Victor, H. D.-R. (2020). Distorted pinhole camera modeling and calibration. Applied Optics, 59(36), 11310–11318. https://doi.org/10.1364/AO.412159
  • Rout, A., Deepak, B. B. V. L., & Biswal, B. B. (2019). Advances in weld seam tracking techniques for robotic welding: A review. Robotics and Computer Integrated Manufacturing, 56, 12–37. https://doi.org/10.1016/j.rcim.2018.08.003
  • Saarinen, J., Andreasson, H., & Lilienthal, A. J. (2012) Independent Markov chain occupancy grid maps for representation of dynamic environment. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), May 14-19, (pp. 3489–3495). IEEE: Vilamoura-Algarve, Portugal.
  • Sampedro, C., Rodriguez-Ramos, A., Gil, I., Mejias, L., & Campoy, P. (2018). Image-based visual servoing controller for multirotor aerial robots using deep reinforcement learning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 1-5, pp. 979–986, IEEE: Madrid, Spain. https://doi.org/10.1109/IROS.2018.8594249
  • Santos, L. C., Aguiar, A. S., Santos, F. N., Valente, A., & Petry, M. (2020). Occupancy grid and topological maps extraction from satellite images for path planning in agricultural robots. Robotics, 9(4), 1–22. https://doi.org/10.3390/robotics9040077
  • Santos, L. C., Santos, F. N., Solteiro Pires, E. J., Valente, A., Costa, P., & Magalhaes, S. (2020). Path Planning for ground robots in agriculture: A short review. In 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Apr 15-17, (pp. 61–66). IEEE: Azores, Portugal. https://doi.org/10.1109/ICARSC49921.2020.9096177.
  • Sarabu, A., & Santra, A. K. (2021). Human Action recognition in videos using convolution long short-term memory network with spatio-temporal networks. Emerging Science Journal, 5(1), 25–33. https://doi.org/10.28991/esj-2021-01254
  • Sarapura, A., Roberti, F., Gimenez, J., Patino, D., & Carelli, R. (2018) Adaptive visual servoing control of a manipulator with uncertainties in vision and dynamics. In 2018 Argentine Conference on Automatic Control (AADECA), Nov 7-9, (pp. 1–6). Buenos Aires, Argentina. https://doi.org/10.23919/AADECA44134.2018
  • Sarikaya, D., Corso, J. J., & Guru, K. A. (2017). Detection and localization of robotic tools in robot-assisted surgery videos using deep neural networks for region proposal and detection. IEEE Transactions on Medical Imaging, 36(7), 1542–1549. https://doi.org/10.1109/TMI.2017.2665671
  • Saxena, A., Pandya, H., Kumar, G., Gaud, A., & Krishna, K. M. (2017). Exploring convolutional networks for end-to-end visual servoing. In 2017 IEEE International Conference on Robotics and Automation (ICRA), 29 May-3 Jun, (pp. 3817–3823). Singapore: IEEE. https://doi.org/10.1109/ICRA.2017.7989442.
  • Senapati, M., Srinivas, J., & Balakrishnan, V. (2014). Visual servoing and motion control of a robotic device in inspection and replacement tasks. International Journal of Engineering and Technical Research (IJETR), 2(4), 37–43. https://www.academia.edu/7725817/Visual_Servoing_and_Motion_control_of_a_robotic_device_in_inspection_and_replacement_tasks
  • Shademan, A., Farahmand, A., & Jagersand, M.(2010). Robust Jacobian estimation for uncalibrated Visual Servoing. In 2010 IEEE International Conference on Robotics and Automation, Nov 7-9, (pp. 5564–5569). Anchorage, AK, USA: IEEE. https://doi.org/10.1109/ROBOT.2010.5509911
  • Sharma, R., Shukla, S., Behera, L., & Subramanian, V. (2020). Position-based visual servoing of a mobile robot with an automatic extrinsic calibration scheme. Robotica, 38(5), 831–844. https://doi.org/10.1017/S0263574719001115
  • Shi, H., Chen, J., Pan, W., Hwang, K., & Cho, Y. (2019, September). Collision Avoidance for Redundant Robots in Position-Based Visual Servoing. IEEE Systems Journal, 13(3), 3479–3489. https://doi.org/10.1109/JSYST.2018.2865503
  • Shi, L., Copot, C., & Vanlanduit, S. (2021). A Bayesian deep neural network for safe visual servoing in human–robot interaction. Frontiers in Robotics and AI, 8, 1–13. https://doi.org/10.3389/frobt.2021.687031
  • Shi, H., Xu, M., & Hwang, K. S. (2019). A fuzzy adaptive approach to decoupled visual servoing for a wheeled mobile robot. IEEE Transactions on Fuzzy Systems, 28(12), 3229–3243. https://doi.org/10.1109/TFUZZ.2019.2931219
  • Song, L., Ma, H., Wu, M., Zhou, Z., & Fu, M. (2018). A brief survey of dimension reduction. In International Conference on Intelligent Science and Big Data Engineering, Oct 18-20, (pp. 189–200). Lanzhou, China: Springer.
  • Su, Q., Yu, W., & Liu, J. (2021). Mobile robot path planning based on improved ant colony algorithm. In 2021 Asia-Pacific Conference on Communications Technology and Computer Science (ACCTCS), Jan 22-24, (pp. 220–224). Shenyang, China: IEEE.
  • Sunderhauf, N., Dayoub, F., McMahon, S., Talbot, B., Schulz, R., Corke, P., Wyeth, G., Upcroft, B., & Milford, M. (2016). Place categorization and semantic mapping on a mobile robot. In IEEE International Conference on Robotics and Automation (ICRA), May 16-21, (pp. 5729–5736). Stockholm, Sweden: IEEE.
  • Talebpour, Z., & Martinoli, A. (2019). Adaptive risk-based replanning for human-aware multi-robot task allocation with local perception. IEEE Robotics and Automation Letters, 4(4), 3790–3797. https://doi.org/10.1109/LRA.2019.2926966
  • Tang, S. H., Khaksar, W., Ismail, N. B., & Ariffin, M. K. (2012). A review on robot motion planning approaches. Pertanika Journal of Science & Technology, 20(1), 15–29. http://www.pertanika.upm.edu.my/resources/files/Pertanika%20PAPERS/JST%20Vol.%2020%20(1)%20Jan.%202012/07%20Pg%2015-29.pdf
  • Trigatti, G., Boscariol, P., Scalera, L., Pillan, D., & Gasparetto, A. (2018). A new path-constrained trajectory planning strategy for spray painting robots - rev.1. The International Journal of Advanced Manufacturing Technology, 98(9–12), 2287–2296. https://doi.org/10.1007/s00170-018-2382-2
  • Triggs, B. (1997). Autocalibration and the absolute quadric. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun 17-19, IEEE: San Juan, PR, USA. https://doi.org/10.1109/CVPR.1997.609388
  • Tsai, R. Y. (1987). A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, 3(4), 323–344. https://doi.org/10.1109/JRA.1987.1087109
  • Tzafestas, S. G. (2018). Mobile robot control and navigation: A global overview. Journal of Intelligent and Robotic Systems, 91(1), 35–58. https://doi.org/10.1007/s10846-018-0805-9
  • Verma, J. K., & Ranga, V. (2021). Multi-robot coordination analysis, taxonomy, challenges and future scope. Journal of Intelligent and Robotic Systems, 102(1), 1–36. https://doi.org/10.1007/s10846-021-01378-2
  • Vijayan, A. T., & Ashok, S. (2018). Comparative study on the performance of neural networks for prediction in assisting Visual Servoing. Journal of Intelligent & Fuzzy Systems, 36(1), 675–688. https://doi.org/10.3233/JIFS-171475
  • Wang, X., Fang, G., Wang, K., Xie, X., Lee, K. H., Ho, T. W. L., Kwok, J. L., W, K., & Kwok, K.-W. (2020). Eye-in-hand visual servoing enhanced with sparse strain measurement for soft continuum robots. IEEE Robotics and Automation Letters, 5(2), 2161–2168. https://doi.org/10.1109/LRA.2020.2969953
  • Wang, B., Liu, Z., Li, Q., & Prorok, A. (2020). Mobile robot path planning in dynamic environments through globally guided reinforcement learning. IEEE Robotics and Automation Letters, 5(4), 6932–6939. https://doi.org/10.1109/LRA.2020.3026638
  • Wang, W., Luo, H., Zheng, Q., Wang, C., & Guo, W. (2020). A deep reinforcement learning framework for vehicle detection and pose estimation in 3D point clouds. In International Conference on Artificial Intelligence and Security, (pp. 405–416). Hohhot, China: Springer. https://doi.org/10.1007/978-3-030-57881-7_36
  • Wang, F., Sun, F., Zhang, J., Lin, B., & Li, X. (2019). Unscented particle filter for online total image Jacobian matrix estimation in robot visual servoing. IEEE Access, 7, 92020–92029. https://doi.org/10.1109/ACCESS.2019.2927413
  • Wang, T., Wang, W., & Wei, F. (2020, October 25). An overview of control strategy and trajectory planning of visual servoing. In Recent Featured Applications of Artificial Intelligence Methods, LSMS 2020 and ICSEE 2020 Workshops, (pp. 358–370). Shanghai, China: Springer.
  • Wang, T., Yao, Y., Chen, Y., Zhang, M., Tao, F., & Snoussi, H. (2018). Auto-Sorting System Toward Smart Factory Based on Deep Learning for Image Segmentation. IEEE Sensors Journal, 18(20), 8493–8501. https://doi.org/10.1109/JSEN.2018.2866943
  • Wang, R., Zhang, X., & Fang, Y. (2021). Visual tracking of mobile robots with both velocity and acceleration saturation constraints. Mechanical Systems and Signal Processing, 150, 1–16. https://doi.org/10.1016/j.ymssp.2020.107274
  • Wang, R., Zhang, X., Fang, Y., & Li, B. (2017). Visual Servoing of mobile robots with input saturation at kinematic level. In 2017 International Conference on Image and Graphics, Sep 13-15, (pp. 432–442). Shanghai, China: Springer Cham.
  • Wang, X., Zhang, X., Ren, X., Li, L., Feng, H., He, Y., Chen, H., & Chen, X. (2020). Point cloud 3D parent surface reconstruction and weld seam feature extraction for robotic grinding path planning. The International Journal of Advanced Manufacturing Technology, 107(1), 827–841. https://doi.org/10.1007/s00170-020-04947-1
  • Wang, A. S., Zhang, W., Troniak, D., Liang, J., & Kroemer, O. (2019). Homography-based deep visual servoing methods for planar grasps. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nov 4-8, (pp. 6570–6577). Macau, China: IEEE. https://doi.org/10.1109/IROS40897.2019.8968160
  • Wang, X., Zhao, Y., & Yang, F. (2019). Camera calibration method based on Pascal’s theorem. International Journal of Advanced Robotic Systems, 16(3), 1–10 https://doi.org/10.1177/1729881419846406
  • Wu, P., Cao, Y., He, Y., & Li, D. (2017). Vision-Based Robot Path Planning with Deep Learning. In M. Liu, H. Chen, & M. Vincze (Eds.), Computer Vision Systems. ICVS 2017. Lecture Notes in Computer Science (Vol. 10528, pp. 1–11). Springer.
  • Xiao, H., Li, Z., Yang, C., Zhang, L., Yuan, P., Ding, L., & Wang, T. (2017). Robust stabilization of a wheeled mobile robot using model predictive control based on neurodynamics optimization. IEEE Transactions on Industrial Electronics, 64(1), 505–516. https://doi.org/10.1109/TIE.2016.2606358
  • Xie, J., Shao, Z., Li, Y., Guan, Y., & Tan, J. (2019). Deep reinforcement learning with optimized reward functions for robotic trajectory planning. IEEE Access, 7, 105669–105679. https://doi.org/10.1109/ACCESS.2019.2932257
  • Xin, J., Cheng, H., & Ran, B. (2021). Visual servoing of robot manipulator with weak field-of-view constraints. International Journal of Advanced Robotic Systems, 18(1), 1–11. https://doi.org/10.1177/1729881421990320
  • Xu, S., Chou, W., & Dong, H. (2019). A robust indoor localization system integrating visual localization aided by CNN-based image retrieval with Monte Carlo localization. Sensors, 19(2), 1–19. https://doi.org/10.3390/s19020249
  • Xu, Y., Shmaliy, Y. S., Li, Y., Chen, X., & Guo, H. (2019). Indoor INS/LiDAR-based robot localization with improved robustness using cascaded FIR filter. IEEE Access, 7, 34189–34197. https://doi.org/10.1109/ACCESS.2019.2903435
  • Xu, S., Wang, J., Shou, W., Ngo, T., Sadick, A. M., & Wang, X. (2020). Computer vision techniques in construction: A critical review. Archives of Computational Methods in Engineering, 28(5), 3383–3397. https://doi.org/10.1007/s11831-020-09504-3
  • Xu, Z., Zhou, X., & Li, S. (2019). Deep recurrent neural networks based obstacle avoidance control for redundant manipulators. Frontiers in Neurorobotics, 13(47), 1–13. https://doi.org/10.3389/fnbot.2019.00047
  • Yang, L., Qi, J., Song, D., Xiao, J., Han, J., & Xia, Y. (2016). Survey of robot 3D path planning algorithms. Journal of Control Science and Engineering, 2016, 1–23. https://doi.org/10.1155/2016/7426913
  • Zafar, M. N., & Mohanta, J. C. (2018). Methodology for path planning and optimization of mobile robots: A review. Procedia Computer Science, 133, 141–152. https://doi.org/10.1016/j.procs.2018.07.018
  • Zake, Z., Chaumette, F., Pedemonte, N., & Caro, S. (2020). Robust 2 1/2D visual servoing of a cable-driven parallel robot thanks to trajectory tracking. IEEE Robotics and Automation Letters, 5(2), 660–667. https://doi.org/10.1109/LRA.2020.2965033
  • Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330–1334. https://doi.org/10.1109/34.888718
  • Zhang, K., Chaumette, F., & Chen, J. (2019). Tensor-based 6-DOF visual servoing. The International Journal of Robotics Research, 38(10–11), 1208–1228. https://doi.org/10.1177/0278364919872544
  • Zhang, Y., Li, S., Liao, B., Jin, L., & Zheng, L. (2017). A recurrent neural network approach for visual servoing of manipulators. In 2017 IEEE international conference on information and automation (ICIA), Jul 18-20, (pp. 614–619). https://doi.org/10.1109/ICInfA.2017.8078981.
  • Zhang, H. Y., Lin, W. M., & Chen, A. X. (2018). Path planning for the mobile robot: A review. Symmetry, 10(10), 1–17. https://doi.org/10.3390/sym10100450
  • Zhang, H., Tan, J., Zhao, C., Liang, Z., Liu, L., Zhong, H., & Fan, S. (2020). A fast detection and grasping method for mobile manipulator based on improved faster R-CNN. Industrial Robot: The International Journal of Robotics Research and Application, 47(2), 167–175. https://doi.org/10.1108/IR-07-2019-0150
  • Zheng, W., Wang, H. B., Zhang, Z. M., Li, N., & Yin, P. H. (2019). Multi-layer feed-forward neural network deep learning control with hybrid position and virtual-force algorithm for mobile robot obstacle avoidance. International Journal of Control, Automation and Systems, 17(4), 1007–1018. https://doi.org/10.1007/s12555-018-0140-8