Research Article

Behavioral learning of dish rinsing and scrubbing based on interruptive direct teaching considering assistance rate

Received 14 Nov 2023, Accepted 29 Jun 2024, Published online: 05 Aug 2024
