
Multi-step planning with learned effects of partial action executions

Pages 562–576 | Received 02 Nov 2023, Accepted 17 Mar 2024, Published online: 16 Apr 2024

