Short Papers

Semiotically adaptive cognition: toward the realization of remotely-operated service robots for the new normal symbiotic society

Pages 664–674 | Received 14 Mar 2021, Accepted 28 Apr 2021, Published online: 24 May 2021
