Activity scenarios simulation by discovering knowledge through activities of daily living datasets

Pages 87-105 | Received 02 May 2023, Accepted 09 Feb 2024, Published online: 28 Feb 2024
