References
- Bono M, Suzuki N, Katagiri Y. An analysis of participation structure in conversation based on interaction corpus of ubiquitous sensor data. In: Rauterberg M, Menozzi M, Wesson J, editors. Human-Computer Interaction INTERACT'03: IFIP TC13 International Conference on Human-Computer Interaction, 2003, Zurich, Switzerland. Amsterdam, Netherlands: IOS Press; 2003.
- Kawahara T, Hayashi S, Takanashi K. Estimation of interest and comprehension level of audience through multi-modal behaviors in poster conversations. In: Proceedings Interspeech 2013, Lyon. 2013. p. 1882–1885. https://www.isca-speech.org/archive/interspeech_2013/
- Miyahara M, Aoki M, Takiguchi T, et al. Tagging video contents with positive/negative interest based on user's facial expression. In: Proceedings Advances in Multimedia Modeling 2008. Berlin, Heidelberg: Springer-Verlag; 2008. p. 210–219.
- Lopez G, Ide H, Shuzo M, et al. Workplace stress estimation from physiological indices in real situation. In: Cipresso P, Matic A, Lopez G, editors. Pervasive Computing Paradigms for Mental Health. Cham: Springer International Publishing; 2014. p. 13–22.
- Uema Y, Inoue K. JINS MEME algorithm for estimation and tracking of concentration of users. In: Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers. New York (NY): Association for Computing Machinery; 2017. p. 297–300.
- Hamaguchi N, Yamamoto K, Iwai D, et al. Subjective difficulty estimation for interactive learning by sensing vibration sound on desk panel. In: de Ruyter B, Wichert R, Keyson DV, Markopoulos P, Streitz N, Divitini M, Georgantas N, Mana Gomez A, editors. Ambient Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg; 2010. p. 138–147.
- Uchida T, Minato T, Nakamura Y, et al. Female-type android's drive to quickly understand a user's concept of preferences stimulates dialogue satisfaction: dialogue strategies for modeling user's concept of preferences. Int J Soc Robot. 2021;13:1499–1516. doi: 10.1007/s12369-020-00731-z
- Maruyama K, Yamato J, Sugiyama H. Analysis of how motivation for conversation varies depending on the number of dialogue robots and presence/absence of gestures. IEICE Trans Inf & Syst (Japanese Ed.). 2021;J104-D(1):30–41.
- Inoue K, Lala D, Takanashi K, et al. Engagement recognition by a latent character model based on multimodal listener behaviors in spoken dialogue. APSIPA Trans Signal Inf Process. 2018;7:e9. doi: 10.1017/ATSIP.2018.11
- Goffman E. Behavior in public places: notes on the social organization of gatherings. New York: Free Press; 1963.
- Kawamoto M, Shuzo M, Maeda E. Improving user's sense of participation in robot-driven dialogue. Preprint, 2022. arXiv:2210.09746. cs.RO.
- Higashinaka R, Minato T, Sakai K, et al. Dialogue robot competition for the development of an android robot with hospitality. In: 2022 IEEE 11th Global Conference on Consumer Electronics (GCCE), Osaka. 2022. p. 357–360. https://ieeexplore.ieee.org/document/10014078
- Minato T, Higashinaka R, Sakai K, et al. Overview of dialogue robot competition 2022. Preprint, 2022. arXiv:2210.12863. cs.RO.
- Nishio S, Ishiguro H, Hagita N. Geminoid: teleoperated android of an existing person. In: de Pina Filho AC, editor. Humanoid Robots. Rijeka: IntechOpen; 2007. Chapter 20.
- Glas DF, Minato T, Ishi CT, et al. ERICA: the ERATO intelligent conversational android. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). New York (NY): IEEE Press; 2016. p. 22–29.
- Matsumura R, Shiomi M, Hagita N. Does an animation character robot increase sales? In: Proceedings of the 5th International Conference on Human Agent Interaction. New York (NY): Association for Computing Machinery; 2017. p. 479–482.
- Iwamoto T, Baba J, Nakanishi J, et al. Playful recommendation: sales promotion that robots stimulate pleasant feelings instead of product explanation. IEEE Robot Autom Lett. 2022;7(4):11815–11822. doi: 10.1109/LRA.2022.3189149
- Sato W, Namba S, Yang D, et al. An android for emotional interaction: spatiotemporal validation of its facial expressions. Front Psychol. 2022;12:800657. doi: 10.3389/fpsyg.2021.800657
- Collins GR. Improving human-robot interactions in hospitality settings. Int Hosp Rev. 2020;34(1):61–79.
- Yamazaki T, Yoshikawa K, Kawamoto T, et al. Tourist guidance robot based on HyperCLOVA. Preprint, 2022. arXiv:2210.10400. cs.CL.
- Han S, Bang J, Ryu S, et al. Exploiting knowledge base to generate responses for natural language dialog listening agents. In: Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue; September 2015. Prague, Czech Republic: Association for Computational Linguistics; 2015. p. 129–133.
- Johansson M, Hori T, Skantze G, et al. Making turn-taking decisions for an active listening robot for memory training. In: Agah A, Cabibihan JJ, Howard AM, Salichs MA, He H, editors. Social Robotics. Cham: Springer International Publishing; 2016. p. 940–949.
- Schröder M, Bevacqua E, Cowie R, et al. Building autonomous sensitive artificial listeners (Extended abstract). In: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi'an. 2015. p. 456–462. https://www.computer.org/csdl/proceedings/acii/2015/12OmNyugyR5
- Watanabe M, Ogawa K, Ishiguro H. At the department store–can androids be a social entity in the real world? In: Geminoid Studies: Science and Technologies for Humanlike Teleoperated Androids. Singapore: Springer; 2018. p. 423–427. https://link.springer.com/book/10.1007/978-981-10-8702-8
- Wu Y, Huang TS. Vision-based gesture recognition: a review. In: Braffort A, Gherbi R, Gibet S, Teil D, Richardson J, editors. Gesture-Based Communication in Human-Computer Interaction. Berlin, Heidelberg: Springer Berlin Heidelberg; 1999. p. 103–115.
- Mitra S, Acharya T. Gesture recognition: a survey. IEEE Trans Syst Man Cybern C Appl Rev. 2007;37(3):311–324. doi: 10.1109/TSMCC.2007.893280
- Suarez J, Murphy RR. Hand gesture recognition with depth images: a review. In: 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris. 2012. p. 411–417.