ABSTRACT
Recent studies in human–human interaction (HHI) have revealed the propensity of negative emotional expression to serve affiliating functions that benefit the expresser and also foster cordiality and closeness among interlocutors during conversation. Effort in human–robot interaction (HRI) has likewise been devoted to furnishing robots with the expression of both positive and negative emotions. However, only a few studies have considered body gestures in the context of the dialogue act functions conveyed by emotional utterances. This study aims at furnishing robots with humanlike negative emotional expression, specifically anger-based body gestures prompted by the utterance context. To this end, we adopted a multimodal HHI corpus, and then analyzed and established the predominant gesture types and dialogue acts associated with anger-based utterances in HHI. Based on the analysis results, we implemented these gesture types in an android robot and carried out a subjective evaluation to investigate their effects on the perception of anger expression in utterances with different dialogue act functions. Results showed significant effects of the presence of gesture on the perceived degree of anger. Findings from this study also revealed that the functional content of anger-based utterances plays a significant role in the choice of the gesture accompanying such utterances.
GRAPHICAL ABSTRACT
Acknowledgments
We thank Yuka Nakayama and Megumi Taniguchi for their contributions to the data analysis, and Takashi Minato for assistance with the android hardware setup.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributors
Chinenye Augustine Ajib is currently a graduate student at the Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Japan. He is also an intern at the Hiroshi Ishiguro Laboratory at the Advanced Telecommunications Research Institute (ATR), Kyoto, Japan.
Carlos Toshinori Ishi received the PhD degree in Engineering from The University of Tokyo, Japan, in 2001. He worked on the JST/CREST Expressive Speech Processing Project at ATR from 2002 to 2004. He joined the ATR Intelligent Robotics and Communication Laboratories in 2005, and has been the group leader of the Department of Sound Environment Intelligence at ATR Hiroshi Ishiguro Laboratories since 2013. He joined the RIKEN Robotics Project in 2020.
Ryusuke Mikata received his MS degree from the Graduate School of Engineering Science, Osaka University, Japan, in 2020. He completed an internship at the Hiroshi Ishiguro Laboratory at the Advanced Telecommunications Research Institute (ATR), Kyoto, Japan.
Chaoran Liu received his PhD degree from the Graduate School of Engineering Science, Osaka University, Japan, in 2015. He is currently working at the Advanced Telecommunications Research Institute (ATR). His research interests include sound signal processing and machine learning.
Hiroshi Ishiguro received a D.Eng. in Systems Engineering from Osaka University, Japan, in 1991. He is currently Professor at the Department of Systems Innovation in the Graduate School of Engineering Science at Osaka University (2009–) and Distinguished Professor of Osaka University (2017–). He is also visiting Director (2014–; group leader: 2002–2013) of the Hiroshi Ishiguro Laboratories at the Advanced Telecommunications Research Institute and an ATR fellow. His research interests include sensor networks, interactive robotics, and android science. In 2015, he received the Prize for Science and Technology (Research Category) from the Minister of Education, Culture, Sports, Science and Technology (MEXT).