
Emotional experiences of service robots’ anthropomorphic appearance: a multimodal measurement method

Pages 2039-2057 | Received 03 May 2022, Accepted 14 Feb 2023, Published online: 10 Mar 2023

Abstract

Anthropomorphic appearance is a key factor affecting users’ attitudes and emotions. This research aimed to measure the emotional experience elicited by service robots’ anthropomorphic appearance at three levels (high, moderate, and low) using a multimodal measurement method. Fifty participants’ physiological and eye-tracking data were recorded synchronously while they viewed robot images presented in random order. Afterward, the participants reported their subjective emotional experiences and attitudes towards those robots. The results showed that images of moderately anthropomorphic service robots induced higher pleasure and arousal ratings and yielded significantly larger pupil diameters and faster saccade velocities than did robots with low or high anthropomorphism. Moreover, participants’ facial electromyography, skin conductance, and heart-rate responses were higher when they observed moderately anthropomorphic service robots. An implication of the research is that service robots’ appearance should be designed to be moderately anthropomorphic; too many human-like or machine-like features may disturb users’ positive emotions and attitudes.

Practitioner Summary: This research measured the emotional experience elicited by three types of anthropomorphic service robots in a multimodal measurement experiment. The results showed that moderately anthropomorphic service robots evoked more positive emotions than robots with high or low anthropomorphism. Too many human-like or machine-like features may disturb users’ positive emotions.

Acknowledgments

We thank the editor and anonymous reviewers for their valuable comments and advice.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Natural Science Foundation of Anhui Province [grant number 2208085MG183], the National Natural Science Foundation of China [grant numbers 71701003, 71801002], the Key Project for Natural Science Fund of Colleges in Anhui Province [grant number KJ2021A0502], the Project for Social Science Innovation and Development in Anhui Province [grant number 2021CX075], the National College Students Innovation Training Program [grant number 202210363068], and the Anhui Provincial Research Group of Cognitive Neuroscience [grant number 2022AH010060].
