Original Articles

Review of Semantic-Free Utterances in Social Human–Robot Interaction

Selma Yilmazyildiz, Robin Read, Tony Belpaeme & Werner Verhelst
Pages 63-85 | Published online: 05 Jan 2016
 

Abstract

Research on semantic-free utterances (SFUs) is a young and emerging field in social human–robot interaction (HRI) that has been receiving attention over the last decade. SFUs are an auditory means of interaction for machines, composed of vocalizations and sounds without semantic content or dependence on any language, that allow the expression of emotion and intent. SFUs are currently most common in animated films (e.g., R2-D2, WALL-E, Despicable Me), cartoons (e.g., “Teletubbies,” “Morph,” “La Linea”), and computer games (e.g., The Sims), and they hold significant potential for applications in HRI. SFUs fall into four general types: Gibberish Speech (GS), Non-Linguistic Utterances (NLUs), Musical Utterances (MUs), and Paralinguistic Utterances (PUs). By introducing the concept of SFUs and bringing together multiple sets of studies in social HRI that have never been analyzed jointly before, this article addresses the need for a comprehensive study of the existing SFU literature. It outlines the current grand challenges and open questions and provides guidelines for future researchers considering the use of SFUs in social HRI.

View correction statement:
Corrigendum

Notes

1 The Blizzard Challenge was developed to better understand and compare research techniques for building corpus-based speech synthesizers on the same data (http://www.synsig.org/index.php/Blizzard_Challenge).

Additional information

Notes on contributors

Selma Yilmazyildiz

Selma Yilmazyildiz is a PhD candidate at the Digital Speech and Audio Processing laboratory of the Vrije Universiteit Brussel (VUB). She received her BSc in Electronic Engineering from Uludag University in 2004 and her MSc in Applied Computer Science from VUB. Her research interests include emotion in speech, voice modification, and human–robot interaction.

Robin Read

Robin Read received his PhD in human–robot interaction from Plymouth University in 2014, where he worked on the FP7 ALIZ-E project under the supervision of Professor Tony Belpaeme. He then continued as a postdoc on the same project before moving to work in industry.

Tony Belpaeme

Tony Belpaeme is Professor of Cognitive Systems and Robotics at Plymouth University. He is associated with the Cognition Institute and the Centre for Robotics and Neural Systems. His research interests include human–robot interaction, social systems, and artificial intelligence.

Werner Verhelst

Werner Verhelst obtained his MSc in Electrical Engineering in 1980 and his PhD degree in 1985, both from the Vrije Universiteit Brussel (VUB). He currently heads the Laboratory for Digital Speech and Audio Processing at VUB. His current research interests are in the domains of speech, audio, and audiovisual signal processing.
