Review

Robots that can hear, understand and talk

Pages 533-564 | Published online: 02 Apr 2012


Read on this site (1)

Johan Bos & Tetsushi Oka. (2007) Meaningful conversation with mobile robots. Advanced Robotics 21:1-2, pages 209-232.

Articles from other publishers (28)

Timothy Applewhite, Vivienne Jia Zhong & Rolf Dornberger. (2021) Novel Bidirectional Multimodal System for Affective Human-Robot Engagement.
Wookey Lee, Jessica Jiwon Seong, Busra Ozlu, Bong Sup Shim, Azizbek Marakhimov & Suan Lee. (2021) Biosignal Sensors and Deep Learning-Based Speech Recognition: A Review. Sensors 21:4, pages 1399.
Schenita Floyd. (2019) Identifying Variables that Improve Communication with Bots.
Antoine Deleforge, Alexander Schmidt & Walter Kellermann. (2019) Multimodal Behavior Analysis in the Wild, pages 27-51.
Sayuri Kohmura, Taro Togawa & Takeshi Otani. (2017) Source separation based on transfer function between microphones and its dispersion.
Joe Crumpton & Cindy L. Bethel. (2015) A Survey of Using Vocal Prosody to Convey Emotion in Robot Speech. International Journal of Social Robotics 8:2, pages 271-285.
Joe Crumpton & Cindy L. Bethel. (2015) Validation of vocal prosody modifications to communicate emotion in robot speech.
Omar Mubin, Joshua Henderson & Christoph Bartneck. (2014) You just do not understand me! Speech Recognition in Human Robot Interaction.
Joe Crumpton & Cindy Bethel. (2014) Conveying emotion in robotic speech: Lessons learned.
Haibin Yan, Marcelo H. Ang Jr. & Aun Neow Poo. (2013) A Survey on Perception Methods for Human–Robot Interaction in Social Robots. International Journal of Social Robotics 6:1, pages 85-119.
Hiroshi Saruwatari & Ryoichi Miyazaki. (2014) Blind Source Separation, pages 291-322.
Omar Mubin, Christoph Bartneck, Loe Feijs, Hanneke Hooft van Huysduynen, Jun Hu & Jerry Muelver. (2012) Improving Speech Recognition with the Robot Interaction Language. Disruptive Science and Technology 1:2, pages 79-88.
Hiroshi Saruwatari, Nobuhisa Hirata, Toshiyuki Hatta, Ryo Wakisaka, Kiyohiro Shikano & Tomoya Takatani. (2011) Semi-blind speech extraction for robot using visual information and noise statistics.
T. Oka, H. Matsumoto & R. Kibayashi. (2011) A multimodal language to communicate with life-supporting robots through a touch screen and a speech interface. Artificial Life and Robotics 16:3, pages 292-296.
Tetsushi Oka, Toyokazu Abe, Kaoru Sugita & Masao Yokota. (2011) User study of a life-supporting humanoid directed in a multimodal language. Artificial Life and Robotics 16:2, pages 224-228.
Ekapol Chuangsuwanich, Scott Cyphers, James Glass & Seth Teller. (2010) Spoken command of large mobile robots in outdoor environments.
Hiroshi Saruwatari, Hiromichi Kawanami & Kiyohiro Shikano. (2010) . Journal of the Robotics Society of Japan 28:1, pages 31-34.
M. Ajallooeian, A. Borji, B. N. Araabi, M. Nili Ahmadabadi & H. Moradi. (2009) Fast hand gesture recognition based on saliency maps: An application to interactive robotic marionette playing.
Yu Takahashi, Hiroshi Saruwatari, Yuki Fujihara, Kentaro Tachibana, Yoshimitsu Mori, Shigeki Miyabe, Kiyohiro Shikano & Akira Tanaka. (2009) Source adaptive blind signal extraction using closed-form ICA for hands-free robot spoken dialogue system.
Clemens Lombriser, Andreas Bulling, Andreas Breitenmoser & Gerhard Tröster. (2009) Speech as a feedback modality for smart objects.
Tetsushi Oka, Toyokazu Abe, Kaoru Sugita & Masao Yokota. (2009) RUNA: a multimodal command language for home robot users. Artificial Life and Robotics 13:2, pages 455-459.
Yu. Takahashi, H. Saruwatari & K. Shikano. (2008) Real-time implementation of blind spatial subtraction array for hands-free robot spoken dialogue system.
Manoj Kumar Mukul, Rajkishore Prasad, M.M. Choudhary & Fumitoshi Matsuno. (2008) Steering of camera by stepper motor towards active speaker using microphone array.
Rajkishore Prasad, Takuji Koike & Fumitoshi Matsuno. (2008) Speech signal captured by PVDF sensor.
Oscar Reinoso, César Fernández & Ramón Ñeco. (2007) Advances in Telerobotics, pages 107-120.
Y. Ohashi, T. Nishikawa, H. Saruwatari, A. Lee & K. Shikano. (2005) Noise-robust hands-free speech recognition based on spatial subtraction array and known noise superimposition.
T. Takatani, S. Ukai, T. Nishikawa, H. Saruwatari & K. Shikano. (2005) Blind sound scene decomposition for robot audition using SIMO-model-based ICA.
H. Saruwatari, Y. Mori, T. Takatani, S. Ukai, K. Shikano, T. Hiekata & T. Morita. (2005) Two-stage blind source separation based on ICA and binary masking for real-time robot audition system.
