Review of Semantic-Free Utterances in Social Human–Robot Interaction

REFERENCES

  • Abelin, Å., & Allwood, J. (2000). Cross linguistic interpretation of emotional prosody. ISCA tutorial and research workshop on speech and emotion, 110–113.
  • Alty, J. L., Rigas, D., & Vickers, P. (2005). Music and speech in auditory interfaces: When is one mode more appropriate than another? International Conference on Auditory Display, 351–357.
  • Arslan, L. M., & Talkin, D. (1998). Speaker transformation using sentence HMM based alignments and detailed prosody modification. Proceedings of the IEEE international conference on acoustics, speech and signal processing, 289–292.
  • Ayesh, A. (2006, September). Structured sound based language for emotional robotic communicative interaction. The 15th IEEE international symposium on robot and human interactive communication, 135–140.
  • Ayesh, A. (2009). Emotionally expressive music based interaction language for social robots. ICGST International Journal on Automation, Robotics and Autonomous Systems, 9, 1–10.
  • Banse, R., & Scherer, K. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70, 614–636.
  • Barker, J., & Cooke, M. (1999). Is the sine-wave speech cocktail party worth attending? Speech Communication, 27(3), 159–174.
  • Bartneck, C., & Okada, M. (2001). eMuu-An emotional robot. Proceedings of 2001 Robofesta.
  • Beck, A., Canamero, L., Hiolle, A., Damiano, L., Cosi, P., Tesser, F., & Sommavilla, G. (2013). Interpretation of emotional body language displayed by a humanoid robot: A case study with children. International Journal of Social Robotics, 5, 325–334.
  • Becker-Asano, C., & Ishiguro, H. (2009). Laughter in social robotics—no laughing matter. International workshop on social intelligence design, 287–300.
  • Becker-Asano, C., Kanda, T., Ishi, C., & Ishiguro, H. (2011). Studying laughter in combination with two humanoid robots. AI & Society, 26, 291–300.
  • Belpaeme, T., Baxter, P., Greeff, J., Kennedy, J., Read, R., Looije, R., … Zelati, M. C. (2013). Child–robot interaction: Perspectives and challenges. In G. Herrmann, J. Pearson Martin, A. Lenz, P. Bremner, A. Spiers, & U. Leonards (Eds.), Social robotics (Vol. 8239, pp. 452–459). New York, NY: Springer.
  • Belpaeme, T., Baxter, P., Read, R., Wood, R., Cuayáhuitl, H., Kiefer, B., … Humbert, R. (2012). Multimodal child–robot interaction: Building social bonds. Journal of Human–Robot Interaction, 1, 33–53. doi:10.5898/JHRI.1.2.Belpaeme
  • Bennett, K. (2004). Linguistic steganography: Survey, analysis, and robustness concerns for hiding information in text (Tech. Rep.). Purdue University.
  • Blattner, M. M., Sumikawa, D. A., & Greenberg, R. M. (1989). Earcons and icons: Their structure and common design principles. Human–Computer Interaction, 4, 11–44.
  • Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25, 49–59.
  • Bramas, B., Kim, Y.-M., & Kwon, D.-S. (2008). Design of a sound system to increase emotional expression impact in human-robot interaction. International conference on control, automation and systems, 2732–2737.
  • Breazeal, C. (2000). Sociable machines: Expressive social exchange between humans and robots (Unpublished doctoral dissertation). Massachusetts Institute of Technology, Cambridge, MA.
  • Breazeal, C. (2002). Designing sociable robots. Cambridge, MA: MIT Press.
  • Breazeal, C., Depalma, N., Orkin, J., & Chernova, S. (2013). Crowdsourcing human–robot interaction: New methods and system evaluation in a public environment. Journal of Human–Robot Interaction, 2, 82–111. doi:10.5898/JHRI.2.1.Breazeal
  • Broekens, J., & Brinkman, W.-P. (2013). AffectButton: A method for reliable and valid affective self-report. International Journal of Human–Computer Studies, 71, 641–667.
  • Brooks, A., & Arkin, R. (2007). Behavioral overlays for non-verbal communication expression on a humanoid robot. Autonomous Robots, 22, 55–74.
  • Brown, S. (2000). The “musilanguage” model of music evolution. In N. L. Wallin & B. Merker (Eds.), The origins of music (pp. 271–300). Cambridge, MA: MIT Press.
  • Burkhardt, F., & Sendlmeier, W. (2000). Verification of acoustical correlates of emotional speech using formant-synthesis. ISCA tutorial and research workshop on speech and emotion, 151–156.
  • Cahn, J. (1990). The generation of affect in synthesized speech. Journal of the American Voice I/O Society, 8, 1–19.
  • Chao, C., & Thomaz, A. (2013). Controlling social dynamics with a parametrized model of floor regulation. Journal of Human–Robot Interaction, 2, 4–29. doi:10.5898/JHRI.2.1.Chao
  • Chomsky, N. (1956). Three models for the description of language. IRE Transactions on Information Theory, 2, 113–124.
  • Connell, J. H. (2014). Extensible grounding of speech for robot instruction. In J. Markowitz (Ed.), Robots that Talk and Listen (pp. 175–199). Berlin/Boston/Munich: De Gruyter.
  • Cowie, R., & Cornelius, R. (2003). Describing the emotional states that are expressed in speech. Speech Communication, 40, 5–32.
  • Deits, R., Tellex, S., Thaker, P., Simeonov, D., Kollar, T., & Roy, N. (2013). Clarifying commands with information-theoretic human–robot dialog. Journal of Human–Robot Interaction, 2, 58–79.
  • Delaunay, F., de Greeff, J., & Belpaeme, T. (2010). A study of a retro-projected robotic face and its effectiveness for gaze reading by humans. Proceedings of the 5th international conference on human–robot interaction, 39–44.
  • Dingler, T., Lindsay, J., & Walker, B. N. (2008). Learnability of sound cues for environmental features: Auditory icons, earcons, spearcons, and speech. Proceedings of the 14th international conference on auditory display, 1–6.
  • D’Mello, S., McCauley, L., & Markham, J. (2005). A mechanism for human–robot interaction through informal voice commands. IEEE international workshop on robot and human interactive communication, 184–189.
  • Dombois, F., & Eckel, G. (2011). Audification. In T. Hermann, A. Hunt, & J. G. Neuhoff (Eds.), The sonification handbook (pp. 301–324). Berlin: Logos.
  • Duffy, B. R. (2003, March). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42, 177–190.
  • Duffy, B. R., & Zawieska, K. (2012). Suspension of disbelief in social robotics. Proceedings of the 21st international symposium on robot and human interactive communication, 484–489.
  • Dutoit, T., & Leich, H. (1993). MBR-PSOLA: Text-to-speech synthesis based on an MBE re-synthesis of the segments database. Speech Communication, 13, 435–440.
  • Ekman, P., & Friesen, W. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17, 124–129. Retrieved from http://psycnet.apa.org/journals/psp/17/2/124/
  • Embgen, S., Luber, M., Becker-Asano, C., Ragni, M., Evers, V., & Arras, K. (2012). Robot-specific social cues in emotional body language. Proceedings of the 21st international symposium on robot and human interactive communication, 1019–1025. doi:10.1109/ROMAN.2012.6343883
  • Esnaola, U., & Smithers, T. (2005, June). MiReLa: A musical robot. Proceedings of IEEE international symposium on computational intelligence in robotics and automation, 67–72.
  • Fridin, M., & Belokopytov, M. (2014). Embodied robot versus virtual agent: Involvement of preschool children in motor task performance. International Journal of Human–Computer Interaction, 30, 459–469.
  • Friend, M. (2000). Developmental changes in sensitivity to vocal paralanguage. Developmental Science, 3, 148–162.
  • Gabsdil, M. (2003). Clarification in spoken dialogue systems. Proceedings of the 2003 AAAI spring symposium. workshop on natural language generation in spoken and written dialogue, 28–35.
  • Gardner, M. (1984). Codes, ciphers and secret writing. Mineola, NY: Dover.
  • Gaver, W. (1986). Auditory icons: Using sound in computer interfaces. Human–Computer Interaction, 2, 167–177.
  • Goodrich, M. A., & Schultz, A. C. (2007). Human–robot interaction: A survey. Foundations and Trends in Human-Computer Interaction, 1, 203–275.
  • Gorostiza, J. F., & Salichs, M. A. (2011). End-user programming of a social robot by dialog. Robotics and Autonomous Systems, 59, 1102–1114.
  • Häring, M., Bee, N., & André, E. (2011). Creation and evaluation of emotion expression with body movement, sound and eye color for humanoid robots. 2011 IEEE RO-MAN, 204–209.
  • Hermann, T., Hunt, A., & Neuhoff, J. G. (Eds.). (2011). The sonification handbook. Berlin: Logos Verlag.
  • Holzapfel, H., & Gieselmann, P. (2004). A way out of dead-end situations in dialogue systems for human–robot interaction. Fourth IEEE/RAS international conference on humanoid robots, 184–195.
  • Imai, M., Hiraki, K., Miyasato, T., Nakatsu, R., & Anzai, Y. (2003). Interaction with robots: Physical constraints on the interpretation of demonstrative pronouns. International Journal of Human–Computer Interaction, 16, 367–384.
  • Jee, E., Jeong, Y., Kim, C., & Kobayashi, H. (2010). Sound design for emotion and intention expression of socially interactive robots. Intelligent Service Robotics, 3, 199–206.
  • Jee, E.-S., Kim, C. H., Park, S.-Y., & Lee, K.-W. (2007, August). Composition of musical sound expressing an emotion of robot based on musical factors. In Proceedings of the 16th international symposium on robot and human interactive communication, 637–641.
  • Jee, E.-S., Park, S.-Y., Kim, C. H., & Kobayashi, H. (2009, September). Composition of musical sound to express robot’s emotion with intensity and synchronized expression with robot’s behavior. Proceedings of the 18th international symposium on robot and human interactive communication, 369–374.
  • Johannsen, G. (2001). Auditory displays in human–machine interfaces of mobile robots for non-linguistic speech communication with humans. Journal of Intelligent and Robotic Systems, 32, 161–169.
  • Johannsen, G. (2002). Auditory display of directions and states for mobile systems. Proceedings of the international conference on auditory display, 98–103.
  • Johannsen, G. (2004, April). Auditory displays in human–machine interfaces. Proceedings of the IEEE, 92, 742–758.
  • Johnson, W. F., Emde, R. N., Scherer, K. R., & Klinnert, M. D. (1986). Recognition of emotion from vocal cues. Archives of General Psychiatry, 43, 280–283.
  • Jung, H., Seon, C.-N., Kim, J. H., Sohn, J. C., Sung, W.-K., & Park, D.-I. (2005). Information extraction for users utterance processing on ubiquitous robot companion. In Natural language processing and information systems (pp. 337–340). New York, NY: Springer.
  • Juslin, P. N., & Laukka, P. (2003, September). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129, 770–814.
  • Knoll, M., Uther, M., & Costall, A. (2009). Effects of low-pass filtering on the judgment of vocal affect in speech directed to infants, adults and foreigners. Speech Communication, 51, 210–216. doi:10.1016/j.specom.2008.08.001
  • Kobayashi, T., & Fujie, S. (2013). Conversational robots: An approach to conversation protocol issues that utilizes the paralinguistic information available in a robot-human setting. Acoustical Science and Technology, 34, 64–72.
  • Komatsu, T. (2005). Toward making humans empathize with artificial agents by means of subtle expressions. First international conference on affective computing and intelligent interaction, 458–465.
  • Komatsu, T., & Kobayashi, K. (2012). Can users live with overconfident or unconfident systems? A comparison of artificial subtle expressions with human-like expression. Proceedings of conference on human factors in computing systems, 1595–1600.
  • Komatsu, T., & Yamada, S. (2007). How appearance of robotic agents affects how people interpret the agents’ attitudes. Proceedings of the international conference on Advances in computer entertainment technology – ACE’ 07, 123.
  • Komatsu, T., & Yamada, S. (2008). How does appearance of agents affect how people interpret the agents’ attitudes: An experimental investigation on expressing the same information from agents having different appearance. IEEE congress on evolutionary computation, 1935–1940.
  • Komatsu, T., & Yamada, S. (2011, February). How does the agents’ appearance affect users’ interpretation of the agents’ attitudes: Experimental investigation on expressing the same artificial sounds from agents with different appearances. International Journal of Human–Computer Interaction, 27, 260–279.
  • Komatsu, T., Yamada, S., Kobayashi, K., Funakoshi, K., & Nakano, M. (2010). Artificial subtle expressions: Intuitive notification methodology of artifacts. Proceedings of the 28th international conference on human factors in computing systems, 1941–1944.
  • Kozima, H., Michalowski, M. P., & Nakagawa, C. (2009, November). Keepon: A playful robot for research, therapy, and entertainment. International Journal of Social Robotics, 1, 3–18.
  • Krosnick, J. A., Holbrook, A. L., Berent, M. K., Carson, R. T., Hanemann, W. M., Kopp, R. J., … Conaway, M. (2002). The impact of “No Opinion” response options on data quality: Non-attitude reduction or an invitation to satisfice? Public Opinion Quarterly, 66, 371–403.
  • Laukka, P. (2005, September). Categorical perception of vocal emotion expressions. Emotion, 5, 277–295.
  • Lee, M. K., Kiesler, S., Forlizzi, J., Srinivasa, S., & Rybski, P. (2010, March). Gracefully mitigating breakdowns in robotic services. Proceedings of the 5th international conference on human–robot interaction, 203–210.
  • Leite, I., Pereira, A., Martinho, C., & Paiva, A. (2008). Are emotional robots more fun to play with? The 17th IEEE international symposium on robot and human interactive communication, 77–82.
  • Libin, A., & Libin, E. (2004, November). Person–robot interactions from the robopsychologists’ point of view: The robotic psychology and robotherapy approach. Proceedings of the IEEE, 92, 1789–1803.
  • Lison, P., & Kruijff, G.-J. (2009, September). Robust processing of situated spoken dialogue. Proceedings of the 32nd annual German conference on advances in artificial intelligence, 241–248.
  • Lohse, M., Rohlfing, K. J., Wrede, B., & Sagerer, G. (2008, May). “Try something else!” When users change their discursive behaviour in human–robot interaction. Proceedings of the international conference on robotics and automation, 3481–3486.
  • McCartney, J. (2002). Rethinking the computer music language: SuperCollider. Computer Music Journal, 26(4), 61–68.
  • Moore, R. K. (2014). Spoken language processing: Time to look outside? In L. Besacier, A.-H. Dediu, & C. Martin-Vide (Eds.), Statistical language and speech processing (pp. 21–36). New York, NY: Springer.
  • Mozos, O. M., Jensfelt, P., Zender, H., Kruijff, G.-J. M., & Burgard, W. (2007). From labels to semantics: An integrated system for conceptual spatial representations of indoor environments for mobile robots. ICRA workshop: Semantic information in robotics, 33–40.
  • Mubin, O., Bartneck, C., & Feijs, L. (2009). What you say is not what you get: Arguing for artificial languages instead of natural languages in human robot speech interaction. The spoken dialogue and human–robot interaction workshop.
  • Mubin, O., Bartneck, C., & Feijs, L. (2010). Towards the design and evaluation of ROILA: a speech recognition friendly artificial language. Proceedings of the 7th international conference on advances in natural language processing, 250–256.
  • Mumm, J., & Mutlu, B. (2011). Human–robot proxemics: Physical and psychological distancing in human–robot interaction. Proceedings of the 6th international conference on human–robot interaction, 331–338.
  • Murray, I. R., & Arnott, J. L. (1995). Implementation and testing of a system for producing emotion-by-rule in synthetic speech. Speech Communication, 16, 369–390.
  • Murray, I. R., & Arnott, J. L. (1996). Synthesizing emotions in speech: is it time to get excited? Proceedings of the fourth international conference on spoken language processing, 1816–1819.
  • Németh, G., Olaszy, G., & Csapó, T. G. (2011). Spemoticons: Text-to-speech based emotional auditory cues. International conference on auditory display, 1–7.
  • Niewiadomski, R., Hofmann, J., Urbain, J., Platt, T., Wagner, J., & Piot, B. (2013). Laugh-aware virtual agent and its impact on user amusement. Proceedings of the 2013 international conference on autonomous agents and multi-agent systems, 619–626.
  • Olaszy, G., Németh, G., Olaszi, P., Kiss, G., Zainkó, C., & Gordos, G. (2000). Profivox — A Hungarian text-to-speech system for telecommunications applications. International Journal of Speech Technology, 3, 201–215.
  • Oudeyer, P.-Y. (2003). The production and recognition of emotions in speech: features and algorithms. International Journal of Human–Computer Studies, 59, 157–183.
  • Paepcke, S., & Takayama, L. (2010, March). Judging a bot by its cover: An experiment on expectation setting for personal robots. Proceedings of the 5th international conference on human–robot interaction, 45–52.
  • Palladino, D. K., & Walker, B. N. (2007). Learning rates for auditory menus enhanced with spearcons versus earcons. Proceedings of the 13th international conference on auditory display, 274–279.
  • Paulmann, S., & Pell, M. (2011). Is there an advantage for recognizing multimodal emotional stimuli? Motivation and Emotion, 35, 192–201.
  • Pell, M. D., Paulmann, S., Dara, C., Alasseri, A., & Kotz, S. A. (2009). Factors in the recognition of vocally expressed emotions: A comparison of four languages. Journal of Phonetics, 37, 417–435.
  • Picard, R. W. (1997). Affective computing. Cambridge, MA: MIT Press.
  • Prendinger, H., Becker, C., & Ishizuka, M. (2006). A study in users’ physiological response to an empathic interface agent. International Journal of Humanoid Robotics, 3, 371–391.
  • Rae, I., Takayama, L., & Mutlu, B. (2013, March). The influence of height in robot-mediated communication. Proceedings of the 8th international conference on human–robot interaction, 1–8.
  • Rastle, K., Harrington, J., & Coltheart, M. (2002). 358,534 nonwords: The ARC nonword database. The Quarterly Journal of Experimental Psychology Section A, 55, 1339–1362.
  • Read, R., & Belpaeme, T. (2010, October). Interpreting non-linguistic utterances by robots: Studying the influence of physical appearance. Proceedings of the 3rd international workshop on affective interaction in natural environments, 65–70.
  • Read, R., & Belpaeme, T. (2012, March). How to use non-linguistic utterances to convey emotion in child–robot interaction. Proceedings of the 7th international conference on human–robot interaction, 219–220.
  • Read, R., & Belpaeme, T. (2013, March). People interpret robotic non-linguistic utterances categorically. Proceedings of the 8th international conference on human–robot interaction, 209–210.
  • Read, R., & Belpaeme, T. (2014). Situational context directs how people affectively interpret robotic non-linguistic utterances. Proceedings of the 9th international conference on human–robot interaction, 41–48.
  • Remez, R., & Rubin, P. (1993). On the intonation of sinusoidal sentences: Contour and pitch height. The Journal of the Acoustical Society of America, 94, 1983–1988.
  • Remez, R., Rubin, P., Pisoni, D., & Carrell, T. (1981). Speech perception without traditional speech cues. Science, 212, 947–950.
  • Ribeiro, T., & Paiva, A. (2012). The illusion of robotic life. Proceedings of the 7th international conference on human–robot interaction, 383–390.
  • Robins, B., Dickerson, P., Stribling, P., & Dautenhahn, K. (2004). Robot-mediated joint attention in children with autism: A case study in robot–human interaction. Interaction Studies, 5, 161–198.
  • Ros Espinoza, R., Nalin, M., Wood, R., Baxter, P., Looije, R., Demiris, Y., & Belpaeme, T. (2011, November). Child–robot interaction in the wild: Advice to the aspiring experimenter. Proceedings of the 13th international conference on multimodal interfaces, 335–342.
  • Saerbeck, M., & Bartneck, C. (2010). Perception of affect elicited by robot motion. Proceedings of the 5th international conference on human–robot interaction, 53–60.
  • Saldien, J., Goris, K., Yilmazyildiz, S., Verhelst, W., & Lefeber, D. (2008). On the design of the huggable robot Probo. Journal of Physical Agents, 3–12.
  • Scherer, K. (1971). Randomized splicing: A note on a simple technique for masking speech content. Journal of Experimental Research in Personality, 5, 155–159.
  • Scherer, K. (1985). Vocal affect signalling: A comparative approach. In J. Rosenblatt, C. Beer, M.-C. Busnel, & P. Slater (Eds.), Advances in the study of behavior (pp. 189–244). New York, NY: Academic Press.
  • Scherer, K. (1986, March). Vocal affect expression: a review and a model for future research. Psychological Bulletin, 99, 143–165.
  • Scherer, K. R. (1994). Affect bursts. In S. van Goozen, N. E. van de Poll, & J. A. Sergeant (Eds.), Emotions: Essays on emotion theory (pp. 161–196). Hillsdale, NJ: Erlbaum.
  • Scherer, K. (1995, October). Expression of emotion in voice and music. Journal of Voice, 9, 235–248.
  • Scherer, K. (2003). Vocal communication of emotion: A review of research paradigms. Speech Communication, 40, 227–256.
  • Scherer, K., & Ekman, P. (1982). Methods of research on vocal communication: Paradigms and parameters. In K. Scherer & P. Ekman (Eds.), Handbook of methods in nonverbal behavior research (pp. 136–198). Cambridge, UK: Cambridge University Press.
  • Scherer, K., Koivumaki, J., & Rosenthal, R. (1972). Minimal cues in the vocal communication of affect: Judging emotions from content-masked speech. Journal of Psycholinguistic Research, 1, 269–285.
  • Scherer, K., & Oshinsky, J. (1977). Cue utilization in emotion attribution from auditory stimuli. Motivation and Emotion, 1, 331–346.
  • Schröder, M. (2001). Emotional speech synthesis: A review. Proceedings of the 7th European conference on speech communication and technology, 2–5.
  • Schröder, M. (2003a). Experimental study of affect bursts. Speech Communication, 40, 99–116.
  • Schröder, M. (2003b). Speech and emotion research: An overview of research frameworks and a dimensional approach to emotional speech synthesis (Unpublished doctoral dissertation). Institute of Phonetics, Saarland University, Saarbrücken, Germany.
  • Schröder, M., Bevacqua, E., Cowie, R., Eyben, F., Gunes, H., Heylen, D., … Wollmer, M. (2012). Building autonomous sensitive artificial listeners. IEEE Transactions on Affective Computing, 3, 165–183.
  • Schröder, M., Burkhardt, F., & Krstulovic, S. (2010). Synthesis of emotional speech. In K. R. Scherer, T. Bänziger, & E. Roesch (Eds.), Blueprint for affective computing (pp. 222–231). Oxford, UK: Oxford University Press.
  • Schuller, B., & Batliner, A. (2014). Computational paralinguistics: Emotion, affect and personality in speech and language processing. New York, NY: Wiley & Sons.
  • Schwenk, M., & Arras, K. (2014). R2-D2 reloaded: A flexible sound synthesis system for sonic human–robot interaction design. Proceedings of the 23rd international symposium on robot and human interactive communication, 161–167.
  • Seo, S. H., Geiskkovitch, D., Nakane, M., King, C., & Young, J. E. (2015). Poor thing! Would you feel sorry for a simulated robot?: A comparison of empathy toward a physical and a simulated robot. Proceedings of the tenth annual ACM/IEEE international conference on human–robot interaction, 125–132.
  • Shiwa, T., Kanda, T., Imai, M., Ishiguro, H., & Hagita, N. (2009, February). How quickly should a communication robot respond? Delaying strategies and habituation effects. International Journal of Social Robotics, 1, 141–155.
  • Silva-Pereyra, J., Conboy, B. T., Klarman, L., & Kuhl, P. K. (2007). Grammatical processing without semantics? An event-related brain potential study of preschoolers using jabberwocky sentences. Journal of Cognitive Neuroscience, 19, 1050–1065.
  • Singh, A., & Young, J. (2012). Animal-inspired human–robot interaction: A robotic tail for communicating state. Proceedings of the 7th international conference on human–robot interaction, 237–238.
  • Snel, J., & Cullen, C. (2013). Judging emotion from low-pass filtered naturalistic emotional speech. Affective computing and intelligent interaction, 336–342.
  • Steels, L. (2003). Evolving grounded communication for robots. Trends in Cognitive Sciences, 7, 308–312.
  • Steels, L., Kaplan, F., McIntyre, A., & Van Looveren, J. (2002). Crucial factors in the origins of word-meaning. In A. Wray (Ed.), The transition to language (pp. 252–271). Oxford, UK: Oxford University Press. Retrieved from http://groups.lis.illinois.edu/amag/langev/paper/steels02crucialFactors.html
  • Takayama, L., & Pantofaru, C. (2009, October). Influences on proxemic behaviors in human–robot interaction. Proceedings of the international conference on intelligent robots and systems, 5495–5502.
  • Teshigawara, M., Amir, N., Amir, O., Wlosko, E. M., & Avivi, M. (2007). Effects of random splicing on listeners’ perceptions. 16th international congress of phonetic sciences, 2101–2104.
  • Theobalt, C., Bos, J., Chapman, T., Espinosa-Romero, A., Fraser, M., Hayes, G., … Reeve, R. (2002). Talking to Godot: Dialogue with a mobile robot. IEEE/RSJ international conference on intelligent robots and systems, 1338–1343.
  • Tickle, A. (2000). English and Japanese speakers’ emotion vocalizations and recognition: a comparison highlighting vowel quality. ISCA workshop on speech and emotion, 157–183.
  • Trouvain, J., & Schröder, M. (2004). How (not) to add laughter to synthetic speech. In E. André, L. Dybkjaer, W. Minker, & P. Heisterkamp (Eds.), Affective dialogue systems (Vol. 3068, pp. 229–232). Berlin, Germany: Springer.
  • Trovato, G., Zecca, M., Kishi, T., Endo, N., Hashimoto, K., & Takanishi, A. (2013). Generation of humanoid robot’s facial expressions for context-aware communication. International Journal of Humanoid Robotics, 10, 1350013-1–1350013-23.
  • Van Tassel, D. (1969). Cryptographic techniques for computers. Proceedings of the May 14–16, 1969, spring joint computer conference, 367–372.
  • Vatsa, A., Mohan, T., & Vatsa, S. (2012). Novel cipher technique using substitution method. International Journal of Information and Network Security, 1, 313–320.
  • Vazquez, M., Steinfeld, A., Hudson, S. E., & Forlizzi, J. (2014). Spatial and other social engagement cues in a child–robot interaction: Effects of a sidekick. Proceedings of the 9th international conference on human–robot interaction, 391–398.
  • Vickers, P., & Alty, J. L. (2002). Using music to communicate computing information. Interacting with Computers, 14, 435–456.
  • Walker, B. N., Nance, A., & Lindsay, J. (2006). Spearcons: Speech-based earcons improve navigation performance in auditory menus. Proceedings of the international conference on auditory display, 63–68.
  • Walters, M. L., Syrdal, D. S., Dautenhahn, K., te Boekhorst, R., & Koay, K. L. (2007). Avoiding the uncanny valley: Robot appearance, personality and consistency of behaviour in an attention-seeking home scenario for a robot companion. Autonomous Robots, 24, 159–178.
  • Wang, W., Athanasopoulos, G., Yilmazyildiz, S., Patsis, G., Enescu, V., Sahli, H., … Canamero, L. (2014, September). Natural emotion elicitation for emotion modeling in child–robot interactions. 4th workshop on child–computer interaction.
  • Ward, N. (1996). Using prosodic clues to decide when to produce back-channel utterances. Fourth international conference on spoken language processing, 1728–1731.
  • Yilmazyildiz, S. (2006). Communication of emotions for e-creatures (Unpublished master’s thesis). Vrije Universiteit Brussel, Belgium.
  • Yilmazyildiz, S., Athanasopoulos, G., Patsis, G., Wang, W., Oveneke, M. C., Latacz, L., … Lefeber, D. (2013). Voice modification for wizard-of-oz experiments in robot–child interaction. Proceedings of the workshop on affective social speech signals.
  • Yilmazyildiz, S., Henderickx, D., Vanderborght, B., Verhelst, W., Soetens, E., & Lefeber, D. (2011, October). EMOGIB: Emotional gibberish speech database for affective human–robot interaction. Proceedings of the international conference on affective computing and intelligent interaction, 163–172.
  • Yilmazyildiz, S., Henderickx, D., Vanderborght, B., Verhelst, W., Soetens, E., & Lefeber, D. (2013). Multi-modal emotion expression for affective human–robot interaction. Proceedings of the workshop on affective social speech signals.
  • Yilmazyildiz, S., Latacz, L., Mattheyses, W., & Verhelst, W. (2010, September). Expressive gibberish speech synthesis for affective human–computer interaction. Proceedings of the 13th international conference on text, speech and dialogue, 584–590.
  • Yilmazyildiz, S., Mattheyses, W., Patsis, Y., & Verhelst, W. (2006). Expressive speech recognition and synthesis as enabling technologies for affective robot–child communication. Advances in Multimedia Information Processing—PCM 2006, Lecture Notes in Computer Science, 4261, 1–8.
