Original Articles

How contingent should a lifelike robot be? The relationship between contingency and complexity

Pages 143-162 | Received 02 Feb 2006, Published online: 05 Jun 2007

Abstract

We believe that lifelikeness is important in enabling a communication robot to interact naturally with people. In this research, the relationship between contingency and complexity in a lifelike robot was investigated. We developed a robot control system that allows contingency and complexity in interaction to be controlled experimentally, by combining a humanoid robot with a motion-capturing system. Two independent experiments were conducted with different levels of interaction complexity: simple interaction and complex interaction. The experiments in the simple interaction situation indicated that subjects felt that the robot behaved autonomously when contingency was low, and that there was no significant relationship between contingency and a lifelike impression. In contrast, the experiments in the complex interaction situation showed that the robot gave the subjects the impression of being more autonomous and lifelike when contingency was high.

1 Introduction

Several humanoid robots have been developed recently (Nakadai et al. Citation2001, Kidd and Breazeal Citation2004, Kanda et al. Citation2004, Trafton et al. Citation2005). In addition, there is work underway to design android robots that have a very human-like appearance (Goetz et al. Citation2003, Walters et al. Citation2005). Such developments are leading to the emergence of research into ‘android science’ (Ishiguro Citation2005). Android science is based on the premise that an android's human-like appearance would enable us to engage in natural interaction with it. At the same time, the question ‘What is human?’ can be investigated by implementing such an interaction capability in a human-like robot. Interaction is considered to be a bidirectional process in which one subject affects the other through verbal and/or non-verbal modalities. There have been several studies of using human-like motion in order to make interaction with a robot natural, such as the robot blinking, mouth movement when it is speaking, and a combination of natural arm and head movements (Sakamoto et al. Citation2005). However, it is not yet clear what kind of behaviour is suitable for natural interaction with people.

Our long-term goal is to realize a ‘communication robot’ that is capable of communicating with humans and supporting our daily activities. We consider ‘lifelikeness’ to be important for a communication robot (Yamaoka et al. Citation2005), but existing robots lack this particular quality. Since humans usually interact more naturally with animate beings than inanimate ones (Rakison and Poulin-Dubois Citation2001), the lifelikeness that communication robots need is not only the lifelikeness of motion, such as biomechanical motion (Hirai and Hiraki Citation2005), but also the lifelikeness of behaviour toward a partner. We believe that a robot with such lifelike behaviour will enable us to interact with it naturally.

We particularly focus on the role of contingency and complexity in lifelikeness. In psychology, the word ‘contingency’ is defined and commonly used as ‘a correspondence of one's behaviour to another's behaviour’. This differs from its ordinary use, which refers to the probabilistic happening of an unforeseen event; this paper follows the psychological definition. Contingency has been found to play an important role in inter-human communication. For example, when people talk together, one may adjust the tempo, intonation and wording of his/her utterances to the partner's in order to make the conversation go smoothly, a method known as pacing. Correspondence of non-verbal behaviour, such as similarity in facial expression, eye gaze and body movements, promotes friendly relationships among people and is known as the chameleon effect (Lakin et al. Citation2003). The importance of contingency is also reported in developmental psychology. Rakison and Poulin-Dubois Citation(2001) reported that an infant observes the contingency of an object to judge whether the object is animate or inanimate. Arita et al. Citation(2005) revealed, through a comparison of an interactive robot and an active robot, that infants accepted the interactive robot as a possible conversation partner. In robotics, Yamaoka et al. Citation(2005) implemented contingency and other characteristics of animate objects in a humanoid robot, demonstrating that the robot can be perceived as lifelike. These earlier findings demonstrate that the lifelikeness of an object is associated with highly contingent behaviour. However, it is not clear how much contingency is required for an object to be lifelike. On the other hand, humans do not feel that a computer mouse is lifelike, although its behaviour is perfectly contingent on the operator's movement (Magyar and Gergely Citation1998). Prompted by this evidence, we have started to explore what the ideal contingency is. We expect that the appropriate level of contingency lies between highly contingent and perfectly contingent, which is one hypothesis to be verified in the experiments.

Another of our experimental hypotheses is that ‘the required level of contingency depends on the level of complexity’. The example of the computer mouse reminds us of the importance of the complexity factor. Humans can easily find a correspondence between their behaviour and the reaction of such a simple tool; a simple tool is designed to be almost perfectly contingent on its user. Conversely, we do not expect high contingency from simple creatures such as ants, but we do expect intelligent animals and humans, which are sufficiently complex, to exhibit contingent behaviour. Thus, in this study the relationship between contingency and complexity in lifelike robots is investigated.

The investigation of the appropriate contingency level will also give us an engineering advantage, because contingency depends on the sensing capability of the robot: if very high contingency is required, a very high sensing capability must be developed. A robot's lack of sensing capability results in a lack of contingency in interaction. For example, a robot that does not have a vision sensor cannot easily respond to human body gestures. If less contingency would suffice for a lifelike robot, we would not need to develop as much sensing capability. On the other hand, there are some problems with experimentally investigating contingency: no vision sensor is yet well developed enough to provide a lifelike robot with perfect contingency. Thus, we bypassed the difficulties of developing such a sensing capability by using a motion-capturing system (Kamashima et al. Citation2004, Sakamoto et al. Citation2005), which enabled us to study the robot's behaviour for natural interaction with humans. Since several researchers are studying the sensing aspects of robots, such as Shiomi et al. Citation(2004), we believe that we will be able to develop an appropriate sensing capability once we identify what is required through experiments.

2 Experimental system

2.1 Contingency and complexity in interaction

It is difficult to establish contingency and complexity as the independent variables of the experiment. Regarding contingency, we want to study its appropriate level, which requires us to define it as a continuous numerical variable; however, contingency is not usually measured numerically and has no standard numerical definition. The difficulty with complexity is that there are few examples of complex human–robot interaction. At the same time, it is important to keep the experiment simple enough that we can identify the effects of both contingency and complexity. Therefore, for our experiments we established the following definitions of contingency and complexity.

We defined the contingency of a robot as ‘the ratio of the time during which the robot's behaviour corresponds to the partner's behaviour to the total time of interaction’, and developed an experimental system to control the degree of contingency in interaction. We adopted this definition because of the vagueness of the psychological definition and the difficulty of implementing it in an experimental system. The psychological definition, i.e. ‘a correspondence of one's behaviour to another's behaviour’, is a mixture of various correspondences. The degree of contingency might be defined in several types of units, such as ‘a ratio of the time spent being correspondent’, ‘a ratio of the amounts of correspondent movements’ or ‘a ratio of the number of correspondent actions’, even if we limit its range to non-verbal behaviour. However, in an experimental system it is very hard to process these correspondences, with the exception of the first: the ‘ratio of time’ is easy to implement. Thus, we adopted only the ratio of time as a variable called ‘contingency’. Although this limitation is imposed by current technology, we need to start from something simple and practical; otherwise we cannot tackle the contingency problem. We believe that this variable is certainly associated with the conceptual definition of ‘contingency’, although it does not capture the entire concept.
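The time-ratio definition above can be sketched in a few lines of code (our own illustration, not the authors' implementation; the function name and data layout are assumptions):

```python
def contingency_ratio(sections):
    """Contingency as defined above: the ratio of the time during which
    the robot's behaviour corresponds to the partner's behaviour to the
    total time of interaction.

    sections: list of (duration_s, is_contingent) pairs covering one
    interaction, e.g. the 5 s sections described later in section 2.2.3.1.
    """
    total = sum(duration for duration, _ in sections)
    contingent = sum(duration for duration, matching in sections if matching)
    return contingent / total if total else 0.0
```

For a 50 s trial split into 5 s sections, six of which are contingent, this yields 30/50 = 0.6.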

The definition of complexity in our paper is based on a system that we have already developed. It is a challenge to develop a robot system that can engage in complex interaction. Using a motion-capturing system, we have developed a robot system for complex bodily interaction with people (Yamaoka et al. Citation2005), and we used this system to create an example of a complex interaction situation. Complex interaction is accomplished by a robot that performs various correspondent behaviours (eye contact, reacting to touch, approaching a person, and so on) according to a human's behaviour. In contrast, we prepared a simple interaction situation by reducing the complexity of the interaction based on the same system: in simple interaction, the robot performs only one behaviour, mimicking the body motion of its partner. Thus, the complexity variable is not continuous, which does not allow us to compare the experimental results from these two types of situation directly. Instead, we shall try to interpret the effect of complexity by referring to both of the two independent experiments: an example of a complex interaction situation and an example of a simple interaction situation.

2.2 Experimental system

Figure 1 shows an overview of the experimental system. It consists of a humanoid robot, a motion-capturing system and a robot controller. Markers are attached to the head, arms and body of both the robot and the human, and the motion-capturing system obtains their body motions, outputting the position data of the markers to the robot controller. The data from the robot's sensors (touch sensors, motor potentiometers, and so on) are also sent to the robot controller. Based on the motion data from the motion-capturing system and the sensor data from the robot, the robot controller recognizes the human's behaviour and commands the robot to behave accordingly. The details of each component (humanoid robot, motion-capturing system and robot controller) are described in the following sections.

Figure 1. System configuration.


2.2.1 Humanoid robot ‘Robovie’.

We used a humanoid robot named ‘Robovie’, which is characterized by its human-like body expressions (figure 2, right) (Kanda et al. Citation2004). Robovie's human-like body consists of two eyes, a head and two arms, which generate the complex body movements required for interaction. Robovie has two 4-degrees-of-freedom arms, a 3-degrees-of-freedom head and two 2-degrees-of-freedom eyes; thus, its body has sufficient expressive ability to make human-like gestures. In addition, it has two wheels for locomotion. A total of eight touch sensors are attached to the robot's body (head, stomach, right and left upper arms, lower arms and shoulders). These touch sensors, indicated by the dotted lines in figure 2, are used to check whether the human touches the robot's body.

Figure 2. Human and robot.


2.2.2 Motion-capturing system.

We used a motion-capturing system to acquire three-dimensional (3D) numerical data on human and robot body movements. It consisted of 12 sets of infrared cameras with an infrared irradiation capability and markers that reflect infrared rays. The motion-capturing system calculates the 3D position of each marker based on its 2D positions in the camera images. In our experimental environment, the system's time resolution was 60 Hz and its spatial resolution was about 1 mm. The positions of the markers attached to the human and the robot are shown by the solid circles in figure 2. The robot can acquire highly accurate information about the movements of the human and itself by using this motion-capturing system in place of complex visual sensors on the robot.

2.2.3 Robot controller.

Figure 3 shows an outline of the robot controller. The robot controller consists of the contingency control module and the behaviour control module. The contingency control module controls contingency in interaction by selecting the interaction mode. Based on the interaction mode, the behaviour control module makes the robot perform a certain behaviour. The details of the contingency and behaviour control modules are described hereafter.

Figure 3. Robot controller.

2.2.3.1  Contingency control module.

We provided the robot with two interaction modes (contingent and non-contingent). The contingency control module selects the interaction mode for the robot and controls the degree of contingency by adjusting the time ratio of the contingent mode. In the contingent mode, the robot's behaviour corresponds to the subject's behaviour. In the non-contingent mode, the robot behaves without regard to its partner's behaviour, so its behaviour is not contingent on the subject's.

We defined 5 s as one section, during which the robot is in either the contingent or the non-contingent mode. The contingency is defined by the percentage of contingent sections in a five-section block, which comprises 25 s. Contingent and non-contingent sections were randomly allocated within each 25 s block based on the degree of contingency (an example is illustrated in figure 4). With this method, we can set a total of six patterns with contingency levels of 0, 20, 40, 60, 80 and 100%.
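The random allocation of sections described above can be sketched as follows (a minimal illustration under our own naming; the paper does not give the allocation code):

```python
import random

SECTION_S = 5          # one section lasts 5 s
BLOCK_SECTIONS = 5     # a block is 5 sections, i.e. 25 s

def allocate_block(contingency_pct, rng=random):
    """Randomly place contingent sections within one 25 s block.

    contingency_pct should be one of 0, 20, 40, 60, 80, 100, so that it
    maps to a whole number of contingent sections per block.
    """
    n_contingent = round(contingency_pct / 100 * BLOCK_SECTIONS)
    modes = (["contingent"] * n_contingent
             + ["non-contingent"] * (BLOCK_SECTIONS - n_contingent))
    rng.shuffle(modes)  # random order within the block
    return modes
```

For example, `allocate_block(60)` returns five section modes, exactly three of which are contingent, in a random order.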

Figure 4. Example of contingency control.

2.2.3.2  Behaviour control module.

The behaviour control module selects the type of control, i.e. contingent or non-contingent control, based on the interaction mode: in the contingent mode it selects contingent control, and in the non-contingent mode it selects non-contingent control. Details of the two methods of behaviour control are described below.

Non-contingent behaviour control.

The robot controller recalls behaviours from past interaction records, recorded when the robot had interacted with another partner in the contingent mode. Thus, the robot does not behave contingently on the current partner's behaviour.

Contingent behaviour control.

The robot controller obtains information on the subject's behaviour from the motion-capturing system and the robot's own touch sensors. Then, based on the data from these sensors, the robot controller executes contingent behaviour according to the human's behaviour. We can change the method of contingent behaviour control according to the purpose of the experiment. In experiments on simple interaction, the robot mimics a partner's behaviour, while in complex interaction experiments, the robot performs various behaviours depending on the partner's behaviour.

Contingent behaviour control for simple interaction. As contingent behaviour, the robot mimics the partner's behaviour, an example of which can be seen in figure 5. The robot controller calculates the destination angle of each joint of the robot's head and arms based on numerically obtained data on human body movements. The details of the mimicking method are described in Appendix A.
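As a rough idea of what such a mapping looks like, the sketch below derives a single destination joint angle from two marker positions. It is a hypothetical simplification of our own, not the actual method of Appendix A:

```python
import math

def shoulder_pitch(shoulder_xyz, wrist_xyz):
    """Hypothetical mimicry mapping: one destination angle (shoulder pitch)
    from two motion-capture marker positions, (x, y, z) in metres.

    Angle of the shoulder-to-wrist vector relative to straight down, in a
    sagittal plane with x pointing forward and z pointing up.
    """
    dx = wrist_xyz[0] - shoulder_xyz[0]
    dz = wrist_xyz[2] - shoulder_xyz[2]
    return math.atan2(dx, -dz)  # 0 = arm hanging down, pi/2 = arm forward
```

An arm hanging straight down maps to 0 rad, and an arm raised straight forward to π/2; the real controller would compute one such angle per controllable joint at each motion-capture frame.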

Figure 5. Simple interaction—a scene of mimicking.

Contingent behaviour control for complex interaction.

Figure 6(a) shows the internal model of the robot, and figure 6(b) shows the behaviours of the robot. The robot controller controls the robot by selecting its behaviour based on the robot's internal state, which is one of three states (recipient, idling and agent) chosen according to the partner's behaviour. The robot performs various behaviours according to its internal state. Details of the behaviours and the internal states are described next.

Figure 6. (a) State transition—complex interaction; (b) behaviours—complex interaction.

Behaviours.
  • Idling motion (figure 6(b), upper left). The robot performs an idling motion, moving its arms and neck at certain intervals.

  • Maintain eye contact (figure 6(b), upper middle). The robot gazes at its partner's face by moving its eye and neck motors.

  • Maintain distance and direction (figure 6(b), upper middle). The robot maintains a certain distance from, and direction towards, the partner. The distance, about 60 cm, is selected so that the robot's arms cannot touch the partner's body. The direction is towards the partner's face.

  • Touching the partner's hand (figure 6(b), upper right). The robot attempts to touch the partner's hand by controlling its arms and wheels.

  • Reaction to the partner's touch (figure 6(b), lower part). When the distance between the partner's hand and any touch sensor is less than 30 cm, or when a touch sensor reacts, the robot recognizes that the partner is trying to touch it. Before the partner can touch the robot's body, the robot looks at the partner's hand by moving its eyes and neck, and dodges the partner's hand by using its arms and wheels. As shown in the lower part of figure 6(b), the dodging behaviour differs for each touched part (eight patterns corresponding to the eight touch sensors).

Internal state.
  • Recipient state. The robot reacts to the touch of a human. The robot performs ‘Reaction to the partner's touch’ when the partner tries to touch the robot.

  • Idling state. The robot waits for the partner's touch. The robot performs ‘Idling motion’, ‘Maintain eye contact’ and ‘Maintain distance and direction’ when the partner does not try to touch the robot.

  • Agent state. The robot tries to touch the partner's hand. If the idling state continues for more than 5 s, the robot performs ‘Touching the partner's hand’ for 5 s. If the partner tries to touch the robot's body, or an arm other than the one reaching for the partner's hand, the internal state changes from agent to recipient.
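The three internal states and the transitions described above can be summarized in a small state-update function (a sketch under our own naming and simplifications, not the authors' controller code):

```python
IDLE_TO_AGENT_S = 5.0   # idling longer than this triggers the agent state
AGENT_DURATION_S = 5.0  # the agent state lasts this long

def next_state(state, partner_touching, idle_elapsed_s, agent_elapsed_s=0.0):
    """Return the robot's next internal state.

    recipient: reacting to the partner's touch attempt
    idling:    waiting (idling motion, eye contact, keep distance)
    agent:     trying to touch the partner's hand
    """
    if partner_touching:        # a touch attempt always wins
        return "recipient"
    if state == "agent":        # agent behaviour runs for a fixed 5 s
        return "agent" if agent_elapsed_s < AGENT_DURATION_S else "idling"
    # from idling (or after a touch reaction), a long idle triggers agent
    return "agent" if idle_elapsed_s > IDLE_TO_AGENT_S else "idling"
```

For instance, an idling robot whose partner reaches for it switches to the recipient state, while one left alone for more than 5 s switches to the agent state and reaches for the partner's hand.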

3 Experiment

Two independent experiments were conducted with different levels of interaction complexity: simple interaction and complex interaction. In each experiment, we controlled the contingency and investigated the relationship between contingency and subjective impression. Section 3.1 describes the experiment in simple interaction, and section 3.2 the experiment in complex interaction.

3.1 The importance of contingency in simple interaction

In this experiment, the robot mimicked the partner's behaviour based on the level of contingency. The participants were asked to move their arms and head. After participants interacted with the robot with different degrees of contingency, they answered questions about their impressions of, and preferences for, the robot.

3.1.1 Experimental method.

Subjects.

Twenty university students participated as subjects in this experiment (seven men, 13 women).

Instructions.

We instructed the subjects to move their arms and head, and to observe the robot's reactions to their behaviour. In addition, we asked the subjects to follow these instructions.

1. Sit down in a chair.

2. Face the robot.

3. Keep moving your arms and head.

4. Do not move (tilt, twist) your upper body.

Experiment conditions.

We set six conditions (C0, C20, C40, C60, C80 and C100) with contingency levels of 0, 20, 40, 60, 80 and 100%. When the robot was in the non-contingent state, it recalled behaviours from past interaction records that were recorded when it had interacted with an experimenter in the contingent state. The robot's wheels did not move in this experiment. The experimental time for each trial was 50 s. The reaction delay was set at 1 s.

Evaluation method.

Since the differences in robot behaviour between conditions were subtle, we used paired comparison as the evaluation method. The evaluation was performed according to the following procedure.

The subjects interacted with the robot in two trials with different contingency levels. After the two trials, we administered a questionnaire to obtain subjective evaluations. Figure 7 shows the questions.

Figure 7. Questions in the simple interaction experiment.


This sequence was repeated 6C2 = 15 times (once for each pair of the six conditions), and the order of trials was randomly allocated among the subjects to remove the order effect.

3.1.2 Experimental results.

Table 1 shows the results of the paired comparison. The number in each column indicates how many subjects selected that condition over the condition in the corresponding row. For example, when the C0 condition was compared with the C20 condition, 12 subjects chose the C0 condition (and eight subjects the C20 condition) in the evaluation of ‘lifelike’.

Table 1. Experimental results—paired comparisons.

We employed the Bradley method to analyse the relationship between the degree of contingency and the degree of each impression (Bradley and Terry Citation1952). Details of the Bradley method are given in Appendix B.
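For readers unfamiliar with it, the Bradley–Terry model assigns each condition a strength p_i such that condition i is preferred over condition j with probability p_i/(p_i + p_j). A minimal fixed-point implementation is sketched below (illustrative only; the paper's exact computation is given in its Appendix B):

```python
def bradley_terry(wins, iters=200):
    """Estimate Bradley-Terry strengths from a paired-comparison matrix.

    wins[i][j]: number of subjects who preferred condition i over j.
    Uses the classic fixed-point iteration; the returned strengths are
    normalised to sum to 1.
    """
    n = len(wins)
    p = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of condition i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new.append(w_i / denom if denom else p[i])
        s = sum(new)
        p = [v / s for v in new]  # renormalise each sweep
    return p
```

With two conditions where the first wins 15 of 20 comparisons, the estimated strengths are 0.75 and 0.25, matching the observed preference rate.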

The Bradley analysis gave values p_i (i = C0, C20, C40, C60, C80, C100) that express the degree of evaluation for each condition; these results are also shown in table 1. Based on the values of p_i, we obtained the three relationship models shown in figure 8.

Figure 8. Relationship between contingency and impression in simple interactions.


We examined the fit of the model by a goodness-of-fit test, the results of which are as follows:

  • ‘Lifelikeness’:

  • ‘Autonomy’:

  • ‘Preference’: .

These results indicate the validity of p_i and the fit of the model, because the model was not rejected.

In addition, we examined the differences between conditions. Based on our experimental results:

  • ‘Lifelikeness’:

  • ‘Autonomy’:

  • ‘Preference’: .

The results indicate that there are significant differences between the conditions in the cases of ‘autonomy’ and ‘preference’.

In summary, the relationship between the impression of the subjects and contingency was as follows. Subjects had a higher preference for robots with a high level of contingency. At the same time, robots with a low degree of contingency were perceived as being more autonomous. On the other hand, there was no significant relationship between contingency and lifelikeness.

3.2 The importance of contingency in complex interaction

In this experimental situation the robot performs various correspondent behaviours (eye contact, reacting to touch, approaching a person, and so on) according to the partner's behaviour, based on the level of contingency. The participants were asked to freely touch the robot or move around it. After interacting with the robot at different degrees of contingency, participants gave their impressions of the ‘lifelikeness’ and ‘autonomy’ of the robot, and their preferences.

3.2.1 Experimental method.

Subjects.

Thirty-seven university students participated as subjects in this experiment (22 men, 15 women). None of them had participated in the simple interaction experiment.

Instructions.

First, we taught the subjects how to touch the robot, which has several touch sensors. We then instructed them to interact with the robot, such as to try to approach or step away from the robot, or to touch its body, or to observe the robot's motions and reactions. Finally, we instructed the subjects to stop interacting with the robot when they got bored.

Experiment conditions.

In the same way as in the experiment in simple interaction, we set six conditions (C0, C20, C40, C60, C80 and C100) with contingency levels of 0, 20, 40, 60, 80 and 100%. When the robot was in the non-contingent state, it recalled behaviours from past interaction records that were recorded when it had interacted with an experimenter in the contingent state. The experimental time for each trial was 180 s.

Evaluation method.

We administered a questionnaire to obtain subjective evaluations for every condition. Figure 9 shows the questions asked, which deal with the degrees of lifelikeness, autonomy and preference. Subjects answered each question on a one-to-seven scale, where one is the lowest evaluation and seven is the highest. The order of trials was randomly allocated among the subjects to remove the order effect.

Figure 9. Questions in the complex interaction experiment.


Moreover, we compared each subject's behaviour across conditions. We analysed the ‘interaction time’ and the ‘amount of hand movements’. The ‘interaction time’ ended when the participant stopped interacting with the robot (when the interaction time exceeded 3 min, the interaction was stopped by the experimenter). The ‘amount of hand movements’ equals ‘the total amount of movement of the subject's hand during the interaction’ divided by the ‘interaction time’. The position of the hand is calculated relative to the centre of the body.
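The ‘amount of hand movements’ measure can be sketched as follows (our own illustration of the definition above; the data layout is assumed):

```python
def hand_movement_rate(hand_positions, body_positions, interaction_time_s):
    """'Amount of hand movements' as defined above: the total path length
    of the hand, measured relative to the centre of the body, divided by
    the interaction time.

    hand_positions / body_positions: equal-length lists of (x, y, z)
    motion-capture samples in metres.
    """
    # hand position relative to the body centre at each sample
    rel = [tuple(h - b for h, b in zip(hp, bp))
           for hp, bp in zip(hand_positions, body_positions)]
    # sum of Euclidean distances between consecutive relative positions
    total = sum(
        sum((a - b) ** 2 for a, b in zip(rel[k], rel[k - 1])) ** 0.5
        for k in range(1, len(rel))
    )
    return total / interaction_time_s
```

Measuring the hand relative to the body centre means that simply walking around the robot contributes nothing; only genuine hand motion counts.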

3.2.2 Experimental results.

The effect of the robot on the subjects’ impression.

Figure 10 shows the results of the impression evaluation. We conducted a regression analysis for autonomy, lifelikeness and preference. As shown in figure 10, all of these are in direct proportion to contingency, and the regressions are statistically significant: F(1, 221)=17.931, p<0.001; F(1, 221)=7.355, p=0.007; F(1, 221)=19.942, p<0.001.
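As a reminder of the statistic being reported, the F value for a simple linear regression with a single predictor (1 and n−2 degrees of freedom) can be computed as below. This is a generic sketch of the standard formula, not the authors' analysis script:

```python
def regression_f(x, y):
    """F statistic for a simple linear regression of y on x.

    F = (regression sum of squares / 1) / (residual sum of squares / (n - 2)).
    Assumes the fit is not perfect (non-zero residuals).
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                 # slope of the fitted line
    ss_reg = b * sxy              # regression sum of squares
    ss_tot = sum((yi - my) ** 2 for yi in y)
    ss_res = ss_tot - ss_reg      # residual sum of squares
    return (ss_reg / 1) / (ss_res / (n - 2))
```

A larger F indicates that more of the variance in the impression scores is explained by the contingency level relative to the residual variance.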

Figure 10. Relationship between contingency and impression in the complex interaction.


Figure 11. Relationship between contingency and subject's behaviour in complex interaction.


Table 2. Experimental results.

We are interested in the peak contingency level, i.e. the level with the most positive effect, which seems to be around the 80% and 100% contingency levels. We conducted a repeated-measures analysis of variance (ANOVA) in order to analyse the detailed relationships among the contingency levels. It indicated a significant difference for each question. The results of the ANOVAs are as follows:

  • ‘Lifelikeness’:

  • ‘Autonomy’:

  • ‘Preference’: .

Multiple comparisons with the Bonferroni and least significant difference (LSD) methods gave the following results:

  • ‘Lifelikeness’:

  • ‘Autonomy’:

  • ‘Preference’:

(Here, * represents a significant difference at the p<0.05 level given by the Bonferroni method, and + one given by the LSD method. Use of the LSD method is controversial because it only tests each paired comparison individually and cannot guarantee the family-wise type-I error rate across the experiment. However, when an ANOVA has already found a significant difference, the LSD method can be used with it to analyse the differences among the conditions in more detail. The Bonferroni method, by contrast, can be used even without a significant difference among the conditions in the ANOVA.)

The relationship between the impression of the subjects and contingency can be summarized as follows. High contingency was associated with a good subjective impression of ‘autonomy’, ‘lifelikeness’ and ‘preference’. However, none of the three evaluation criteria varied significantly with contingency as long as the degree of contingency was relatively high (at 80 or 100%).

The effect of the robot on subjects’ behaviours.

Figure 11 shows the results of an analysis of the ‘interaction time’ and ‘amount of hand movements’. We conducted a regression analysis for these, but the results were not significant (F(1, 221)=0.689, p=0.407; F(1, 221)=1.515, p=0.220). We also conducted a repeated-measures ANOVA in order to analyse the detailed relationships among the contingency levels. It indicated a significant difference for each measure. The results of the ANOVAs are as follows:

  • ‘Interaction time’: ,

  • ‘Amount of hand movements’: .

Table 3. Experimental results.

The multiple comparisons with the Bonferroni and LSD methods gave the following results:

  • ‘Interaction time’:

  • ‘Amount of hand movements’: .

(Here, * represents the significant difference at the p<0.05 level given by the Bonferroni method, and + indicates that by the LSD method.)

In summary, there is no straightforward relationship between contingency and the behaviour of the subjects, although contingency did affect their behaviour, as the ANOVA indicated.

4 Discussion

4.1 Summary of findings

With the two types of experiments, i.e. in the complex interaction situation and in the simple interaction situation, we investigated how contingency affects the subjects’ impression of a lifelike robot. The results are summarized in figure 12, the right-hand side of which illustrates the changes in the impression of autonomy and lifelikeness with respect to changes in contingency. In simple interaction (figure 12, Exp-simple line), a low degree of contingency was associated with a highly autonomous impression, and the impression of lifelikeness was not affected by the degree of contingency. In complex interaction (figure 12, Exp-complex line), a high degree of contingency was associated with a highly autonomous impression, and also with a high impression of lifelikeness.

Figure 12. Relationship between contingency and complexity.

Figure 12. Relationship between contingency and complexity.

The results of the two experiments cannot be compared directly. However, if we try to interpret them together, the importance of contingency seems to depend on the degree of complexity of the interaction. We believe that complex interactions make it difficult to predict the robot's behaviour and to understand the robot's mechanism. In free-form descriptions, most subjects answered that in the simple interaction situation the robot only mimicked them or moved randomly. Thus, the subjects saw more autonomy in the non-contingent robot in the simple interaction situation, but more autonomy in the contingent robot in the complex interaction situation; we believe this is because they felt more intentionality and meaning in the latter robot's behaviour. These interpretations are hypotheses to be verified in future studies.

A further interesting finding was that there is no significant difference among relatively high contingency conditions: C80 and C100 were evaluated similarly for both simple and complex interactions.

4.2 Implications for development of an animate robot

There have been many studies in the field of psychology on the animate–inanimate distinction (Rakison and Poulin-Dubois Citation2001), in which autonomy is also considered to be a primary factor of animate beings. Our experimental results revealed that people sometimes attribute lifelikeness independently of autonomy. We believe this is because subjects sometimes evaluate the robot's motion rather than its essential nature; that is, they see a robot whose motion is lifelike but which does not seem to be alive. Focusing on a specific type of lifelike robot, we discuss here what should be considered in order to develop an animate robot, i.e. a robot interpreted as a kind of animate being.

The left-hand side of figure 12 represents our design model for developing an animate robot. Many simple animate beings, such as amoebae and ants, interact with people in a very simple way; in simple interaction (Exp-simple line), such beings' contingency toward human behaviour is actually low. Hence, if a robot's contingency is low, it will be interpreted as behaving like a simple animate being. On the contrary, if its behaviour is simple but highly contingent, it will seem no different from an inanimate machine. Thus, when we design a simple animate robot, the importance of contingency is rather low; too much contingency may even prevent the robot from appearing animate.

On the other hand, the evaluations of lifelikeness and autonomy correlated with the degree of contingency in complex interaction (Exp-complex line). For a robot to appear more autonomous and lifelike, we need to make its behaviour more contingent on people's behaviour. As mentioned in the summary, however, this does not mean that perfect contingency is required. Rather, the robot can sometimes be non-contingent and initiate interaction independently of people's behaviour, which can be interpreted as mixed-initiative behaviour. This aspect should be investigated more thoroughly in our future research. We believe that highly intelligent interactive robots, such as communication robots, should follow this guideline.

4.3 Implications for android science

We believe that android robots in particular need to appear lifelike. Without such a quality, they may give an uncanny impression, as represented by the corpse in Mori's Uncanny Valley theory (Mori 1970). Android robots need sufficient lifelikeness at the motion level, such as eyeball movement, lip movement and facial expression; otherwise they will simply make an uncanny impression.

Moreover, as the experimental results indicate, we need to consider complexity and contingency carefully when we design a new android robot. An android robot that performs very complex tasks will apparently need high contingency, and thus a high sensing ability to recognize the situation around it. However, if its task is simple, the importance of contingency and sensing ability is reduced, and the android could give a lifelike impression without much contingency. An android performing a simple task may even make a stronger impression of autonomy when its behaviour is not contingent.

The experimental result in the complex interaction situation seems to match previous findings in inter-human communication. When people talk together, the correspondence of behaviours such as tempo, pitch, facial expression, eye gaze and body movements promotes friendly relationships among them (Lakin et al. 2003). A similar phenomenon has already been observed in low-level human–robot interaction, where an interactive robot engaged in free play with children (Kanda et al. 2003). We believe that contingency in behaviour becomes more important as a robot engages in more complex interaction, such as language-level communication.

4.4 Implications for investigating robot design

This paper also shows that our approach of employing a motion-capturing system is a useful way to investigate the ideal design for a lifelike robot. In other words, our experimental system provided findings that we could not obtain with existing robots: existing robot systems cannot produce such highly contingent behaviour owing to real-world problems such as sensor noise, while a purely simulator-based system could produce highly contingent behaviour but would not let us explore human factors. It is interesting that the experiment revealed that 100% contingency is not necessary. This was only observable with such an experimental system, because a sufficient number of interacting people already evaluated the robot highly when its contingency reached around 80%.
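The contingency manipulation itself is simple to express. Below is a minimal sketch, not the authors' implementation, of how a controller could mix contingent, motion-capture-driven responses with unrelated motions at a given contingency ratio such as 80%; the function name, motion representation and idle-motion list are all illustrative assumptions:

```python
import random

def choose_action(subject_motion, idle_motions, contingency, rng=random):
    """Return the robot's next motion. With probability `contingency`,
    respond to the subject (here, by mimicking the captured motion);
    otherwise play an unrelated idle motion, as in the non-contingent
    experimental conditions."""
    if rng.random() < contingency:
        return ("mimic", subject_motion)       # contingent response
    return ("idle", rng.choice(idle_motions))  # non-contingent motion
```

For example, the C80 condition corresponds to `contingency=0.8`, so roughly four out of five of the subject's movements trigger a contingent response.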

This study has provided a solid base for designing lifelike communication robots. One aspect of our future research will involve investigating how body motion is helpful for a robot that can also interact verbally.

4.5 Limitations

To date, we have only tested this system with one humanoid robot, Robovie; thus, there is no guarantee that the findings will apply to other robots. We believe, however, that other robots would produce similar results, because the experimental method is largely independent of Robovie's appearance, apart from the fact that Robovie has an anthropomorphic head and arms. Robovie has a relatively simpler appearance than other humanoid robots, such as Asimo (Hirai et al. 1998). In non-verbal interaction, Kanda et al. (2005) demonstrated that people's responses were similar for different humanoid robots and for a human. Thus, we believe that people will behave in a similar way even with a humanoid robot of a different appearance, resulting in similar trends in impressions. For an android robot, we should also take care to design appropriate behaviour for the experiments, particularly its eye, lip, face and finger motions. With such care, we believe it will be possible to find trends similar to those for humanoid robots. Confirming this generality across humanoid and android robots will be part of our future work, helping us to investigate the combined effect of appearance, contingency and complexity.

5. Conclusion

The purpose of this paper was to address subjective lifelikeness in lifelike robots in relation to their contingency and complexity. We combined a motion-capturing system with a humanoid robot to develop an experimental system that allowed us to change its contingency and vary its behaviour easily. We conducted two sets of experiments with various degrees of contingency: the first using a robot with simple behaviour, and the second using one with complex behaviour. The results demonstrated that contingency correlates negatively with autonomy and has no significant relationship with lifelikeness in simple interaction, whereas it correlates positively with both autonomy and lifelikeness in complex interaction. Another interesting finding was that people's evaluations of 80% contingency and 100% contingency were not significantly different. We believe that these findings will help in the further development of lifelike humanoid and android robots, and demonstrate the value of our research approach of employing such an experimental system.

Acknowledgements

This research was supported by the Ministry of Internal Affairs and Communications of Japan.

References

  • Arita, A., Hiraki, K., Kanda, T. and Ishiguro, H., 2005. Can we talk to robots? Ten-month-old infants expected interactive humanoid robots to be talked to by persons. Cognition, 95: B49–B57.
  • Bradley, R. A. and Terry, M. E., 1952. Rank analysis of incomplete block designs. Biometrika, 39: 324–345.
  • Goetz, J., Kiesler, S. and Powers, A. Matching robot appearance and behavior to tasks to improve human–robot cooperation. Proceedings of the 12th IEEE Workshop on Robot and Human Interactive Communication (RO-MAN 2003), 31 October–2 November 2003.
  • Hirai, K., Hirose, M., Haikawa, Y. and Takenaka, T. The development of the Honda humanoid robot. IEEE International Conference on Robotics and Automation (ICRA'98), pp. 1321–1326.
  • Hirai, M. and Hiraki, K., 2005. An event-related potential study of biological motion perception in human infants. Cognitive Brain Research, 22: 301–304.
  • Ishiguro, H., 2005. Android science—toward a new cross-interdisciplinary framework. CogSci-2005 Workshop, pp. 1–6.
  • Kamashima, M., Kanda, T., Imai, M., Ono, T., Sakamoto, D., Ishiguro, H. and Anzai, Y. Embodied cooperative behaviors by an autonomous humanoid robot. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), pp. 2506–2513.
  • Kanda, T., Ishiguro, H., Imai, M. and Ono, T., 2004. Development and evaluation of interactive humanoid robots. Proceedings of the IEEE, 92: 1839–1850.
  • Kanda, T., Ishiguro, H., Imai, M. and Ono, T. Body movement analysis of human–robot interaction. International Joint Conference on Artificial Intelligence (IJCAI 2003), pp. 177–182.
  • Kanda, T., Miyashita, T., Osada, T., Haikawa, Y. and Ishiguro, H. Analysis of humanoid appearances in human–robot interaction. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), pp. 62–69.
  • Kidd, C. and Breazeal, C. Effect of a robot on user perceptions. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004).
  • Lakin, J. L., Jefferis, V. E., Cheng, C. M. and Chartrand, T. L., 2003. The chameleon effect as social glue: evidence for the evolutionary significance of nonconscious mimicry. Journal of Nonverbal Behavior, 27(3): 145–162.
  • Magyar, J. and Gergely, G. The obscure object of desire 'Nearly, but clearly not like me'. Perception of self-generated contingencies in normal infants and children with autism. Poster session presented at the International Conference on Infant Studies.
  • Mori, M., 1970. Bukimi no tani [The Uncanny Valley]. Energy, 7: 33–35 (in Japanese).
  • Nakadai, K., Hidai, K., Mizoguchi, H., Okuno, H. G. and Kitano, H., 2001. Real-time auditory and visual multiple-object tracking for robots. Proceedings of the International Joint Conference on Artificial Intelligence, pp. 1425–1432.
  • Rakison, D. H. and Poulin-Dubois, D., 2001. Developmental origin of the animate–inanimate distinction. Psychological Bulletin, 127: 209–228.
  • Sakamoto, D., Kanda, T., Ono, T., Kamashima, M., Imai, M. and Ishiguro, H., 2005. Cooperative embodied communication emerged by interactive humanoid robots. International Journal of Human-Computer Studies, 62: 247–265.
  • Shiomi, M., Kanda, T., Miralles, N., Miyashita, T., Fasel, I., Movellan, J. and Ishiguro, H. Face-to-face interactive humanoid robot. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), pp. 1340–1346.
  • Trafton, J. G., Cassimatis, N. L., Bugajska, M., Brock, D., Mintz, F. and Schultz, A., 2005. Enabling effective human–robot interaction using perspective-taking in robots. IEEE Transactions on Systems, Man and Cybernetics, 25: 460–470.
  • Walters, M. L., Dautenhahn, K., Koay, K. L., Kaouri, C., de Boekhorst, R., Nehaniv, C. L., Werry, I. and Lee, D. Close encounters: spatial distances between people and a robot of mechanistic appearance. Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids 2005), pp. 450–455.
  • Yamaoka, F., Kanda, T., Ishiguro, H. and Hagita, N. Lifelike behavior of communication robots based on developmental psychology findings. IEEE International Conference on Humanoid Robots (Humanoids 2005), pp. 406–411.

Appendix A: The method of mimic control

The positions of the markers in the explanation below are described in relative coordinates. The positions of all markers attached to the robot and the subject are transformed from absolute coordinates into relative coordinates centred on the midpoint of each body, defined as the midpoint between the markers attached to the left and right shoulders.

A.1. Mimic motion of the head

We define the subject's marker attached to the left front of the head as LFHD_S, the one on the right front of the head as RFHD_S and the one at the centre of the back of the head as CBHD_S. Defining the midpoint between LFHD_S and RFHD_S as CFHD_S, the subject's head vector is

v_head^S = CFHD_S − CBHD_S.

In the same way, we can calculate the robot's head vector v_head^R. The robot controls its head so that the angle between v_head^S and v_head^R becomes zero.

A.2. Mimic motion of the arm

We define the marker attached to the subject's left elbow as LELB_S and the one on the left finger as LFIN_S; the marker on the left shoulder is denoted LSHO_S. The normalized left upper-arm vector from the left shoulder to the left elbow is

v_Lupper^S = (LELB_S − LSHO_S) / |LELB_S − LSHO_S|.

Furthermore, the normalized left forearm vector from the left elbow to the left finger is

v_Lfore^S = (LFIN_S − LELB_S) / |LFIN_S − LELB_S|.

In the same way, we can calculate v_Rupper^S and v_Rfore^S for the right arm, and obtain the robot's arm vectors v_Lupper^R, v_Lfore^R, v_Rupper^R and v_Rfore^R. When the robot mimics the motion of the subject's right arm, it controls its left arm so that the angle between v_Rupper^S and v_Lupper^R and the angle between v_Rfore^S and v_Lfore^R both become zero. The robot mimics the motion of the subject's left arm in the same way.
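The vector computations above can be sketched in Python as follows. The marker coordinates are illustrative values, not recorded motion-capture data, and the helper-function names are our own:

```python
import math

def normalize(v):
    """Unit vector in the direction of a 3D vector (tuple of floats)."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def vec(a, b):
    """Normalized vector pointing from marker position a to marker position b."""
    return normalize(tuple(bi - ai for ai, bi in zip(a, b)))

def angle_between(u, v):
    """Angle in radians between two unit vectors; the robot drives its
    joints until this angle becomes zero."""
    dot = max(-1.0, min(1.0, sum(ui * vi for ui, vi in zip(u, v))))
    return math.acos(dot)

# Illustrative subject head markers in relative coordinates (metres)
LFHD = (0.05, 0.10, 0.45)
RFHD = (-0.05, 0.10, 0.45)
CBHD = (0.00, -0.08, 0.45)
CFHD = tuple((l + r) / 2 for l, r in zip(LFHD, RFHD))  # midpoint of LFHD, RFHD
head_subject = vec(CBHD, CFHD)  # subject's head vector

# Left arm: shoulder -> elbow and elbow -> finger vectors
LSHO, LELB, LFIN = (0.20, 0.0, 0.40), (0.25, 0.0, 0.15), (0.30, 0.20, 0.05)
upper_arm = vec(LSHO, LELB)
forearm = vec(LELB, LFIN)
```

The controller would compute `angle_between` for the corresponding subject and robot vectors at each frame and servo the joints toward zero.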

Appendix B: Bradley method

According to the Bradley method, we hypothesize a model in which each degree of contingency has an evaluation value for the degree of impression; that is, the evaluation for contingency condition i is defined as π_i. For ease of analysis, we normalize these values as follows:

π_1 + π_2 + ⋯ + π_t = 1,  π_i ≥ 0.

Based on this, when the 0% contingency condition (C0) is compared with the C20 condition, the probability that C0 is selected is π_1/(π_1 + π_2), and the probability that C20 is selected is π_2/(π_1 + π_2). In an actual experiment, however, the results contain random noise. Therefore, the goal of the analysis is to obtain an estimate p_i of π_i from the experimental data. The estimate p_i is obtained as the solution of the likelihood equations

f_i = Σ_{j≠i} n · p_i / (p_i + p_j),  i = 1, …, t,

where n is the number of subjects and f_i is the number of times that condition i is selected among the n·(t−1) judgements involving it (t is the number of conditions); each subject compares condition i with the other (t−1) conditions.
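The likelihood equations can be solved with the classic fixed-point iteration for the Bradley–Terry model. The sketch below assumes the judgements have been tallied into a win matrix; the function name and iteration count are illustrative, and this is a generic implementation rather than the authors' analysis code:

```python
def bradley_terry(wins, n, iters=200):
    """Estimate Bradley-Terry worths p_i from pairwise comparison counts.

    wins[i][j] = number of times condition i was preferred over j;
    each pair is judged n times in total (wins[i][j] + wins[j][i] == n).
    Iterates p_i <- f_i / sum_{j != i} n / (p_i + p_j), then renormalizes,
    which converges to the solution of f_i = sum_j n * p_i / (p_i + p_j)
    with sum(p) == 1.
    """
    t = len(wins)
    f = [sum(row) for row in wins]  # total wins of each condition
    p = [1.0 / t] * t
    for _ in range(iters):
        p_new = [
            f[i] / sum(n / (p[i] + p[j]) for j in range(t) if j != i)
            for i in range(t)
        ]
        s = sum(p_new)
        p = [x / s for x in p_new]
    return p
```

For instance, if one condition wins 9 of 10 head-to-head judgements against another, the estimates come out near 0.9 and 0.1.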

B.1. The matching of the model

The fit of the model obtained with the Bradley method is examined using a goodness-of-fit test. The test statistic is

X² = Σ_{i≠j} (X_ij − X′_ij)² / X′_ij,

where X_ij is the number of judgements that i is better than j, and X′_ij is the expected value of X_ij, calculated as X′_ij = n · p_i / (p_i + p_j).

If this value is below the 5% point of the χ² distribution with (t−1)(t−2)/2 degrees of freedom, the validity of p_i and the fit of the model are supported.
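The statistic above can be computed directly from the observed and expected win counts. This sketch assumes the standard Bradley–Terry form of the expected count, X′_ij = n·p_i/(p_i + p_j); the function name is illustrative:

```python
def bt_goodness_of_fit(wins, n, p):
    """Chi-square-style statistic comparing observed win counts X_ij
    against the expected counts X'_ij = n * p_i / (p_i + p_j)."""
    t = len(wins)
    x2 = 0.0
    for i in range(t):
        for j in range(t):
            if i == j:
                continue
            expected = n * p[i] / (p[i] + p[j])
            x2 += (wins[i][j] - expected) ** 2 / expected
    return x2
```

A statistic of zero indicates a perfect fit; larger values are compared against the χ² critical value at the chosen degrees of freedom.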

B.2. Evaluation of the difference between each pattern

The difference between each pair of conditions is evaluated as follows:

If this value is above the 5% point of the χ² distribution with (t−1) degrees of freedom, there is a significant difference between the two patterns.
