
Socially grounded game strategy enhances bonding and perceived smartness of a humanoid robot

Pages 81-98 | Received 23 Aug 2016, Accepted 16 May 2017, Published online: 16 Nov 2017

Figures & data

Figure 1. The human–robot interaction process in games. (a) The human–robot interaction process as turn-taking between a human and a robot. (b) An individual turn of the robot consists of sensing/perception of the state of the game, a strategy module that combines a game strategy and a social strategy, and an action towards the human. Unlike existing studies, in this study we ground social interaction in the strategy module rather than by adding social features to the observable action or by having the robot perceive social cues.

Figure 2. Relation between the reference frames of the camera and the position of the checkerboard.
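
The calibration itself is not reproduced in the paper, but the relation in Figure 2 amounts to a rigid-body transform between the checkerboard frame and the camera frame. Below is a minimal sketch of applying such a transform, assuming the board's rotation R and translation t with respect to the camera are already known; the numeric values are made up for illustration and are not taken from the article.

```python
import numpy as np

def board_to_camera(p_board, R, t):
    """Map a point from the checkerboard frame to the camera frame.

    p_board : (3,) point in the board frame (metres)
    R       : (3, 3) rotation of the board frame w.r.t. the camera frame
    t       : (3,)  position of the board origin in the camera frame
    """
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    p_h = np.append(p_board, 1.0)        # homogeneous coordinates
    return (T @ p_h)[:3]

# Example: board lying flat 30 cm in front of the camera (made-up numbers)
R = np.eye(3)
t = np.array([0.0, 0.0, 0.30])
corner = np.array([0.04, 0.04, 0.0])     # a square corner 4 cm from the board origin
print(board_to_camera(corner, R, t))     # -> [0.04 0.04 0.3]
```

Inverting the same transform maps detected piece centers from camera coordinates back into board coordinates.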

Figure 3. Piece detection and state estimation example. (a) Image captured with the NAO camera. All pieces of the robot and of the opponent are detected and their centers are estimated. (b) Representation of the state of the game in the robot's “mind”. The robot plays with the black pieces, denoted b, and the opponent's pieces are denoted w (for white). In the color prints and in the designed game, red and green are used instead of black and white, since these colors may be more enjoyable for the children.
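
The detection pipeline behind Figure 3 is not listed here; the following is a hedged sketch of one way to obtain piece centers and a grid state, assuming color thresholding with OpenCV 4. The function names, HSV thresholds, and the square size in pixels are placeholders, not the paper's implementation.

```python
import cv2
import numpy as np

BOARD_SIZE = 8  # squares per side

def detect_pieces(image_bgr, lower_hsv, upper_hsv):
    """Return the pixel centres of the pieces of one colour."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centres

def centres_to_state(robot_centres, opponent_centres, square_px):
    """Build an 8x8 grid with 'b' for the robot's pieces and 'w' for the opponent's.

    Assumes the image has already been cropped/warped to a top-down view of the
    board, so pixel (0, 0) coincides with the board corner.
    """
    state = [["." for _ in range(BOARD_SIZE)] for _ in range(BOARD_SIZE)]
    for label, centres in (("b", robot_centres), ("w", opponent_centres)):
        for x, y in centres:
            col, row = int(x // square_px), int(y // square_px)
            if 0 <= row < BOARD_SIZE and 0 <= col < BOARD_SIZE:
                state[row][col] = label
    return state
```

Calling detect_pieces once per color range and feeding both center lists to centres_to_state yields a grid state like the one shown in panel (b).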

Figure 4. Illustrative example of the decision tree on which NAO bases its decisions.
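
The caption does not specify how the tree is searched or evaluated; a depth-limited minimax is one common way to realize such a decision tree, sketched below over an abstract game interface. The methods is_terminal, evaluate, legal_moves, and apply are assumptions for illustration, not the paper's code.

```python
def minimax(state, depth, maximizing, game):
    """Depth-limited search over the game tree; `game` supplies the rules.

    `game` is assumed to expose is_terminal(state), evaluate(state),
    legal_moves(state, maximizing) and apply(state, move); these names
    are placeholders, not the paper's implementation.
    """
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state), None

    best_move = None
    best_value = float("-inf") if maximizing else float("inf")
    for move in game.legal_moves(state, maximizing):
        value, _ = minimax(game.apply(state, move), depth - 1, not maximizing, game)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move
```

Limiting the depth keeps the per-move computation time bounded, which matters for real-time interaction with a child; Figure 7 shows how that time grows with depth.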

Figure 5. Reachable space for the NAO robot on the checkerboard.

Figure 6. Schematic representation of the NAO robot equipped with a laser pointer.

Figure 7. The computation time required to calculate the next move, plotted against the search depth used.
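
For an unpruned game tree, the number of visited nodes, and hence the computation time, grows roughly exponentially with the search depth, which is the trend one would expect in Figure 7. The stand-in below reproduces that shape with an assumed branching factor; it is not the paper's move generator or its measured timings.

```python
import time

BRANCHING = 6  # assumed average number of legal moves per turn

def count_nodes(depth):
    """Stand-in for the move computation: visit the full game tree to `depth`."""
    if depth == 0:
        return 1
    return 1 + sum(count_nodes(depth - 1) for _ in range(BRANCHING))

for depth in range(1, 8):
    start = time.perf_counter()
    count_nodes(depth)
    elapsed = time.perf_counter() - start
    print(f"depth {depth}: {elapsed:.4f} s")
```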

Table 1. The questionnaire for the children (English version).

Figure 8. The gaze duration of the children across all conditions.

Figure 9. The children's answers to the questionnaire.
