Problem. In human-robot interaction (HRI), despite the development of anthropomorphic robots capable of initiating joint attention, it remains unclear how different types of cues (gesture plus gaze vs. gaze only) and their accuracy influence automatic human following and the subjective evaluation of the interaction.

Aim. To compare the effectiveness of different types of cues from an anthropomorphic robot (pointing gestures combined with gaze vs. gaze-only cues) in a task requiring joint attention, and to assess the influence of cue accuracy on participants’ behaviour.

Methods. The study involved 43 students from RANEPA (Russian Presidential Academy of National Economy and Public Administration): 38 females and 5 males aged 19 to 27 years (M = 20.51; SD = 1.82). The effectiveness of the robot’s cues in each condition was assessed by counting the participant movements that followed the robot’s cues and coincided with the cue direction. Participants’ reactions to the cues were recorded after each task with a questionnaire based on Danek’s metacognitive scales.

Results. The robot proved able to imitate the process of joint attention while solving a problem together with the participant. The hypothesis that robot cues combining pointing gestures with gaze and head movement are more effective than gaze-only cues was confirmed, as was the hypothesis that correct cues are more effective than incorrect ones.

Conclusions. The robot’s ability to imitate the joint attention process during problem-solving with the participant was demonstrated: participants paid attention to the robot’s cues and attempted to follow them in both the correct- and incorrect-cue conditions. However, in the correct-cue condition the percentage of response attempts coinciding with the cue direction was significantly higher than in the incorrect-cue condition.