Despite the increasing anthropomorphism of virtual agents, a persistent “cooperation gap” exists whereby humans often cooperate less with artificial agents than with real human partners. The present research addresses this gap by investigating whether mutual self-disclosure training, in which humans and virtual agents reciprocally share personal information, can enhance interpersonal engagement and promote cooperative behavior. We focused on interactions with highly anthropomorphic virtual agents in an immersive virtual reality environment and assessed cooperation using the Chicken game paradigm before and after training. Results showed that mutual self-disclosure significantly increased cooperation rates with virtual partners, accompanied by heightened perceptions of human-likeness, social presence, and interpersonal closeness. Behavioral analyses suggested that these relational changes fostered fairness-oriented motivation, a key driver of cooperative behavior. Furthermore, individual differences in social value orientation moderated these effects, with prosocial participants showing the strongest gains. These findings demonstrate that reciprocal, emotionally grounded interactions can partially humanize virtual agents, enabling them to be perceived as socially meaningful partners capable of eliciting cooperative behavior. The study provides practical implications for designing artificial agents in contexts where human-agent collaboration is critical, including autonomous driving, collaborative robotics, and socially interactive AI systems.

• Mutual self-disclosure training increased cooperation with virtual agents.
• Training enhanced perceived interpersonal engagement with virtual agents.
• Model analyses showed increased advantageous inequity aversion after training.
• Prosocial individuals showed the strongest cooperation gains with trained agents.
• Findings highlight the potential of relational mechanisms to enhance human-AI cooperation.