Can AI deliver ethical advice of expert quality? Will humans trust it? We address these questions using dilemmas published in the New York Times (NYT) column, The Ethicist, comparing humans’ perception of AI-generated advice (GPT-4) to that of the NYT expert. First, we investigated how useful AI advice is perceived to be by diverse participants (N = 187), including laypeople, MBA students, and an expert panel of scholars and clergy. We find that people perceive the quality of AI-generated ethical advice to be on par with that of expert advice, with no significant difference in usefulness ratings between the two sources. When given a direct choice, 57% of participants preferred the AI-generated advice. Building on these findings, we conducted a three-condition experiment (N = 642) investigating how willing humans are to accept this advice. Our results reveal that, before observing the advice, humans display strong algorithm aversion in this context, with 72.6% of participants preferring to be advised ethically by humans. After being exposed to the quality of the AI-generated advice, however, algorithm aversion is reduced substantially, to 53.2%. When the source of the advice is hidden, algorithm aversion falls further, to 46.3%. Taken together, our findings suggest that while humans initially exhibit strong resistance to AI-generated ethical advice, this aversion diminishes significantly once they experience the quality of AI guidance firsthand, indicating that trust in algorithmic ethical reasoning may be more malleable than previously assumed and could evolve as people gain direct exposure to AI’s capabilities in moral decision-making.