Facial analysis AI applications have recently come under heavy criticism. They have been accused, for instance, of being discriminatory, racist, and even pseudoscientific. In this paper, we assess the latter accusation. We argue that, in problematic cases, facial analysis AI should be considered a form of social pseudotechnology, even if it is not pseudoscientific. We also propose a new two-part definition of pseudotechnology: either there is no justification for the claim that the technology can achieve its intended practical purpose better than random chance through its stated operating principles, or it can achieve its purpose, but there is no justification for the claim that it does so through the features claimed to be crucial to its functioning. Stricter definitions of pseudotechnology have difficulty capturing phenomena such as facial analysis AI. When our definition is extended to social technologies, it becomes clear that social pseudotechnologies are more common than previously suggested. Because social pseudotechnologies influence how people act and think, they are often not perceived as pseudotechnologies. This resembles the phenomenon of reactivity, whereby scientific classifications affect people’s self-perceptions and perceptions of others. We support our claims with case studies of facial analysis AI technologies and pseudotechnologies, and we engage with the current computer, behavioural, and social scientific literature to assess the extent to which the use of facial analysis AI systems is justified in these cases.
Published in: European Journal for Philosophy of Science
Volume 16, Issue 2