Surveillance technology is increasingly deployed in both private and public spaces, expanding monitoring capacity while raising grave ethical concerns, chief among them invasion of privacy, discriminatory algorithms, and the absence of sufficient legal accountability. The breakneck pace of technological advance allows these issues to be brushed aside, endangering marginalized groups and civil liberties. This paper presents a novel, quantifiable Ethical Risk Score (ERS) model grounded in ethical theory mapping, mathematical risk modeling, and case analysis. Unlike prior work that treats ethics as intangible, this model is measurable and scalable in its capacity to assess AI surveillance activity. The utilitarian argument is grounded in real-world empirical evidence on the most hazardous application of facial recognition technology: policing. Analytic insights are translated into policy and design implications to ensure policy relevance. Findings indicate that Facial Recognition in Policing received the highest Ethical Risk Score (77.6%) and Retail Consumer Tracking the lowest (45.0%), underscoring the need for context-specific regulation coupled with ethical safeguards.
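The abstract describes the ERS as a quantifiable, scalable score. A minimal sketch of how such a score could be computed as a weighted average over ethical risk dimensions is shown below; the dimension names, weights, and per-dimension scores are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of a weighted Ethical Risk Score (ERS).
# Dimensions, weights, and scores here are illustrative assumptions only.

def ethical_risk_score(scores: dict, weights: dict) -> float:
    """Return the weighted average of per-dimension risk scores (0-100 scale)."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same dimensions")
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Illustrative risk dimensions for one surveillance application
weights = {"privacy": 0.35, "bias": 0.30, "accountability": 0.20, "transparency": 0.15}
scores = {"privacy": 90, "bias": 85, "accountability": 70, "transparency": 60}

print(f"ERS: {ethical_risk_score(scores, weights):.1f}%")
```

Normalizing by the total weight keeps the score on the same 0-100 scale as the inputs, which makes scores comparable across applications even if the weighting scheme changes between contexts.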