The growing deployment of artificial intelligence (AI) in domains such as trade, healthcare, and the judiciary has complicated the pursuit of compensatory justice under public international law. Although AI systems lack the legal personality of conventional legal actors, their decisions increasingly affect rights, duties, and outcomes across borders. This raises a central question: who bears responsibility when algorithmic decisions cause harm, and how can victims obtain adequate reparation? This study examines the relationship between compensatory justice and AI within international law. It analyses how international legal instruments, including the International Covenant on Civil and Political Rights, the UN Guiding Principles on Business and Human Rights, and relevant jurisprudence, address, or fail to address, liability and reparation mechanisms for harm caused by autonomous or semi-autonomous systems. The analysis finds that existing frameworks are inadequate for attributing responsibility, particularly where AI systems are opaque or decentralised. Three case studies illustrate these shortcomings: AI-driven credit scoring in international finance, diagnostic tools in cross-border healthcare services, and algorithmic risk assessments in judicial decision-making. Each presents a distinct obstacle to delivering compensatory justice across jurisdictions. The paper concludes with strategic legal recommendations for clarifying treaty obligations, developing soft-law instruments, and fostering global cooperation on algorithmic governance, with the aim of enabling persons harmed by AI to seek redress through international law and of ensuring that the law remains fair and consistent amid rapid technological change.
Published in: Veredas do Direito: Direito Ambiental e Desenvolvimento Sustentável
Volume 23, Issue 5, pp. e235665-e235665