The rapid advancement of artificial intelligence (AI) and robotics introduces profound challenges to the military sector, particularly through the development of Lethal Autonomous Weapon Systems (LAWS). These systems, capable of identifying and engaging targets without direct human intervention, raise critical ethical and legal questions concerning accountability and human oversight. Their integration into modern arsenals necessitates a rigorous examination of the prevailing international legal and ethical landscape, particularly as these technologies challenge foundational tenets of International Humanitarian Law (IHL). Central to this discourse is the difficulty autonomous robotic systems may face in complying with the principle of distinction: the technical and moral challenge of reliably differentiating active combatants from civilians, or healthy soldiers from those rendered hors de combat by injury. This study investigates whether the explicit exclusion of military and defense applications from the European Union AI Act (Regulation (EU) 2024/1689) creates a regulatory gap in the governance of such systems (Artificial Intelligence Act, 2024). Adopting a doctrinal legal methodology combined with policy analysis, the study examines the regulatory framework established by the EU AI Act and evaluates its implications for the governance of LAWS in light of the IHL principles of distinction, proportionality, and accountability, analyzing how the transition from automation to full algorithmic autonomy strains these requirements. The article further examines the strategic implications of automation bias and the potential erosion of human judgment in high-stakes decision-making, a concern compounded by the absence of any commonly agreed definition of LAWS. Ultimately, the fragmentation of the regulatory landscape, exemplified by the exclusion of military AI from the 2024 EU AI Act, underscores the urgent need for a unified international governance body to ensure that the rapid evolution of autonomous force does not outpace the ethical and legal frameworks it is intended to serve.
Published in: European Scientific Journal ESJ
Volume 22, Issue 8, pp. 1-1