Artificial intelligence (AI) has rapidly become embedded in core domains of international concern, from autonomous weapons and cyber operations to biometric border control, digital trade, and financial regulation. While existing debates in international law tend to focus on discrete questions, such as the legality of lethal autonomous weapons or the human rights implications of algorithmic surveillance, much less attention has been paid to whether the international legal system, as a structure, is ready to govern AI as a cross-cutting phenomenon. This article offers a structural readiness assessment of international law for artificial intelligence. It develops a three-part framework centred on normative coverage (the extent to which existing rules and principles apply to AI-mediated conduct), institutional capacity (the ability of international bodies to interpret, monitor, and enforce those norms), and adaptive flexibility (the system's capacity to adjust to rapid technological change without constant crisis-driven reform). Drawing on doctrinal analysis and case studies relating to autonomous weapons, AI-enabled surveillance, and cross-border algorithmic regulation, the article argues that international law is normatively rich but institutionally thin and procedurally slow in AI-sensitive areas, producing fragmented, reactive, and often ad hoc responses. It concludes that meaningful readiness for AI will depend less on drafting entirely new AI treaties and more on clarifying responsibility for AI-mediated harm, strengthening the oversight mandates of existing institutions, and developing interpretive principles tailored to algorithmic opacity, explainability, and systemic risk.
Published in: International Journal of Advanced Research
Volume 14, Issue 02, pp. 464-476