Artificial Intelligence, Deep Learning, and the Black Box Opacity: International Law and Modern Governance Framework for Legal Compliance and Individual Responsibility

Date

2025

Publisher

Saudi Digital Library

Abstract

This dissertation examines the unprecedented challenges that deep learning models pose to the international humanitarian law frameworks governing armed conflict. It addresses three critical questions that arise when autonomous weapons systems employ deep learning in their decision-making: whether such systems can comply with international humanitarian law, whether advanced artificial intelligence should be granted legal personality under international law and international humanitarian law, and how individual criminal responsibility can be assigned under international law.

Chapter Two provides a comprehensive technical analysis of deep learning architectures, including convolutional neural networks, recurrent neural networks, generative adversarial networks, and transformer networks, and their military applications in target recognition, threat assessment, and autonomous operations. The analysis demonstrates that properly trained deep learning systems can achieve exceptional accuracy in tasks relevant to the principles of distinction and proportionality. This technical capability, however, exists alongside a fundamental limitation: the “black box challenge,” whereby decisions emerge from statistical pattern recognition across billions of parameters in ways that remain incomprehensible to human operators, creating unprecedented difficulties for legal compliance and individual responsibility.

Chapter Three evaluates whether granting legal personality to advanced artificial intelligence could address emerging responsibility gaps. Applying an analytical pragmatic approach through the dual criteria of “value context” and “legitimacy context,” the analysis reaches a definitive negative conclusion: granting artificial intelligence legal personality would contradict international humanitarian law’s human-centered foundations, fail to fill responsibility gaps, and potentially shield humans from liability while introducing conceptual incoherence into established normative structures.
Chapter Four demonstrates that deep learning, as a black box model in statistical learning, fundamentally challenges traditional international frameworks for individual criminal responsibility. The analysis reveals structural incompatibilities between algorithmic opacity and the requirements of the Rome Statute for mens rea and actus reus. Similarly, command responsibility doctrines face parallel challenges when commanders possess formal control over systems whose decision-making processes transcend human comprehension. The dissertation proposes a modified command responsibility framework recognizing commanders as “AI enablers” rather than traditional superiors, establishing reasonable governance standards for controlled environments while imposing strict liability for high-risk deployments. This framework preserves meaningful accountability while acknowledging technological constraints, shifting focus from comprehending opaque statistical processes to governing deployment decisions and operational contexts within commanders’ control.

Keywords

Artificial Intelligence, Deep Learning, Black Box, International Law, Legal Governance, Legal Compliance, Individual Responsibility


Copyright owned by the Saudi Digital Library (SDL) © 2026