Browsing by Author "Alaqil, Mohammed Saleh"
Item Restricted

Regulating AI for National Security: A Critical Evaluation of UK AI Regulation (Saudi Digital Library, 2025) Alaqil, Mohammed Saleh; Wagner, Steven

This research examines the integration of Artificial Intelligence (AI), and specifically Generative AI (GenAI), within UK national security institutions. It highlights regulatory challenges that remain under-researched in the literature. Despite its operational advantages, GenAI introduces considerable risks, including adversarial interference through data poisoning and prompt injection attacks, algorithmic opacity that undermines transparency, divergence from intended operational subgoals, and dilution of institutional accountability. The UK's existing regulatory framework remains largely non-statutory, principles-based and reliant on voluntary sector-specific compliance, which raises critical concerns around coherence, enforcement capability and strategic readiness. Drawing upon Reflexive Control Theory and Handel's Politics of Intelligence Theory, this research finds that although the UK's principles-based approach to AI regulation provides valuable flexibility and promotes innovation, it remains insufficient to safeguard national security against the rising risks posed by GenAI.

This research identifies nine critical vulnerabilities, including adversarial manipulation, black-box opacity, disinformation, skill shortages, and unclear institutional accountability. These risks are compounded by the voluntary nature of existing AI compliance, the lack of statutory duties, and inconsistent regulation across sectors. The absence of a centralised authority and the limited technical capacity of some regulators weaken preparedness for AI-related threats. Additionally, public trust is at risk due to a lack of transparency in decision-making. Although current frameworks enable national security agencies to adapt AI tools rapidly, this speed often comes at the expense of coherent governance.
The findings make clear that without targeted reform, the UK's AI regulation may exacerbate institutional blind spots and operational vulnerabilities, undermining both legitimacy and effectiveness in intelligence settings. To overcome these challenges, this research proposes targeted regulatory reforms: binding statutory obligations for high-risk AI, an independent AI supervisory authority, compulsory pre-deployment testing, governance frameworks for agentic AI systems, international cooperation through a cross-border AI Safety Accord, and real-time AI incident monitoring platforms.
