Regulating AI for National Security: A Critical Evaluation of UK AI Regulation

dc.contributor.advisor: Wagner, Steven
dc.contributor.author: Alaqil, Mohammed Saleh
dc.date.accessioned: 2026-04-26T20:10:28Z
dc.date.issued: 2025
dc.description.abstract: This research examines the integration of Artificial Intelligence (AI), and specifically Generative AI (GenAI), within UK national security institutions. It highlights regulatory challenges that remain under-researched in the literature. Despite its operational advantages, GenAI introduces considerable risks, including adversarial interference through data poisoning and prompt injection attacks, algorithmic opacity that undermines transparency, divergence from intended operational subgoals, and dilution of institutional accountability. The UK's existing regulatory framework remains largely non-statutory, principles-based and reliant on voluntary sector-specific compliance, which raises critical concerns around coherence, enforcement capability and strategic readiness. Drawing upon Reflexive Control Theory and Handel's Politics of Intelligence Theory, this research finds that although the UK's principles-based approach to AI regulation provides valuable flexibility and promotes innovation, it remains insufficient to safeguard national security against the rising risks posed by GenAI. The research identifies nine critical vulnerabilities, including adversarial manipulation, black-box opacity, disinformation, skill shortages, and unclear institutional accountability. These risks are compounded by the voluntary nature of existing AI compliance, the lack of statutory duties and inconsistent regulation across sectors. The absence of a centralised authority and limited technical capacity among some regulators weaken preparedness for AI-related threats. Additionally, public trust is at risk due to a lack of transparency in decision-making. Although current frameworks enable national security agencies to adapt AI tools rapidly, this speed often comes at the expense of coherent governance.
The findings make clear that, without targeted reform, the UK's AI regulation may exacerbate institutional blind spots and operational vulnerabilities, undermining both legitimacy and effectiveness in intelligence settings. To overcome these challenges, this research proposes targeted regulatory reforms: binding statutory obligations for high-risk AI, an independent AI supervisory authority, compulsory pre-deployment testing, governance frameworks for agentic AI systems, international cooperation through a cross-border AI Safety Accord, and real-time AI incident monitoring platforms.
dc.format.extent: 72
dc.identifier.citation: Chicago Style
dc.identifier.uri: https://hdl.handle.net/20.500.14154/78765
dc.language.iso: en
dc.publisher: Saudi Digital Library
dc.subject: Chapter 1: Introduction
dc.subject: Chapter 2: Literature Review
dc.subject: Chapter 3: Effectiveness of UK Regulations in Mitigating National Security Risks posed by AI
dc.subject: Chapter 4: Use of AI in UK Intelligence: Risks and Proposed Regulatory Responses
dc.subject: Chapter 5: Conclusion
dc.title: Regulating AI for National Security: A Critical Evaluation of UK AI Regulation
dc.type: Thesis
sdl.degree.department: Brunel University London
sdl.degree.discipline: Intelligence and Security Studies
sdl.degree.grantor: Brunel University London
sdl.degree.name: Master of Arts

Files

Original bundle

Name: SACM-Dissertation.pdf
Size: 941.84 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.61 KB
Format: Item-specific license agreed to upon submission

Copyright owned by the Saudi Digital Library (SDL) © 2026