Saudi Cultural Missions Theses & Dissertations

Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10

  • Item (Restricted)
    Adversarial Machine Learning: Safeguarding AI Models from Attacks
    (Lancaster University, 2025-01-10) Alammar, Ghaida; Bilal, Muhammad
    The field of AML has gained considerable popularity over the years, with researchers seeking to explore gaps and new opportunities for growth. The goal of this report is to offer an in-depth survey of adversarial attacks and defences in machine learning, examining gaps in current algorithms and their implications for deployed systems. By exploring evasion, poisoning, extraction, and inference attacks, the report reveals the weaknesses of existing methodologies such as adversarial training, data sanitization, and differential privacy. These techniques often fail to generalise to newer threats, raising concerns about their effectiveness in practical use. The research contributes to the field by conducting an extensive literature review of 35 articles and highlighting the need for adaptive and diverse defence strategies, as well as empirical studies to evaluate the effectiveness of AML mechanisms. Strategic suggestions include incorporating continuous training frameworks, optimising real-time monitoring processes, and improving privacy-preserving methods to safeguard confidential information. This analysis is intended to offer practical insight that fosters the development of robust AI systems able to withstand adversarial threats across vital sectors. The study examines the basic design and consequences of various attacks, including the impact of subtle manipulation of input data on model behaviour and privacy. The report further addresses the modern challenges posed by large language models (LLMs) and autonomous systems, and emphasises the significance of robust protection against adversarial attacks in strategic areas. It also evaluates present-day defence mechanisms, including adversarial training, input preprocessing, and model hardening for stronger, more reliable models. By evaluating the efficiency of these defences and identifying key areas for improvement, the dissertation provides valuable insights into enhancing the security and reliability of ML systems. The analysis of these attacks and defences exposes the need for continual advances in data protection across systems.
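The evasion attacks surveyed above can be illustrated with a minimal, self-contained sketch (not from the thesis; the model, data, and step size are all illustrative) of a one-step FGSM-style perturbation against a toy linear classifier:

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM-style evasion on a linear classifier.

    Logistic loss L = log(1 + exp(-y * (w.x + b))) with label y in {-1, +1};
    the attack adds eps * sign(dL/dx) to the input.
    """
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    coeff = -y / (1.0 + math.exp(margin))   # dL/dx = coeff * w
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(coeff * wi) for xi, wi in zip(x, w)]

# Toy model: classify by sign(w.x + b)
w, b = [1.0, -2.0], 0.0
x = [0.5, -0.5]                              # clean input, score 1.5 -> class +1
x_adv = fgsm_perturb(x, w, b, y=+1, eps=1.0)
print(x_adv)                                 # [-0.5, 0.5]: score drops to -1.5, label flips
```

A single gradient-sign step is enough to flip this toy model's decision; this is the failure mode that defences such as adversarial training aim to harden against.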
  • Item (Restricted)
    Evaluation and Detection of Adversarial Attacks in ML-based NIDS
    (Newcastle University, 2024) Alatwi, Huda Ali O; Morisset, Charles
    A Network Intrusion Detection System (NIDS) monitors network traffic to detect unauthorized access and potential security breaches. A Machine Learning (ML)-based NIDS is a security mechanism that uses ML algorithms to automatically detect and identify suspicious activities or potential threats in a network by analyzing traffic patterns, distinguishing between normal and malicious behaviors, and alerting on or blocking unauthorized access. Despite high accuracy, ML-based NIDS are vulnerable to adversarial attacks, in which attackers modify malicious traffic to evade detection and transfer these tactics across systems. To the best of our knowledge, several crucial research gaps in this area remain unaddressed. First, there are no systematic threat models for identifying and analyzing potential threats and vulnerabilities in ML-based NIDS. This lack of structured threat modeling hinders the development of comprehensive defense strategies and leaves these systems vulnerable to adversarial attacks that exploit unknown weaknesses in the ML algorithms or system architecture. Second, the current literature employs generic adversarial attacks, designed mainly for the image recognition domain, to assess the resilience of ML-based NIDS, but no research has verified the realism of these attacks or their compliance with network domain constraints. Investigating whether these attacks produce valid network traffic is crucial to determining their real-world threat level and the suitability of ML-based NIDS for deployment. Another gap is the lack of comprehensive evaluations that cover a wide range of models, attack types, and defense strategies using contemporary network traffic data, which makes it difficult to verify the generalizability and applicability of the findings to real-world settings. The absence of standardized metrics further hampers the ability to evaluate and compare the resilience of ML-based NIDS to adversarial attacks.
Finally, there is no lightweight solution that effectively detects and classifies adversarial traffic, scoring high accuracy on both clean and perturbed data with proven efficiency on recent datasets and across various attack types and defenses. These gaps hinder the robustness of ML-based NIDS against adversarial attacks. Therefore, this Ph.D. thesis aims to address these vulnerabilities to enhance the resilience of ML-based NIDS. The overall contributions include: 1) threat modeling for ML-based NIDS using the STRIDE and Attack Tree methodologies; 2) an investigation of the realism and performance of generic adversarial attacks against DL-based NIDS; 3) a comprehensive evaluation of adversarial attacks' performance consistency, models' resilience, and defenses' effectiveness; and 4) Adversarial-Resilient NIDS, a framework for detecting and classifying adversarial attacks against ML-based NIDS.
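The domain-constraint issue raised above (whether perturbed feature vectors still correspond to realizable network traffic) can be sketched as a simple validity check. The feature names and ranges below are illustrative assumptions, not the thesis's actual schema:

```python
# Illustrative domain constraints: real network features cannot take
# arbitrary real values (byte counts are non-negative integers, flags
# are binary, durations are non-negative).
CONSTRAINTS = {
    "duration":     {"min": 0.0},                  # seconds, non-negative
    "src_bytes":    {"min": 0, "integer": True},   # byte counts are integers
    "packet_count": {"min": 1, "integer": True},
    "tcp_flag":     {"values": {0, 1}},            # binary flag
}

def violates_constraints(record):
    """Return the feature names whose values are not realizable traffic."""
    bad = []
    for name, rule in CONSTRAINTS.items():
        v = record.get(name)
        if v is None:
            continue
        if "values" in rule and v not in rule["values"]:
            bad.append(name)
        elif rule.get("integer") and float(v) != int(v):
            bad.append(name)
        elif "min" in rule and v < rule["min"]:
            bad.append(name)
    return bad

clean = {"duration": 1.2, "src_bytes": 512, "packet_count": 4, "tcp_flag": 1}
perturbed = {"duration": -0.3, "src_bytes": 511.7, "packet_count": 4, "tcp_flag": 1}
print(violates_constraints(clean))      # []
print(violates_constraints(perturbed))  # ['duration', 'src_bytes']
```

An image-domain attack that freely perturbs every feature would typically fail such a check, which is why verifying attack realism matters before drawing conclusions about NIDS deployability.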

Copyright owned by the Saudi Digital Library (SDL) © 2025