SACM - United Kingdom

Permanent URI for this collection: https://drepo.sdl.edu.sa/handle/20.500.14154/9667

Search Results

Now showing 1 - 2 of 2
  • Item (Restricted)
    Evaluation and Detection of Adversarial Attacks in ML-based NIDS
    (Newcastle University, 2024) Alatwi, Huda Ali O; Morisset, Charles
    A Network Intrusion Detection System (NIDS) monitors network traffic to detect unauthorized access and potential security breaches. A Machine Learning (ML)-based NIDS is a security mechanism that uses ML algorithms to automatically detect and identify suspicious activities or potential threats in a network by analyzing traffic patterns, distinguishing between normal and malicious behaviors, and alerting on or blocking unauthorized access. Despite their high accuracy, ML-based NIDS are vulnerable to adversarial attacks, in which attackers modify malicious traffic to evade detection and transfer these tactics across various systems. To the best of our knowledge, several crucial research gaps persist in this area. First, there are no systematic threat models for identifying and analyzing potential threats and vulnerabilities in ML-based NIDS. This lack of structured threat modeling hinders the development of comprehensive defense strategies and leaves these systems vulnerable to adversarial attacks that exploit unknown weaknesses in the ML algorithms or system architecture. Second, the current literature employs generic adversarial attacks, designed mainly for the image recognition domain, to assess the resilience of ML-based NIDS, but no research has verified the realism of these attacks or their compliance with network-domain constraints. Investigating whether these attacks produce valid network traffic is crucial to determining their real-world threat level and the suitability of ML-based NIDS for deployment (a minimal validity-check sketch appears after this listing). Another gap in the literature is the lack of comprehensive evaluations that cover a wide range of models, attack types, and defense strategies using contemporary network traffic data; this makes it difficult to verify the generalizability and applicability of the findings to real-world settings. The absence of standardized metrics further hampers the ability to evaluate and compare the resilience of ML-based NIDS to adversarial attacks. Finally, there is no lightweight solution that effectively detects and classifies adversarial traffic while scoring high accuracy on both clean and perturbed data, with proven efficiency on recent datasets and across various attack types and defenses. These gaps hinder the robustness of ML-based NIDS against adversarial attacks. This Ph.D. thesis therefore aims to address these vulnerabilities to enhance the resilience of ML-based NIDS. The overall contributions include: 1) a threat model for ML-based NIDS using the STRIDE and Attack Tree methodologies; 2) an investigation of the realism and performance of generic adversarial attacks against DL-based NIDS; 3) a comprehensive evaluation of adversarial attacks' performance consistency, models' resilience, and defenses' effectiveness; and 4) Adversarial-Resilient NIDS, a framework for detecting and classifying adversarial attacks against ML-based NIDS.
  • Item (Restricted)
    Examining Adversarial Examples as Defensive Approach Against Web Fingerprinting Attacks
    (Saudi Digital Library, 2023) Alzamil, Layla; Elahi, Tariq
    In the age of online surveillance, privacy and security concerns about individuals' activities over the internet have grown. The Tor Browser is a widely used client for the Tor anonymisation network, offering security- and privacy-enhancing features to protect users online. However, web fingerprinting (WF) attacks remain a challenging threat that aims to deanonymise users' browsing activities over Tor. This interdisciplinary project contributes to defending against WF attacks by employing the "attack-on-attack" approach, in which Adversarial Example (AE) attacks are launched to exploit existing vulnerabilities in the neural network architectures underlying WF classifiers. The FGSM and DeepFool construction methods are implemented to feed perturbed data to these models and lead them to misclassify, significantly decreasing the classifiers' prediction accuracy (an illustrative FGSM sketch appears after this listing).
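The first abstract asks whether generic, image-domain adversarial attacks produce valid network traffic. The following minimal sketch illustrates that question; the toy model, the four flow features, and the constraint bounds are assumptions invented for illustration, not the thesis's implementation. It applies one FGSM step to a flow-feature vector, then checks simple network-domain constraints that an image-domain attack knows nothing about:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy NIDS classifier over 4 flow features (illustrative assumption):
# [duration_s, total_pkts, total_bytes, mean_pkt_size]
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.tensor([[1.5, 20.0, 9000.0, 450.0]], requires_grad=True)  # one malicious flow
y = torch.tensor([1])                                               # label 1 = malicious

# FGSM: a single signed-gradient step that increases the classifier's loss
loss_fn(model(x), y).backward()
x_adv = (x + 0.5 * x.grad.sign()).detach()

# Network-domain validity checks a generic image-domain attack ignores
duration, pkts, total_bytes, mean_size = (float(v) for v in x_adv[0])
valid = (
    duration >= 0                                   # durations cannot be negative
    and pkts >= 1 and pkts.is_integer()             # packet counts are whole numbers
    and total_bytes >= pkts * 20                    # at least a minimal header per packet
    and abs(mean_size - total_bytes / pkts) < 1e-3  # derived feature must stay consistent
)
print("adversarial flow:", x_adv[0].tolist())
print("valid network flow:", valid)
```

In a typical run the perturbed flow fails these checks (for example, the packet count becomes fractional, or the mean packet size no longer matches bytes divided by packets), which is exactly the realism gap the thesis investigates.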
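The second abstract perturbs traffic traces with FGSM and DeepFool so that a WF classifier misclassifies. Below is a hedged sketch of the FGSM half only, assuming a toy CNN over ±1 packet-direction sequences; the architecture, trace encoding, and epsilon are invented for illustration, and DeepFool would instead iterate minimal-norm steps toward the nearest decision boundary:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

N_SITES, TRACE_LEN = 5, 100

# Toy WF classifier over packet-direction sequences (+1 outgoing, -1 incoming)
clf = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * TRACE_LEN, N_SITES),
)
loss_fn = nn.CrossEntropyLoss()

trace = (torch.randint(0, 2, (1, 1, TRACE_LEN)).float() * 2 - 1).requires_grad_(True)
site = torch.tensor([3])  # true site label for this trace

# FGSM: push the trace in the direction that maximises the classifier's loss
loss_fn(clf(trace), site).backward()
adv_trace = (trace + 0.8 * trace.grad.sign()).detach()

before = clf(trace).argmax(dim=1).item()
after = clf(adv_trace).argmax(dim=1).item()
print(f"predicted site before: {before}, after perturbation: {after}")
```

With an untrained toy model the labels are arbitrary; the point is the mechanics: the perturbation moves the trace across the model's decision boundary, which is the vulnerability an AE-based defence against WF classifiers exploits.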

Copyright owned by the Saudi Digital Library (SDL) © 2025