Saudi Cultural Missions Theses & Dissertations
Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10
Search Results (3 results)
Item (Restricted): Evaluation and Detection of Adversarial Attacks in ML-based NIDS (Newcastle University, 2024)
Alatwi, Huda Ali O; Morisset, Charles

A Network Intrusion Detection System (NIDS) monitors network traffic to detect unauthorized access and potential security breaches. A Machine Learning (ML)-based NIDS is a security mechanism that uses ML algorithms to automatically detect and identify suspicious activities or potential threats in a network by analyzing traffic patterns, distinguishing between normal and malicious behaviors, and alerting on or blocking unauthorized access. Despite their high accuracy, ML-based NIDS are vulnerable to adversarial attacks, in which attackers modify malicious traffic to evade detection and transfer these tactics across various systems. To the best of our knowledge, several crucial research gaps in this area have not yet been addressed. First, there are no systematic threat models for identifying and analyzing potential threats and vulnerabilities in ML-based NIDS. This lack of structured threat modeling hinders the development of comprehensive defense strategies and leaves these systems vulnerable to adversarial attacks that exploit unknown weaknesses in the ML algorithms or system architecture. Second, the current literature employs generic adversarial attacks, designed mainly for the image recognition domain, to assess the resilience of ML-based NIDS, but no research has verified the realism of these attacks or their compliance with network domain constraints. Investigating whether these attacks produce valid network traffic is crucial to determining their real-world threat level and the suitability of ML-based NIDS for deployment. Another gap in the literature is the lack of comprehensive evaluations covering a wide range of models, attack types, and defense strategies on contemporary network traffic data; this makes it difficult to verify the generalizability and applicability of the findings to real-world settings.
The absence of standardized metrics further hampers the ability to evaluate and compare the resilience of ML-based NIDS to adversarial attacks. Finally, there is no lightweight solution that effectively detects and classifies adversarial traffic while maintaining high accuracy on both clean and perturbed data, with proven efficiency on recent datasets and across various attack types and defenses. These gaps hinder the robustness of ML-based NIDS against adversarial attacks. This Ph.D. thesis therefore aims to address these vulnerabilities and enhance the resilience of ML-based NIDS. The overall contributions include: 1) a threat model for ML-based NIDS built with the STRIDE and Attack Tree methodologies; 2) an investigation of the realism and performance of generic adversarial attacks against DL-based NIDS; 3) a comprehensive evaluation of the performance consistency of adversarial attacks, the resilience of models, and the effectiveness of defenses; 4) Adversarial-Resilient NIDS, a framework for detecting and classifying adversarial attacks against ML-based NIDS.

Item (Restricted): Synonym-based Adversarial Attacks in Arabic Text Classification Systems (Clarkson University, 2024-05-21)
Alshahrani, Norah Falah S; Matthews, Jeanna

Text classification systems have been proven vulnerable to adversarial text examples: modified versions of original text examples that often go unnoticed by human eyes yet can force text classification models to alter their predictions. Research quantifying the impact of adversarial text attacks has so far been applied mostly to models trained in English. In this thesis, we introduce the first word-level study of adversarial attacks in Arabic. Specifically, we use a synonym (word-level) attack based on a Masked Language Modeling (MLM) task with a BERT model in a black-box setting to assess the robustness of state-of-the-art Arabic text classification models to adversarial attacks.
To evaluate the grammatical correctness and semantic similarity of the adversarial examples newly produced by our synonym BERT-based attack, we invited four human evaluators to assess and compare the adversarial examples with their original counterparts. We also study the transferability of these Arabic adversarial examples to various models and investigate the effectiveness of defense mechanisms against them on the BERT models. We find that fine-tuned BERT models were more susceptible to our synonym attacks than the other Deep Neural Network (DNN) models we trained, such as WordCNN and WordLSTM. We also find that fine-tuned BERT models were more susceptible to transferred attacks. Lastly, we find that fine-tuned BERT models regain at least 2% in accuracy after applying adversarial training as an initial defense mechanism. We share our code scripts and trained models on GitHub at https://github.com/NorahAlshahrani/bert_synonym_attack.

Item (Restricted): Examining Adversarial Examples as Defensive Approach Against Web Fingerprinting Attacks (Saudi Digital Library, 2023)
Alzamil, Layla; Elahi, Tariq

In an age of online surveillance and growing privacy and security concerns about individuals' activities over the internet, the Tor browser is a widely used anonymisation network offering security- and privacy-enhancing features to protect users online. However, web fingerprinting (WF) attacks are a persistent threat that aims to deanonymise users' browsing activities over Tor. This interdisciplinary project contributes to defending against WF attacks by employing an "attack-on-attack" approach, in which Adversarial Example (AE) attacks are launched to exploit existing vulnerabilities in the neural network architecture. The FGSM and DeepFool construction methods are implemented to introduce perturbed data to these models and lead them to misclassify, significantly decreasing the classifiers' prediction accuracy.
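Two of the abstracts above refer to the Fast Gradient Sign Method (FGSM) family of generic adversarial attacks. As a rough illustration only, the sketch below shows the core FGSM step against a toy logistic-regression "traffic classifier". Everything here (the model, its weights, the feature vector, and the epsilon value) is a hypothetical example and is not taken from any of the theses; it also illustrates the realism concern raised in the first abstract, since FGSM perturbs features freely and may yield traffic that violates network-domain constraints.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """One FGSM step: move x by epsilon in the sign of the loss gradient,
    pushing a binary logistic classifier toward misclassification."""
    p = sigmoid(w @ x + b)       # model's predicted probability of "malicious"
    grad_x = (p - y_true) * w    # d(cross-entropy loss)/dx for logistic regression
    return x + epsilon * np.sign(grad_x)

# Toy detector: weights of a hypothetical flow classifier (label 1 = malicious).
w = np.array([1.5, -2.0, 0.7])
b = -0.2
x = np.array([1.0, -1.0, 1.0])   # a flow the model confidently flags

p_clean = sigmoid(w @ x + b)                          # ~0.98: detected
x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=0.5)
p_adv = sigmoid(w @ x_adv + b)                        # lower detection score
assert p_adv < p_clean
```

Note that FGSM adjusts every feature independently; as the first thesis argues, real network features (packet counts, protocol flags, header lengths) are constrained and interdependent, so an unconstrained perturbation like this may not correspond to valid traffic.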