Saudi Cultural Missions Theses & Dissertations

Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10

Search Results

Now showing 1 - 3 of 3
  • Item (Restricted Access)
    Evaluation and Detection of Adversarial Attacks in ML-based NIDS
    (Newcastle University, 2024) Alatwi, Huda Ali O; Morisset, Charles
    A Network Intrusion Detection System (NIDS) monitors network traffic to detect unauthorized access and potential security breaches. A Machine Learning (ML)-based NIDS is a security mechanism that uses ML algorithms to automatically detect and identify suspicious activities or potential threats in a network by analyzing traffic patterns, distinguishing between normal and malicious behaviors, and alerting on or blocking unauthorized access. Despite their high accuracy, ML-based NIDS are vulnerable to adversarial attacks, in which attackers modify malicious traffic to evade detection and transfer these tactics across various systems. To the best of our knowledge, several crucial research gaps persist in this area. First, there are no systematic threat models for identifying and analyzing potential threats and vulnerabilities in ML-based NIDS. This lack of structured threat modeling hinders the development of comprehensive defense strategies and leaves these systems vulnerable to adversarial attacks that exploit unknown weaknesses in the ML algorithms or system architecture. Second, the current literature employs generic adversarial attacks, designed mainly for the image recognition domain, to assess the resilience of ML-based NIDS, but no research has verified the realism of these attacks or their compliance with network domain constraints. Investigating whether these attacks produce valid network traffic is crucial to determining their real-world threat level and the suitability of ML-based NIDS for deployment. Another gap is the lack of comprehensive evaluations that cover a wide range of models, attack types, and defense strategies using contemporary network traffic data, which makes it difficult to verify the generalizability and applicability of the findings to real-world settings. The absence of standardized metrics further hampers the ability to evaluate and compare the resilience of ML-based NIDS to adversarial attacks. Finally, there is no lightweight solution that effectively detects and classifies adversarial traffic while achieving high accuracy on both clean and perturbed data, with proven efficiency on recent datasets and across various attack types and defenses. These gaps hinder the robustness of ML-based NIDS against adversarial attacks. This Ph.D. thesis therefore aims to address these vulnerabilities and enhance the resilience of ML-based NIDS. The overall contributions include: 1) a threat model for ML-based NIDS using the STRIDE and Attack Tree methodologies; 2) an investigation of the realism and performance of generic adversarial attacks against DL-based NIDS; 3) a comprehensive evaluation of adversarial attacks' performance consistency, models' resilience, and defenses' effectiveness; 4) Adversarial-Resilient NIDS, a framework for detecting and classifying adversarial attacks against ML-based NIDS. (A brief illustrative sketch of adversarial-traffic detection appears after this listing.)
  • Item (Restricted Access)
    EXPLORING THE TRANSFERABILITY OF ADVERSARIAL EXAMPLES IN NATURAL LANGUAGE PROCESSING
    (Texas A&M University-Kingsville, 2024-06-21) Allahyani, Samah; Nijim, Mais
    In recent years, there has been growing concern about the vulnerability of machine learning models, particularly in the field of natural language processing (NLP). Many NLP tasks, such as text classification, machine translation, and question answering, are at risk of adversarial attacks, in which maliciously crafted inputs cause models to make incorrect predictions or classifications. Adversarial examples created on one model can also fool another model. This transferability of adversarial examples has garnered significant attention, as it is a crucial property for facilitating black-box attacks. In our comprehensive research, we employed an array of widely used NLP models for sentiment analysis and text classification tasks. We first generated adversarial examples for a set of source models using five state-of-the-art attack methods. We then evaluated the transferability of these adversarial examples by testing their effectiveness on different target models, exploring the main factors that affect transferability, such as model architecture, dataset characteristics, and perturbation techniques. Moreover, we extended our investigation to transferability-enhancing techniques: we assessed two transferability-enhancing methods and leveraged the power of Large Language Models (LLMs) to generate natural adversarial examples that show moderate transferability across different NLP architectures. Through our research, we aim to provide insights into the transferability of adversarial examples in NLP and shed light on the factors that contribute to it. This knowledge can then be used to develop more robust and resilient NLP models that are less susceptible to adversarial attacks, ultimately enhancing the security and reliability of these systems in various applications. (A brief illustrative sketch of the transferability measurement appears after this listing.)
  • Item (Restricted Access)
    DETECTING MANIPULATED AND ADVERSARIAL IMAGES: A COMPREHENSIVE STUDY OF REAL-WORLD APPLICATIONS
    (UCF STARS, 2023-11-06) Alkhowaiter, Mohammed; Zou, Cliff
    The great advance of communication technology has come with a rapid increase in disinformation of many kinds and shapes; manipulated images are one of the primary examples of disinformation that can affect many users. Such activity can severely impact public behavior, attitude, and belief, or sway viewers' perception in any malicious or benign direction. Additionally, adversarial attacks targeting deep learning models pose a severe risk to computer vision applications. This dissertation explores ways of detecting and resisting manipulated or adversarial images. The first contribution evaluates perceptual hashing (pHash) algorithms for detecting image manipulation on social media platforms such as Facebook and Twitter. The study demonstrates the differences in image processing between the two platforms and proposes a new approach to finding the optimal detection threshold for each algorithm. The next contribution develops a new pHash-based authentication scheme to detect fake imagery on social media networks, using a self-supervised learning framework and a contrastive loss. In addition, a fake-image sample generator is developed to cover three major image manipulation operations (copy-move, splicing, removal). The proposed authentication technique outperforms state-of-the-art pHash methods. The third contribution addresses the challenge of adversarial attacks against deep learning models. A new adversarial-aware deep learning system is proposed that uses a classical machine learning model as a secondary verification system to complement the primary deep learning model in image classification. The proposed approach outperforms current state-of-the-art adversarial defense systems. Finally, the fourth contribution fuses big data from extra-military resources to support military decision-making. The study proposes a workflow; reviews data availability, security, privacy, and integrity challenges; and suggests solutions. A demonstration of the proposed image authentication is introduced to prevent wrong decisions and increase integrity. Overall, the dissertation provides practical solutions for detecting manipulated and adversarial images and integrates the proposed solutions into a workflow that supports military decision-making. (A brief illustrative sketch of pHash-based manipulation detection appears after this listing.)
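The first item above (Alatwi & Morisset) describes a lightweight framework for detecting adversarially perturbed traffic. As a rough illustration of that idea only, the Python sketch below trains a small random forest to separate clean flow features from perturbed ones; the synthetic data, the load_flows() helper, and the choice of classifier are assumptions for illustration, not the thesis's actual pipeline, datasets, or attacks.

```python
# Illustrative sketch: a lightweight detector that separates clean network-flow
# features from adversarially perturbed ones. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def load_flows(n=1000, d=20, seed=0):
    """Placeholder for real NIDS flow features; not part of the thesis."""
    rng = np.random.default_rng(seed)
    clean = rng.normal(0.0, 1.0, size=(n, d))
    # Crude stand-in for adversarial perturbation: small bounded noise on clean flows.
    perturbed = clean + rng.uniform(-0.3, 0.3, size=(n, d))
    X = np.vstack([clean, perturbed])
    y = np.array([0] * n + [1] * n)  # 0 = clean, 1 = adversarially perturbed
    return X, y

X, y = load_flows()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# A small random forest keeps the detector lightweight, in the spirit of the stated goal.
detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(X_tr, y_tr)
print(classification_report(y_te, detector.predict(X_te)))
```

In practice such a detector would sit alongside the primary NIDS model and be evaluated on both clean and perturbed traffic, as the abstract describes.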
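The second item above (Allahyani & Nijim) measures how well adversarial examples crafted against one NLP model transfer to a different one. The sketch below shows only the core measurement under toy assumptions: two small scikit-learn text classifiers stand in for the source and target models, and a couple of hand-written perturbations stand in for attack-generated adversarial examples (the thesis instead uses five state-of-the-art attack methods and real NLP architectures).

```python
# Illustrative sketch: transfer rate = fraction of adversarial inputs that fool
# the source model AND also fool an independently trained target model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = ["great movie", "terrible plot", "loved it", "awful acting"]
train_labels = [1, 0, 1, 0]

# Source and target models with different architectures, trained on the same toy data.
source = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train_texts, train_labels)
target = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(train_texts, train_labels)

# Placeholder adversarial examples: hand-written perturbations of positive reviews.
adversarials = ["gr8 movie", "lovd it"]
true_labels = [1, 1]

def transfer_rate(src, tgt, advs, labels):
    """Share of adversarial inputs fooling the source that also fool the target."""
    fooled_src = [(s, y) for s, y in zip(advs, labels) if src.predict([s])[0] != y]
    if not fooled_src:
        return 0.0
    fooled_both = sum(1 for s, y in fooled_src if tgt.predict([s])[0] != y)
    return fooled_both / len(fooled_src)

print("transfer rate:", transfer_rate(source, target, adversarials, true_labels))
```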
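The third item above (Alkhowaiter & Zou) flags an image as manipulated when the perceptual-hash distance to its reference exceeds a tuned threshold. Below is a minimal sketch using the Pillow and imagehash packages; the synthetic images and the threshold value of 6 are illustrative assumptions, since the thesis derives optimal thresholds per algorithm and per platform (e.g. Facebook vs. Twitter re-encoding).

```python
# Illustrative sketch: pHash comparison against a reference image.
# A benign re-encode changes the hash little; a content edit moves it past the threshold.
import imagehash
from PIL import Image, ImageDraw

# Synthetic stand-ins for an original image and a locally manipulated copy.
original = Image.new("RGB", (256, 256), "white")
ImageDraw.Draw(original).rectangle([40, 40, 200, 200], fill="blue")

manipulated = original.copy()
ImageDraw.Draw(manipulated).ellipse([90, 90, 160, 160], fill="red")  # splice-like edit

THRESHOLD = 6  # assumed detection threshold, in Hamming-distance bits

h_ref = imagehash.phash(original)
h_img = imagehash.phash(manipulated)
distance = h_ref - h_img  # Hamming distance between 64-bit perceptual hashes

print(f"pHash distance: {distance}")
print("manipulated" if distance > THRESHOLD else "benign re-encode / unchanged")
```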

Copyright owned by the Saudi Digital Library (SDL) © 2025