Saudi Cultural Missions Theses & Dissertations

Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10

Search Results

Now showing 1 - 9 of 9
  • ItemRestricted
    Leveraging Brain-Computer Interface Technology to Interpret Intentions and Enable Cognitive Human-Computer Interaction
    (University of Manchester, 2024) Alsaddique, Luay; Breitling, Rainer
    In this paper, I present the development, integration, and evaluation of a Brain–Computer Interface (BCI) system which showcases the accessibility and usability of a BCI headset for interacting with external devices and services. The paper initially provides a detailed survey of the history of BCI technology and gives a comprehensive overview of BCI paradigms and the underpinning biology of the brain, current BCI technologies, recent advances in the field, the BCI headset market, and prospective applications of the technology. The research focuses on leveraging BCI headsets within a BCI platform to interface with these external end-points through the Motor Imagery BCI paradigm. I present the design, implementation, and evaluation of a fully functioning, efficient, and versatile BCI system which can trigger real-world commands in devices and digital services. The BCI system demonstrates its versatility through use cases such as controlling IoT devices, infrared (IR) based devices, and interacting with advanced language models. The system's performance was quantified across various conditions, achieving detection probabilities exceeding 95%, with latency as low as 1.4 seconds when hosted on a laptop and 2.1 seconds when hosted on a Raspberry Pi. The paper concludes with a detailed analysis of the limitations and potential improvements of the newly developed system, and its implications for possible applications. It also includes a comparative evaluation of latency, power efficiency, and usability when hosting the BCI system on a laptop versus a Raspberry Pi.
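    A minimal sketch of the kind of pipeline described above, assuming pre-extracted motor-imagery features, an LDA classifier, and a hypothetical command map; it is an illustration, not the thesis implementation:
```python
# Minimal sketch of a motor-imagery BCI -> device-command trigger loop.
# Hypothetical features, labels, and command map; not the thesis system.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Assume pre-extracted band-power features per trial: shape (n_trials, n_features)
X_train = np.random.rand(200, 8)           # placeholder training features
y_train = np.random.randint(0, 2, 200)     # 0 = "rest", 1 = "imagined movement"

clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)

COMMANDS = {1: "turn_on_lamp"}             # map a detected intent to a device action

def on_new_window(features: np.ndarray) -> None:
    """Classify one EEG feature window and trigger the mapped command if confident."""
    proba = clf.predict_proba(features.reshape(1, -1))[0]
    label = int(proba.argmax())
    if label in COMMANDS and proba[label] > 0.95:   # threshold echoes the reported >95% detection
        print(f"Triggering: {COMMANDS[label]}")      # replace with an HTTP/IR call in practice

on_new_window(np.random.rand(8))   # with random data this may or may not cross the threshold
```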
  • ItemRestricted
    An AI-Driven, Secure, and Trustworthy Ranking System for Blockchain-Based Wallets
    (University of Technology Sydney, 2024-07-08) Almadani, Mwaheb; Hussain, Farookh
    The significance of blockchain security has gained considerable interest as blockchain technologies grow in popularity. The spectacular rise in cryptocurrency values has also increased the adoption of blockchain-based wallets (BW/BWs). This trend emphasizes the need for comprehensive security measures to protect digital assets, maintain transaction integrity, and preserve trust in blockchain networks. The most critical concern surrounding blockchain-based wallets is managing users' private keys, which are essential for authorizing transactions and accessing the digital cryptocurrencies stored on the blockchain network. In recent years, malicious actors have increased their efforts to compromise these private keys and take control of a BW's digital assets. Therefore, ensuring the security of private keys through rigorous security protocols is paramount to defend against unauthorized access and potential financial losses. This thesis investigates the integration of hard security, such as authentication techniques and access controls, and soft security measures, such as trust models and ranking systems, in the context of BWs. By incorporating tangible physical defenses (hard security) with intangible procedural strategies (soft security), we present a comprehensive framework for enhancing the security and trustworthiness of BW solutions. This is essential for the widespread adoption and use of blockchain technology in financial transactions and digital asset management. This thesis proposes a secure, intelligent, and trustworthy approach for BW solutions that incorporates 2FA and MFA as hard security measures and an AI-driven ranking system as a soft security measure. We have developed a BW website (BWW) with four authentication mechanisms, including factors such as TOTP and biometrics through facial recognition, allowing BW users to choose their preferred level of security. The BWW markedly improves the security of BW solutions by defending them against various threats, including sophisticated cyber-attacks, unauthorized access, and human-caused weaknesses. Moreover, we introduce a trust-based ranking system (TBW-RAnk) for BW solutions that transparently ranks BW solutions according to several objective and trusted criteria. TBW-RAnk is built using three AI models, namely a random forest classifier (RFC), a support vector classifier (SVC), and a deep neural network (DNN). It has two modes, general and customized, for a comprehensive and accurate assessment and recommendation for BW users. Consequently, BW users can make informed decisions and increase their security within the blockchain ecosystem. The proposed approach enhances the security and trustworthiness of BWs and increases their acceptance in the market.
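    As an illustration of the soft-security ranking idea, the sketch below averages the trust scores of the three model types named in the abstract (random forest, SVM, neural network) over hypothetical wallet criteria; the features, data, and averaging rule are assumptions, not TBW-RAnk itself:
```python
# Minimal sketch of an AI-driven wallet-ranking step in the spirit of the abstract.
# Feature names, data, and the soft-voting scheme are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: rows = wallet solutions, columns = trust criteria
# (e.g. supports_2fa, supports_biometrics, audit_score, breach_history, uptime)
X = np.random.rand(120, 5)
y = (X.mean(axis=1) > 0.5).astype(int)   # placeholder "trustworthy" label

models = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    SVC(probability=True, random_state=0),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
]
for m in models:
    m.fit(X, y)

def rank_wallets(candidates: np.ndarray) -> np.ndarray:
    """Average the three models' trust probabilities and return indices, best first."""
    scores = np.mean([m.predict_proba(candidates)[:, 1] for m in models], axis=0)
    return np.argsort(-scores)

print(rank_wallets(np.random.rand(4, 5)))   # e.g. [2 0 3 1]
```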
  • ItemRestricted
    Towards Numerical Reasoning in Machine Reading Comprehension
    (Imperial College London, 2024-02-01) Al-Negheimish, Hadeel; Russo, Alessandra; Madhyastha, Pranava
    Answering questions about a specific context often requires integrating multiple pieces of information and reasoning about them to arrive at the intended answer. Reasoning in natural language for machine reading comprehension (MRC) remains a significant challenge. In this thesis, we focus on numerical reasoning tasks. As opposed to current black-box approaches that provide little evidence of their reasoning process, we propose a novel approach that facilitates interpretable and verifiable reasoning by using Reasoning Templates for question decomposition. Our evaluations hinted at the existence of problematic behaviour in numerical reasoning models, underscoring the need for a better understanding of their capabilities. As a second contribution of this thesis, we conduct a controlled study to assess how well current models understand questions and to what extent such models base their answers on textual evidence. Our findings indicate that applying transformations that obscure or destroy the syntactic and semantic properties of the questions does not change the output of the top-performing models. This behaviour reveals serious holes in how the models work and calls into question evaluation paradigms that rely only on standard quantitative measures such as accuracy and F1 scores, as they lead to an illusion of progress. To improve the reliability of numerical reasoning models in MRC, we propose and demonstrate, as our third contribution, the effectiveness of a solution to one of these fundamental problems: catastrophic insensitivity to word order. We do this by Forced Invalidation: training the model to flag samples that cannot be reliably answered. We show it is highly effective at preserving word-order importance in machine reading comprehension tasks and generalises well to other natural language understanding tasks. While our Reasoning Templates are competitive with the state of the art on a single reasoning type, engineering them incurs considerable overhead. Leveraging our improved insights into natural language understanding and concurrent advancements in few-shot learning, we conduct a first investigation into overcoming these scalability limitations. Our fourth contribution combines large language models for question decomposition with symbolic rule learning for answer recomposition; with this approach, we surpass our previous results on Subtraction questions and generalise to more reasoning types.
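    A minimal sketch of the forced-invalidation idea described above: the training data is augmented with word-shuffled questions that must be labelled as unanswerable. The data format and label token are assumptions, not the thesis code:
```python
# Minimal sketch of forced invalidation: add word-shuffled questions that the model
# must label as unanswerable ("invalid"), so word order cannot be ignored.
import random

INVALID = "<invalid>"   # hypothetical label token for unanswerable samples

def shuffle_words(question: str, rng: random.Random) -> str:
    words = question.split()
    rng.shuffle(words)
    return " ".join(words)

def with_forced_invalidation(dataset, invalid_ratio=0.5, seed=0):
    """Return the dataset plus shuffled copies labelled as invalid."""
    rng = random.Random(seed)
    augmented = list(dataset)
    for question, answer in dataset:
        if rng.random() < invalid_ratio:
            augmented.append((shuffle_words(question, rng), INVALID))
    return augmented

data = [("How many yards longer was the second field goal than the first?", "12")]
print(with_forced_invalidation(data, invalid_ratio=1.0))
```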
  • ItemRestricted
    MULTI-TARGET REGRESSION APPLICATIONS FOR PREDICTING GENE EXPRESSION LEVELS
    (Saudi Digital Library, 2023-12-12) Altaweraqi, Nada; King, Ross
    The progress of cancer is subject to the activities of cellular networks, all of which are governed by the dynamics of various factors, both inside and outside the cell. Although the mechanisms of these networks remain enigmatic, they can be explored by studying gene expression levels. However, these are challenging to model and predict. Predictions of gene expression levels can be based on two approaches: firstly, mechanistic models, which simulate some aspects of biological systems, and secondly, machine learning models, built using empirical data. Both approaches are widely deployed, with limitations experienced on both sides. This thesis outlines a novel framework for integrating models representing mechanistic knowledge of signalling pathways into machine learning models. The latter are multi-target regression models that predict gene expression levels. The study proposes multiple representations of signalling pathways transformed into features describing different genes. The first is a graph-based representation which encodes interaction knowledge using graph heuristics and embedding methods. Applying multi-target regression stacking aided by common-neighbour features resulted in a noticeable improvement in predictions: the significance test yielded a p-value of 4.4e-244, which is strong evidence of a clear improvement in predictions from the proposed model. Our frameworks achieved better performance than the baseline after changing the graph-based algorithm, with clear superiority for DeepWalk-based models, which outperformed the baseline in 208 of 300 genes. Furthermore, when compared using the significance test, all methods that integrate pathway knowledge significantly outperformed the baseline. We also investigated the utility of the machine learning models for developing sound hypotheses of gene associations, and noticed that some of the knowledge retrieved from these models is reported in the literature. The second representation is a stochastic simulation model of signalling pathways, which reflects the activities of signalling pathways over time. As hypothesised, this model was found to surpass both the baseline and the DeepWalk-based model built using graph modelling techniques. The model built using this representation outperformed the baseline in 123 of 200 genes (p-value of 0.01). Finally, we present a surrogate modelling approach to reducing the impact of noise in gene expression data. The surrogate data is tested using association measures and proved to yield more accurate results than raw data.
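    The sketch below illustrates, under toy data and an assumed pathway graph, what multi-target regression stacking aided by neighbour features can look like; it is not the thesis pipeline:
```python
# Minimal sketch of multi-target regression stacking aided by pathway-graph neighbours.
# The toy pathway graph and random data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n_samples, n_features, n_genes = 150, 30, 5
X = rng.random((n_samples, n_features))   # input features (e.g. upstream signals)
Y = rng.random((n_samples, n_genes))      # targets: expression levels of 5 genes

# Toy signalling-pathway graph: gene index -> neighbouring gene indices
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

# Stage 1: independent multi-target base model
base = MultiOutputRegressor(RandomForestRegressor(n_estimators=50, random_state=0)).fit(X, Y)
P = base.predict(X)                       # base predictions, shape (n_samples, n_genes)

# Stage 2: per-gene meta-learner that also sees the base predictions of graph neighbours
meta = {}
for g in range(n_genes):
    cols = [g] + neighbours[g]
    meta[g] = Ridge().fit(np.hstack([X, P[:, cols]]), Y[:, g])

stacked = np.column_stack(
    [meta[g].predict(np.hstack([X, P[:, [g] + neighbours[g]]])) for g in range(n_genes)]
)
print(stacked.shape)   # (150, 5)
```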
  • ItemRestricted
    Automated Repair of Accessibility Issues in Mobile Applications
    (Saudi Digital Library, 2023-11-29) Alotaibi, Ali; Halfond, William GJ
    Mobile accessibility is more critical than ever due to the significant increase in mobile app usage, particularly among people with disabilities who rely on mobile devices to access essential information and services. People with vision and motor disabilities often use assistive technologies to interact with mobile applications. However, recent studies show that a significant percentage of mobile apps remain inaccessible due to layout accessibility issues, making them challenging to use for older adults and people with disabilities. Unfortunately, existing techniques are of limited help to developers in debugging these issues; they can only detect issues, not repair them. Therefore, the repair of layout accessibility issues remains a manual, labor-intensive, and error-prone process. Automated repair of layout accessibility issues is complicated by several challenges. First, a repair must account for multiple issues holistically in order to preserve the relative consistency of the original app design. Second, due to the complex relationships between UI components, there is no straightforward way of identifying the set of elements and properties that need to be modified for a given issue. Third, assuming the relevant views and properties could be identified, the number of possible changes grows exponentially as more elements and properties need to be considered. Finally, a change in one element can create cascading changes that lead to new problems in other areas of the UI. Together, these challenges make a seemingly simple repair difficult to achieve. In this dissertation, I introduce a repair framework that builds and analyzes models of the User Interface (UI) and leverages multi-objective genetic search algorithms to repair layout accessibility issues. To evaluate the effectiveness of the framework, I instantiated it to repair the different known types of layout accessibility issues in mobile apps. The empirical evaluation of these instantiations on real-world mobile apps demonstrated their effectiveness in repairing these issues. In addition, I conducted user studies to assess the impact of the repairs on UI quality and aesthetics. The results demonstrated that the repaired UIs were not only more accessible but also did not distort or significantly change the original design. Overall, these results are positive and confirm my dissertation's hypothesis that a repair framework employing a multi-objective genetic search-based approach can be highly effective in automatically repairing layout accessibility issues in mobile applications.
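    A toy sketch of the multi-objective search idea: candidates that fix small-touch-target violations are traded off against drift from the original layout, and only non-dominated candidates are kept. The UI model, operators, and thresholds are assumptions, not the dissertation's framework:
```python
# Toy multi-objective genetic search over UI element sizes: one objective counts touch
# targets below a minimum size (an accessibility violation), the other measures how far
# the candidate drifts from the original design. Data and operators are illustrative.
import random

MIN_TARGET_DP = 48
original = [40, 44, 52, 38, 60]           # original widths (dp) of five touch targets

def objectives(candidate):
    violations = sum(1 for w in candidate if w < MIN_TARGET_DP)
    drift = sum(abs(w - o) for w, o in zip(candidate, original))
    return violations, drift

def dominated(a, b):
    """True if objective vector a is dominated by b (b no worse in both, better in one)."""
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

def mutate(candidate, rng):
    return [max(24, w + rng.choice([-4, 0, 4, 8])) for w in candidate]

rng = random.Random(0)
population = [mutate(original, rng) for _ in range(20)]
for _ in range(30):                        # a few generations
    population += [mutate(rng.choice(population), rng) for _ in range(20)]
    scored = [(objectives(c), c) for c in population]
    # keep non-dominated candidates (the Pareto front), capped at 20
    front = [c for s, c in scored if not any(dominated(s, t) for t, _ in scored if t != s)]
    population = front[:20] or population[:20]

print(sorted(objectives(c) for c in population)[:3])
```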
  • ItemRestricted
    iVFC: A proactive methodology for task offloading in VFC
    (Saudi Digital Library, 2023-11-16) Hamdi, Aisha Muhammad A; Hussain, Farookh
    In vehicular fog computing (VFC), the idle resources of moving and parked vehicles can be used for computation to minimize the processing delay of compute-intensive vehicular applications by offloading tasks from the edge servers or vehicles to nearby fog node vehicles for execution. However, the offloading decision is a complicated process, and the selection of an appropriate target node is a crucial decision that the source node has to make. Therefore, this thesis introduces an innovative and proactive methodology for task offloading in VFC. The key novelty of this approach is the use of utilization-based prediction techniques to predict a vehicle's future computational resource requirements. This predictive approach enables the intelligent selection of target nodes for task offloading, ensuring tasks are offloaded before resource exhaustion occurs. Moreover, the proposed methodology includes an incentive mechanism to motivate fog node vehicles to accept incoming tasks and a service provider selection mechanism to help an overloaded node find the optimal target node vehicle that can effectively handle the offloaded task. The proactive nature of this approach promises an efficient, real-time, and responsive task offloading process, which is essential for meeting the demands of Internet of Vehicles applications.
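    A minimal sketch of utilization-based target selection, assuming a simple linear-trend predictor over recent CPU-utilization samples; the predictor and the candidate data are illustrative, not the thesis methodology:
```python
# Minimal sketch of utilization-based target selection for task offloading in VFC.
# The linear-trend predictor and candidate data are illustrative assumptions.
import numpy as np

def predict_next_utilization(history):
    """Fit a linear trend to recent CPU-utilization samples and extrapolate one step."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return float(np.clip(slope * len(history) + intercept, 0.0, 1.0))

def select_target(candidates, task_demand):
    """Pick the fog-node vehicle whose predicted spare capacity best fits the task."""
    best, best_spare = None, -1.0
    for node_id, history in candidates.items():
        spare = 1.0 - predict_next_utilization(history)
        if spare >= task_demand and spare > best_spare:
            best, best_spare = node_id, spare
    return best   # None means no suitable node -> keep the task local or defer

candidates = {
    "vehicle_A": [0.20, 0.25, 0.30, 0.40],   # rising load: likely a poor target soon
    "vehicle_B": [0.70, 0.60, 0.50, 0.45],   # falling load: capacity is being freed
}
print(select_target(candidates, task_demand=0.4))   # expected: "vehicle_B"
```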
  • ItemRestricted
    Information Integrity: From a Lens of Explainable AI With Cultural and Social Behaviors
    (2023-08-11) Alharbi, Raed; Thai, My T
    The rapid development of artificial intelligence (AI), such as machine learning (ML) and deep neural networks (DNNs), has changed the way information is processed and used. However, along with these advancements, challenges to information integrity have emerged. The widespread dissemination of misinformation through digital platforms, coupled with the lack of transparency in black-box ML models, has raised concerns about the reliability and trustworthiness of information for expert users (ML developers) and non-expert users (end-users). Unfortunately, employing eXplainable Artificial Intelligence (XAI) approaches in real-world applications to improve the trustworthiness of DNN models is still far from straightforward. Motivated by these observations, this thesis concentrates on two directions.
    • Misinformation Mitigation. In the first direction, we leverage XAI techniques to mitigate misinformation through three main approaches: evaluating the trustworthiness of fake news detection models from a user perspective, studying the influence of social and cultural behavior on misinformation propagation, and analyzing the diffusion of descriptive norms in social media networks to promote positive norms and combat misinformation.
    • Developing Advanced ML Models. In the second direction, we turn our attention to developing ML models from two aspects. The first aspect exploits XAI behaviors to provide a new method that simultaneously preserves the performance and explainability of student models, which in their primitive form provide little transparency. In the second aspect, we develop the Temporal graph Fake News Detection Framework (T-FND), which effectively captures the heterogeneous and repetitive characteristics of fake news behavior.
  • ItemRestricted
    Investigation on Design and Development Methods for Internet of Things
    (Saudi Digital Library, 2023-09-06) AlZahrani, Yazeed; Shen, Jun; Yan, Jun
    The thesis mainly focuses on development methodologies for the Internet of Things (IoT). A detailed literature survey is presented to discuss various challenges in software development and in the design and deployment of hardware. The thesis deals with efficient development methodologies for the deployment of IoT systems. Efficient hardware and software development reduces the risk of system bugs and faults. The optimal placement of IoT devices is a major challenge for monitoring applications. Qualitative Spatial Reasoning (QSR) and Qualitative Temporal Reasoning (QTR) methodologies are proposed to build software systems. The proposed hybrid methodology includes the features of QSR, QTR, and traditional data-based methodologies. The hybrid methodology is proposed to build software systems and direct them to the specific goal of obtaining outputs inherent to the process. The hybrid methodology includes tool support and is detailed, integrated, and fits the general proposal. This methodology follows the structure of spatio-temporal reasoning goals. Object-oriented IoT device placement is the major goal of the proposed work. Segmentation and object detection are used to divide the region into sub-regions. Coverage and connectivity are maintained through the optimal placement of IoT devices using the RCC8 and TPCC algorithms. Over the years, IoT has offered different solutions in all kinds of areas and contexts. The diversity of these challenges makes it hard to grasp the underlying principles of the different solutions and to design an appropriate custom implementation in the IoT space. One of the major objectives of this thesis is to study numerous production-ready IoT offerings, extract recurring proven solution principles, and classify them into spatial patterns. The method of goal refinement is employed so that complex challenges are solved by breaking them down into simple and achievable sub-goals. The work deals with the major sub-goals, e.g. efficient coverage of the field, connectivity of the IoT devices, spatio-temporal aggregation of the data, and estimation of spatially connected regions of event detection. We have proposed methods to achieve each sub-goal for all the different types of spatial patterns. The spatial patterns developed can be used in ongoing and future research on the IoT to understand its principles, which will, in turn, promote the better development of existing and new IoT devices. The next objective is to utilize the IoT network for enterprise architecture (EA) based IoT applications. EA defines the structure and operation of an organization to determine the most effective way for it to achieve its objectives. Digital transformation of EA is achieved through analysis, planning, design, and implementation, which translates enterprise goals into an IoT-enabled enterprise design. A blueprint is necessary for readying the IT resources that support business services and processes. A systematic approach is proposed for the planning and development of EA for IoT applications. An Enterprise Interface (EI) layer is proposed to efficiently categorize the data based on local and global factors. The clustered data is then utilized by the end-users. A novel four-tier structure is proposed for enterprise applications. We analyzed the challenges, contextualized them, and offered solutions and recommendations.
The last objective of the thesis is to develop an energy-efficient data consistency method. Data consistency is a challenge when designing an energy-efficient medium access control protocol for the IoT. The energy-efficient data consistency method makes the protocol suitable for low, medium, and high data-rate applications. The idea of an energy-efficient data consistency protocol is proposed together with data aggregation. The proposed protocol efficiently utilizes the data rate and saves energy. An optimal sampling rate selection method is introduced for maintaining the data consistency of continuous and periodic monitoring nodes in an energy-efficient manner. In the starting phase, the nodes are classified into event-driven and continuous monitoring nodes using a machine learning based logistic classification method. The sampling rate of continuous monitoring nodes is optimized during the setup phase using an optimized sampling rate data aggregation algorithm. Furthermore, an energy-efficient time division multiple access (EETDMA) protocol is used for continuous monitoring of IoT devices, and an energy-efficient bit-map-assisted (EEBMA) protocol is proposed for the event-driven nodes.
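    A minimal sketch of the node-classification step described above, using logistic regression to separate continuous-monitoring from event-driven nodes and pick a sampling period; the features and the rate rule are assumptions, not the proposed protocol:
```python
# Minimal sketch of classifying IoT nodes (continuous vs. event-driven) with logistic
# regression before choosing a per-node sampling period. Features and the rate rule
# are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per node: [mean inter-report interval (s), reading variance, burstiness]
X = np.array([
    [1.0, 0.02, 0.1],    # reports steadily -> continuous monitoring
    [2.0, 0.03, 0.2],
    [60.0, 0.50, 0.9],   # reports rarely, in bursts -> event-driven
    [45.0, 0.40, 0.8],
])
y = np.array([0, 0, 1, 1])   # 0 = continuous, 1 = event-driven

clf = LogisticRegression().fit(X, y)

def sampling_period(node_features, base_period=1.0, idle_period=30.0):
    """Continuous nodes keep a short optimised period; event-driven nodes sleep between events."""
    is_event_driven = clf.predict(np.asarray(node_features).reshape(1, -1))[0] == 1
    return idle_period if is_event_driven else base_period

print(sampling_period([1.5, 0.02, 0.15]))   # continuous-looking node -> 1.0
```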
  • ItemRestricted
    Artificial Immune Systems for Detecting Unknown Malware in the IoT
    (Queen Mary University of London, 2023-01-27) Alrubayyi, Hadeel; Goteng, Gokop; Jaber, Mona
    With the expansion of the digital world, the number of Internet of Things (IoT) devices is growing dramatically. IoT devices have limited computational power and small memory, and they are not part of traditional computer networks. Consequently, existing and often complex security methods are unsuitable for malware detection in IoT networks. This has become a significant concern with the advent of increasingly unpredictable and innovative cyber-attacks. In this context, artificial immune systems (AIS) have emerged as effective IoT malware detection mechanisms with low computational requirements. In this research, we present a critical analysis to highlight the limitations of the state-of-the-art AIS solutions and identify promising research directions. Next, we propose the Negative-Positive-Selection (NPS) method, an AIS-based method for malware detection. The NPS is suited to the IoT's computational restrictions and security challenges. The NPS performance is benchmarked against the state-of-the-art using multiple real-time datasets. The simulation results show a 21% improvement in malware detection and a 65% reduction in the number of detectors. Then, we examine the potential gains and limitations of AIS solutions under realistic implementation scenarios. We design a framework to mimic real-life IoT systems. The objective is to evaluate the method's lightweight design, fault tolerance, and detection performance with regard to the system constraints. We demonstrate that AIS solutions successfully detect unknown malware in the most challenging IoT environments in terms of memory capacity and processing power. Furthermore, the results with different system architectures reveal the AIS solutions' ability to transfer learning between IoT devices. Transfer learning is a critical feature in the presence of highly constrained devices in the network. More importantly, we highlight that the simulation environment cannot be taken at face value: in reality, AIS malware detection accuracy for IoT systems is likely to be close to 10% worse than simulation results, as indicated by the study results.
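    A toy sketch of the negative-selection half of such an AIS detector (the positive-selection part is omitted): random detectors that match benign traffic are discarded, and unseen samples matching any surviving detector are flagged. The matching rule, sizes, and data are assumptions:
```python
# Toy negative-selection sketch: keep only detectors that do NOT match benign ("self")
# signatures, then flag unseen samples that match any surviving detector.
import random

def matches(detector, sample, r=6):
    """r-contiguous-bits matching rule: match if any r consecutive bits agree."""
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors=50, length=16, seed=0):
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.randint(0, 1) for _ in range(length)]
        if not any(matches(cand, s) for s in self_set):   # discard detectors that match self
            detectors.append(cand)
    return detectors

rng = random.Random(1)
self_set = [[rng.randint(0, 1) for _ in range(16)] for _ in range(20)]   # benign signatures
detectors = generate_detectors(self_set)

unknown = [rng.randint(0, 1) for _ in range(16)]
print("flagged as malicious?", any(matches(d, unknown) for d in detectors))
```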

Copyright owned by the Saudi Digital Library (SDL) © 2024