SACM - United Kingdom
Permanent URI for this collection: https://drepo.sdl.edu.sa/handle/20.500.14154/9667
4 results
Search Results
Item Restricted: Leveraging Brain-Computer Interface Technology to Interpret Intentions and Enable Cognitive Human-Computer Interaction (University of Manchester, 2024). Alsaddique, Luay; Breitling, Rainer.

In this paper, I present the development, integration, and evaluation of a Brain–Computer Interface (BCI) system which showcases the accessibility and usability of a BCI headset for interacting with external devices and services. The paper initially provides a detailed survey of the history of BCI technology and gives a comprehensive overview of BCI paradigms, the underpinning biology of the brain, current BCI technologies, recent advances in the field, the BCI headset market, and prospective applications of the technology. The research focuses on leveraging BCI headsets within a BCI platform to interface with these external endpoints through the Motor Imagery BCI paradigm. I present the design, implementation, and evaluation of a fully functioning, efficient, and versatile BCI system which can trigger real-world commands in devices and digital services. The BCI system demonstrates its versatility through use cases such as controlling IoT devices and infrared (IR)-based devices, and interacting with advanced language models. The system's performance was quantified across various conditions, achieving detection probabilities exceeding 95%, with latency as low as 1.4 seconds when hosted on a laptop and 2.1 seconds when hosted on a Raspberry Pi. The paper concludes with a detailed analysis of the limitations and potential improvements of the newly developed system, and its implications for possible applications. It also includes a comparative evaluation of latency, power efficiency, and usability when hosting the BCI system on a laptop versus a Raspberry Pi.
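The archived record does not include the implementation, but the command-triggering step the abstract describes can be sketched briefly. The following is a minimal, hypothetical Python example, not the author's code: it maps Motor Imagery class labels to HTTP commands for an IoT device, with the class labels, endpoint URL, and confidence threshold assumed purely for illustration.

```python
# Minimal illustrative sketch, not the thesis implementation: forwarding a
# Motor Imagery (MI) classification to an IoT device over HTTP. The class
# labels, endpoint URL, and 0.95 threshold are assumptions for illustration.
import json
import urllib.error
import urllib.request

CONFIDENCE_THRESHOLD = 0.95  # mirrors the >95% detection probability reported above

# Hypothetical mapping from MI classes to device commands
COMMAND_MAP = {
    "left_hand": {"device": "lamp", "action": "on"},
    "right_hand": {"device": "lamp", "action": "off"},
    "feet": {"device": "ir_blaster", "action": "tv_power"},
}

def dispatch(label: str, confidence: float, endpoint: str) -> bool:
    """Send the mapped command if the classification is confident enough."""
    if confidence < CONFIDENCE_THRESHOLD or label not in COMMAND_MAP:
        return False  # ignore low-confidence or unmapped detections
    payload = json.dumps(COMMAND_MAP[label]).encode("utf-8")
    request = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(request) as response:
            return response.status == 200
    except urllib.error.URLError:
        return False  # hypothetical endpoint not reachable in this sketch

if __name__ == "__main__":
    # e.g. an imagined left-hand movement switches a (hypothetical) lamp on
    dispatch("left_hand", 0.97, "http://192.168.1.10/api/command")
```

In the reported system the same dispatch layer would also cover IR devices and language-model services; here a single REST endpoint stands in for all of them.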
Item Restricted: Towards Numerical Reasoning in Machine Reading Comprehension (Imperial College London, 2024-02-01). Al-Negheimish, Hadeel; Russo, Alessandra; Madhyastha, Pranava.

Answering questions about a specific context often requires integrating multiple pieces of information and reasoning about them to arrive at the intended answer. Reasoning in natural language for machine reading comprehension (MRC) remains a significant challenge. In this thesis, we focus on numerical reasoning tasks. As opposed to current black-box approaches that provide little evidence of their reasoning process, we propose a novel approach that facilitates interpretable and verifiable reasoning by using Reasoning Templates for question decomposition. Our evaluations hinted at the existence of problematic behaviour in numerical reasoning models, underscoring the need for a better understanding of their capabilities. As a second contribution of this thesis, we conduct a controlled study to assess how well current models understand questions and to what extent they base their answers on textual evidence. Our findings indicate that applying transformations that obscure or destroy the syntactic and semantic properties of the questions does not change the output of the top-performing models. This behaviour reveals serious flaws in how the models work, and it calls into question evaluation paradigms that rely only on standard quantitative measures such as accuracy and F1 scores, as they lead to an illusion of progress. To improve the reliability of numerical reasoning models in MRC, we propose and demonstrate, as our third contribution, the effectiveness of a solution to one of these fundamental problems: catastrophic insensitivity to word order. We do this by Forced Invalidation: training the model to flag samples that cannot be reliably answered. We show that it is highly effective at preserving the importance of word order in machine reading comprehension tasks and generalises well to other natural language understanding tasks. While our Reasoning Templates are competitive with the state of the art on a single reasoning type, engineering them incurs considerable overhead. Leveraging our improved insights into natural language understanding and concurrent advances in few-shot learning, we conduct a first investigation into overcoming these scalability limitations. Our fourth contribution combines large language models for question decomposition with symbolic rule learning for answer recomposition; with this approach, we surpass our previous results on Subtraction questions and generalise to more reasoning types.

Item Restricted: MULTI-TARGET REGRESSION APPLICATIONS FOR PREDICTING GENE EXPRESSION LEVELS (Saudi Digital Library, 2023-12-12). Altaweraqi, Nada; King, Ross.

The progress of cancer is subject to the activities of cellular networks, all of which are governed by the dynamics of various factors both inside and outside the cell. Although the mechanisms of these networks remain enigmatic, they can be explored by studying gene expression levels. However, these are challenging to model and predict. Predictions of gene expression levels can be based on two approaches: firstly, mechanistic models, which simulate some aspects of biological systems, and secondly, machine learning models, built using empirical data. Both approaches are widely deployed, with limitations experienced on both sides. This thesis outlines a novel framework for integrating models representing mechanistic knowledge of signalling pathways into machine learning models. The latter are multi-target regression models that predict gene expression levels. The study proposes multiple representations of signalling pathways transformed into features describing different genes. The first is a graph-based representation which encodes interaction knowledge using graph heuristics and embedding methods. Applying multi-target regression stacking aided by the common-neighbour features resulted in a noticeable improvement in predictions; the significance test yielded a p-value of 4.4e-244, which is strong evidence of a clear improvement in predictions from the proposed model. Our frameworks achieved better performance than the baseline after changing the graph-based algorithm, with clear superiority for DeepWalk-based models, which outperformed the baseline in 208 of 300 genes. Furthermore, when compared using the significance test, all methods that integrate pathway knowledge significantly outperformed the baseline. We also investigated the utility of the machine learning models for developing sound hypotheses of gene associations, and noticed that some of the knowledge retrieved from these models is reported in the literature. The second representation is a stochastic simulation model of signalling pathways, which reflects the activities of signalling pathways over time. As hypothesised, this model was found to surpass both the baseline and the DeepWalk-based model built using graph modelling techniques; the model built using this representation outperformed the baseline in 123 genes out of 200 (p-value of 0.01). Finally, we present a surrogate modelling approach to reducing the impact of noise in gene expression data. The surrogate data was tested using association measures and proved to yield more accurate results than the raw data.
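As an illustration of the multi-target regression stacking idea described above, the following is a minimal, hypothetical Python sketch using scikit-learn on synthetic data. It is not the thesis pipeline; the "graph features" here are random placeholders standing in for the common-neighbour or DeepWalk-derived features the abstract mentions.

```python
# Minimal sketch of multi-target regression stacking with pathway-derived
# features, on synthetic data; not the thesis pipeline. Stage one predicts all
# gene targets independently, stage two re-predicts them with the stage-one
# outputs appended as extra features so targets can exploit their correlations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n_samples, n_base_features, n_graph_features, n_genes = 200, 20, 8, 5

# Placeholder inputs: in the thesis, the graph features would be derived from
# the signalling-pathway graph (e.g. common-neighbour counts or DeepWalk
# embeddings), not drawn at random as they are here.
X_base = rng.normal(size=(n_samples, n_base_features))
X_graph = rng.normal(size=(n_samples, n_graph_features))
X = np.hstack([X_base, X_graph])
Y = rng.normal(size=(n_samples, n_genes))   # expression levels of target genes

# Stage one: independent multi-target model.
stage_one = MultiOutputRegressor(RandomForestRegressor(n_estimators=50, random_state=0))
stage_one.fit(X, Y)

# Stage two: stack stage-one predictions onto the feature matrix.
X_stacked = np.hstack([X, stage_one.predict(X)])
stage_two = MultiOutputRegressor(RandomForestRegressor(n_estimators=50, random_state=0))
stage_two.fit(X_stacked, Y)

Y_hat = stage_two.predict(np.hstack([X, stage_one.predict(X)]))
print("Stacked predictions shape:", Y_hat.shape)   # (200, 5)
```

In practice the stage-one predictions fed to the second stage would be produced out-of-fold (for example with cross_val_predict) so that the stacked model does not leak its own training targets.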
Item Restricted: Artificial Immune Systems for Detecting Unknown Malware in the IoT (Queen Mary University of London, 2023-01-27). Alrubayyi, Hadeel; Goteng, Gokop; Jaber, Mona.

With the expansion of the digital world, the number of Internet of Things (IoT) devices is growing dramatically. IoT devices have limited computational power and small memory, and they are not part of traditional computer networks. Consequently, existing and often complex security methods are unsuitable for malware detection in IoT networks. This has become a significant concern with the advent of increasingly unpredictable and innovative cyber-attacks. In this context, artificial immune systems (AIS) have emerged as effective IoT malware detection mechanisms with low computational requirements. In this research, we present a critical analysis that highlights the limitations of state-of-the-art AIS solutions and identifies promising research directions. Next, we propose the Negative-Positive-Selection (NPS) method, an AIS-based method for malware detection that suits the computational restrictions and security challenges of the IoT. The performance of NPS is benchmarked against the state of the art using multiple real-time datasets; the simulation results show a 21% improvement in malware detection and a 65% reduction in the number of detectors. We then examine the potential gains and limitations of AIS solutions under realistic implementation scenarios, designing a framework that mimics real-life IoT systems. The objective is to evaluate the method's lightweight operation, fault tolerance, and detection performance with regard to the system constraints. We demonstrate that AIS solutions successfully detect unknown malware in the most challenging IoT environments in terms of memory capacity and processing power. Furthermore, the results obtained with different system architectures reveal the ability of AIS solutions to transfer learning between IoT devices, a critical feature in the presence of highly constrained devices in the network. More importantly, we highlight that the simulation environment cannot be taken at face value: in reality, AIS malware detection accuracy for IoT systems is likely to be close to 10% worse than simulation results, as indicated by the study.
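To make the AIS idea above concrete, here is a minimal, hypothetical Python sketch of the classic negative-selection step that such detectors build on. It is not the NPS method from the thesis, which the abstract indicates combines negative and positive selection stages; the feature vectors, radii, and detector counts below are illustrative assumptions on toy 2-D data.

```python
# Minimal sketch of a classic negative-selection detector, the building block
# behind AIS-based anomaly/malware detection. This is NOT the thesis's
# Negative-Positive-Selection (NPS) method; dimensions, radii, and detector
# counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def generate_detectors(self_samples, n_detectors=200, self_radius=0.2, dim=2):
    """Keep random candidate detectors that do NOT cover any benign ('self') sample."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.uniform(0.0, 1.0, size=dim)
        if np.min(np.linalg.norm(self_samples - candidate, axis=1)) > self_radius:
            detectors.append(candidate)          # candidate lies outside self space
    return np.array(detectors)

def is_malicious(sample, detectors, detection_radius=0.15):
    """Flag a sample if any detector lies within the detection radius."""
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= detection_radius))

# Toy data: benign traffic features cluster in one corner of the unit square.
benign = rng.uniform(0.0, 0.4, size=(300, 2))
detectors = generate_detectors(benign)

print(is_malicious(rng.uniform(0.0, 0.4, size=2), detectors))   # benign-like: likely False
print(is_malicious(rng.uniform(0.7, 1.0, size=2), detectors))   # unseen region: likely True
```

The sketch shows only the generic negative-selection component: detectors are kept precisely because they do not match known benign behaviour, so anything they do match is treated as potentially malicious, including malware never seen during training.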