Saudi Cultural Missions Theses & Dissertations
Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10
Search Results (88 results)
Item Restricted
Measuring Human’s Trust in Robots in Real-time During Human-Robot Interaction (Swansea University, 2025)
Alzahrani, Abdullah Saad; Muneeb, Imtiaz Ahmad

This thesis presents a novel, holistic framework for understanding, measuring, and optimising human trust in robots, integrating cultural factors, mathematical modelling, physiological indicators, and behavioural analysis to establish foundational methodologies for trust-aware robotic systems. Through this comprehensive approach, we address the critical challenge of trust calibration in human-robot interaction (HRI) across diverse contexts. Trust is essential for effective HRI, impacting user acceptance, safety, and overall task performance in both collaborative and competitive settings. This thesis investigated a multi-faceted approach to understanding, modelling, and optimising human trust in robots across various HRI contexts. First, we explored cultural and contextual differences in trust, conducting cross-cultural studies in Saudi Arabia and the United Kingdom. Findings showed that trust factors such as controllability, usability, and risk perception vary significantly across cultures and HRI scenarios, highlighting the need for flexible, adaptive trust models that can accommodate these dynamics. Building on these cultural insights as a critical dimension of our holistic trust framework, we developed a mathematical model that emulates the layered framework of trust (initial, situational, and learned) to estimate trust in real-time. Experimental validation through repeated interactions demonstrated the model's ability to dynamically calibrate trust, with both trust perception scores (TPS) and interaction sessions serving as significant predictors. This model showed promise for adaptive HRI systems capable of responding to evolving trust states. To further enhance our comprehensive trust measurement approach, this thesis explored physiological behaviours (PBs) as objective indicators.
By using electrodermal activity (EDA), blood volume pulse (BVP), heart rate (HR), skin temperature (SKT), eye blinking rate (BR), and blinking duration (BD), we showed that specific PBs (HR, SKT) vary between trust and distrust states and can effectively predict trust levels in real-time. Extending this approach, we compared PB data across competitive and collaborative contexts and employed incremental transfer learning to improve predictive accuracy across different interaction settings. Recognising the potential of less intrusive trust indicators, we also examined vocal and non-vocal cues, such as pitch, speech rate, facial expressions, and blend shapes, as complementary measures of trust. Results indicated that these cues can reliably assess current trust states in real-time and predict trust development in subsequent interactions, with trust-related behaviours evolving over time in repeated HRI sessions. Our comprehensive analysis demonstrated that integrating these expressive behaviours provides quantifiable measurements for capturing trust, establishing them as reliable metrics within real-time assessment frameworks. As the final component of our integrated trust framework, this thesis explored reinforcement learning (RL) for trust optimisation in simulated environments. Integrating our trust model into an RL framework, we demonstrated that dynamically calibrated trust can enhance task performance and reduce the risks of both under- and over-reliance on robotic systems. Together, these multifaceted contributions advance a holistic understanding of trust measurement and calibration in HRI, encompassing cultural insights, mathematical modelling, physiological and expressive behaviour analysis, and adaptive control. This integrated approach establishes foundational methodologies for developing trust-aware robots capable of enhancing collaborative outcomes and fostering sustained user trust in real-world applications.
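The idea of predicting a trust state from physiological signals such as HR and SKT could be prototyped, for instance, as a tiny nearest-centroid classifier. This is only an illustrative sketch, not the thesis's actual pipeline; the feature windows and labels below are invented.

```python
# Illustrative sketch: classify a trust vs. distrust state from
# physiological features, here heart rate (HR, bpm) and skin
# temperature (SKT, °C), with a minimal nearest-centroid classifier.
# All values and labels are invented for illustration.

def centroid(rows):
    """Column-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(samples):
    """samples: list of ((hr, skt), label) pairs; returns per-label centroids."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest in squared Euclidean distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Toy (HR, SKT) windows with hypothetical trust/distrust labels.
training_windows = [
    ((72.0, 33.1), "trust"), ((70.5, 33.4), "trust"),
    ((88.0, 31.2), "distrust"), ((91.0, 30.8), "distrust"),
]
model = train(training_windows)
```

A real system would use windowed, normalised signals from all six PBs and a properly validated model; the sketch only shows the shape of the feature-to-state mapping.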
The framework presented in this thesis represents a significant advancement in creating robotic systems that can dynamically adapt to human trust states across diverse contexts and interaction scenarios.

Item Restricted
Towards Robust Software Vulnerability Detection: Exploring Machine Learning Models, Datasets, and Explainability (Lancaster University, 2025-05)
AlDebeyan, Fahad Ahmad; Hall, Tracy

Software vulnerability detection and prediction aim to reduce the costs associated with identifying software security vulnerabilities. Studies have shown that many vulnerability prediction models underperform in practical applications compared to the performance results reported on vulnerability prediction datasets. This performance drop can be attributed to datasets containing synthetic or biased data. Additionally, most vulnerability prediction models operate on a binary classification basis (vulnerable or non-vulnerable), leaving developers to determine the context of vulnerabilities, such as identifying vulnerable lines of code and specific vulnerability types.

Aims: This thesis aims to enhance the robustness of software vulnerability detection. To achieve this aim, I will explore ways to improve vulnerability prediction models by generating fine-grained predictions and increasing model accuracy. Furthermore, I will investigate methods to enhance the quality of vulnerability prediction datasets by identifying biases in existing datasets. Lastly, I will examine the impact of explanations on the ability of software practitioners to validate detected software vulnerabilities (i.e., true positive vulnerabilities) and to fix them correctly.

Methods: I propose a novel approach to cluster software vulnerability types using abstract syntax tree (AST) N-grams as model features. I trained various vulnerability prediction models on training sets with different ratios of 'easy negatives' (very different from positive data) and 'hard negatives' (closely similar to positive data).
These models were then evaluated on test sets comprising entire projects. Additionally, I utilized eXplainable AI (XAI) to obtain line-level attributions from LineVul, a state-of-the-art model, and compared these attributions to actual vulnerable lines. Finally, I surveyed 99 software practitioners to assess the effect of four types of vulnerability explanations on their ability to validate and fix vulnerabilities correctly.

Results: Using a random forest model with AST N-grams as features, I successfully clustered seven types of vulnerabilities, achieving a Matthews Correlation Coefficient (MCC) of up to 81%. I discovered that the ratio of easy to hard negatives in a vulnerability prediction dataset significantly impacts model performance. When evaluating entire projects, models trained on datasets with more easy negatives performed better, reaching a performance plateau at a ratio of 15 easy negatives per vulnerable instance. Through XAI, I enhanced the MSR dataset and the LineVul model, increasing the F-measure and MCC from 92% to 97% and 96%, respectively. Additionally, vulnerability explanations were found to assist developers in validating and correctly fixing vulnerabilities, with short-form text-based explanations being more effective and preferred by software practitioners. Lastly, I observed that software practitioners are willing to accept some reduction in detection accuracy in exchange for improved explainability.

Conclusions: This thesis presents an approach to generate finer-grained software vulnerability predictions by clustering vulnerability types. I introduced the concept of easy and hard negatives in vulnerability prediction datasets, offering a deeper understanding of what constitutes a high-quality dataset. Utilizing XAI, I identified two biases in a widely used vulnerability prediction dataset and uncovered a limitation in a state-of-the-art model.
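Two ingredients mentioned above, AST N-gram features and the MCC metric, can be sketched in a few lines. This is a hedged illustration with made-up data, not the thesis's implementation: the node sequence and confusion counts are invented.

```python
# Sketch: turn a pre-flattened AST node-type sequence into N-gram
# features, and score a classifier with the Matthews Correlation
# Coefficient (MCC). Data below is invented for illustration.
import math

def ast_ngrams(nodes, n=2):
    """N-grams over a flattened sequence of AST node types."""
    return [tuple(nodes[i:i + n]) for i in range(len(nodes) - n + 1)]

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

nodes = ["FunctionDef", "If", "Call", "Assign"]
features = ast_ngrams(nodes)            # bigrams of adjacent node types
score = mcc(tp=45, tn=40, fp=5, fn=10)  # ≈ 0.70 on these invented counts
```

MCC is often preferred over accuracy here because vulnerability datasets are heavily imbalanced toward negatives.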
Finally, I provided valuable insights into the types of explanations that developers find useful, guiding future research and tool development towards more effective vulnerability explanations.

Item Restricted
GRAPH-BASED APPROACH: BRIDGING INSIGHTS FROM STRUCTURED AND UNSTRUCTURED DATA (Temple University, 2025)
Aljurbua, Rafaa; Obradovic, Zoran

Graph-based methodologies provide powerful tools for uncovering intricate relationships and patterns in complex data, enabling the integration of structured and unstructured information for insightful decision-making across diverse domains. Our research focuses on constructing graphs from structured and unstructured data, demonstrating their applications in healthcare and power systems. In healthcare, we examine how social networks influence the attitudes of hemodialysis patients toward kidney transplantation. Using a network-based approach, we investigate how social networks within hemodialysis clinics affect patients' attitudes, contributing to a growing understanding of this dynamic. Our findings emphasize that social networks improve the performance of machine learning models, highlighting the importance of social interactions in clinical settings (Aljurbua et al., 2022). We further introduce Node2VecFuseClassifier, a graph-based model that combines patient interactions with patient characteristics. By comparing problem representations that focus on sociodemographics versus social interactions, we demonstrate that incorporating patient-to-patient and patient-to-staff interactions results in more accurate predictions. This multi-modal analysis, which merges patient experiences with staff expertise, underscores the role of social networks in influencing attitudes toward transplantation (Aljurbua et al., 2024b).
In power systems, we explore the impact of severe weather events that lead to power outages, specifically focusing on predicting weather-induced outages three hours in advance at the county level in the Pacific Northwest of the United States. By utilizing a multi-model multiplex network that integrates data from multiple sources including weather, transmission lines, lightning, vegetation, and social media posts from two leading platforms (Twitter and Reddit), we show how multiplex networks offer valuable insights for predicting power outages. This integration of diverse data sources and network-based modeling emphasizes the importance of leveraging multiple perspectives to enhance the understanding and prediction of power disruptions (Aljurbua et al., 2023). We further present HMN-RTS, a hierarchical multiplex network that classifies disruption severity by temporal learning from integrated weather recordings and social media posts. The multiplex network layers of this framework gather information about power outages, weather, lightning, land cover, transmission lines, and social media comments. By incorporating multiplex network layers consisting of data collected over time and across regions, we demonstrate that HMN-RTS significantly improves the accuracy of predicting the duration of weather-related outages. This framework enables grid operators to make more reliable predictions up to 6 hours in advance, supporting early risk assessment and proactive mitigation (Aljurbua et al., 2024a, 2025a). Additionally, we introduce SMN-WVF, a spatiotemporal multiplex network designed to predict the duration of power outages in distribution grids. By integrating a network-based approach and multi-modal data across space and time, SMN-WVF offers a novel method for predicting disruption durations in distribution grids, enhancing decision-making and mitigation efforts while highlighting the critical role of network-based approaches in forecasting (Aljurbua et al., 2025b).
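The core multiplex-network idea above, one shared node set with one edge layer per data source, can be sketched minimally. The county names, layer names, and edges below are invented for illustration; the cited frameworks (HMN-RTS, SMN-WVF) are far more elaborate.

```python
# Hedged sketch of a multiplex network: the same nodes (counties)
# appear in every layer, but each layer's edges come from a different
# data source. All names and edges are invented.

counties = {"King", "Pierce", "Snohomish"}

layers = {
    "weather":      {("King", "Pierce"), ("Pierce", "Snohomish")},
    "transmission": {("King", "Snohomish")},
    "social_media": {("King", "Pierce")},
}

def layer_degrees(node):
    """Per-layer degree of a county: one simple multiplex feature vector
    that a downstream outage-prediction model could consume."""
    return {name: sum(node in edge for edge in edges)
            for name, edges in layers.items()}

features = layer_degrees("King")
```

The point of the multiplex structure is exactly this: the same county gets a different neighbourhood in each layer, so per-layer features capture complementary perspectives on outage risk.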
Overall, our research showcases the potential of graph-based models in tackling complex challenges in both power systems and healthcare. By combining the network-based approach with multi-modal data, we present innovative solutions for predicting power outages and understanding patient attitudes.

Item Restricted
Optimizing Healthcare Outcomes with a Medical Recommendation System Based on Machine Learning (Bahrain Polytechnic, 2024-08-18)
Aldalbahi, Shrouq; Fawzy, Abdelhameed

Health care relies heavily on technology in the digital age, and this plays an important part in the fight against many diseases. Despite technological advancements, misdiagnosis continues to pose a significant global health challenge. To avoid these risks, this thesis explores how machine learning algorithms can be used in disease diagnosis by building an ML-based medical recommendation system that significantly enhances the accuracy of disease diagnosis. Additionally, even in an age where technology has become far more advanced and information is readily available to many people, a lot of people still follow traditional, long-winded methods of seeking medical attention, which can be time-consuming. This thesis also proposes a method for providing complete and detailed treatment recommendations; these recommendations include descriptions of diseases, precautions, medications, exercise routines, and dietary suggestions tailored to the patient’s needs, generated using machine learning. Utilizing two datasets of patient symptoms to train and test the models, we found that for dataset 1 the MultinomialNB model performed best at 97.36%, followed by SVC at 94.44%. For dataset 2, the DNN model performed best at 84.19%. This study implies that ML and DL algorithms could decrease misdiagnosis and improve patient care.
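The MultinomialNB model mentioned above classifies a patient's symptom list by combining per-disease word counts with Laplace smoothing. Below is a minimal, self-contained sketch of that idea; the symptom lists and disease labels are invented, and the thesis used real patient-symptom datasets with library implementations.

```python
# Minimal multinomial Naive Bayes over symptom tokens, in the spirit of
# the MultinomialNB model mentioned above. All data is invented.
import math
from collections import Counter

def train_nb(examples):
    """examples: list of (symptom_list, disease). Returns counts, priors,
    vocabulary, and the number of training examples."""
    counts, priors, vocab = {}, Counter(), set()
    for symptoms, disease in examples:
        priors[disease] += 1
        counts.setdefault(disease, Counter()).update(symptoms)
        vocab.update(symptoms)
    return counts, priors, vocab, len(examples)

def predict_nb(model, symptoms):
    """Pick the disease with the highest (log) posterior probability."""
    counts, priors, vocab, n = model
    def log_prob(disease):
        c = counts[disease]
        total = sum(c.values())
        lp = math.log(priors[disease] / n)
        for s in symptoms:
            lp += math.log((c[s] + 1) / (total + len(vocab)))  # Laplace smoothing
        return lp
    return max(priors, key=log_prob)

data = [
    (["fever", "cough", "fatigue"], "flu"),
    (["fever", "cough"], "flu"),
    (["sneezing", "runny_nose"], "cold"),
    (["sneezing", "runny_nose", "cough"], "cold"),
]
model = train_nb(data)
```

A full recommendation system would then map the predicted disease to its description, precautions, medications, exercise, and diet entries.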
This thesis illustrates a strong framework for the application of advanced technologies in healthcare, highlighting their transformative impact and substantial benefits, ultimately optimizing resource utilization for doctors and enhancing care and information for patients.

Item Restricted
Cloud computing efficiency: optimizing resource utilization, energy consumption, latency, availability, and reliability using intelligent algorithms (The University of Western Australia, 2024)
Alelyani, Abdullah Hamed A; Datta, Amitava; Ghulam, Mubasher Hassan

Cloud computing offers significant potential for transforming service delivery with a cost-efficient, pay-as-you-go model, which has led to a dramatic increase in demand. The advantages of virtual machine (VM) and container technologies further optimize resource utilization in cloud environments. Containers and VMs improve application reliability by distributing replicated tasks across different physical machines (PMs). However, several persistent issues in cloud computing remain, including energy consumption, resource management, network traffic costs, availability, latency, service level agreement (SLA) violations, and reliability. Addressing these issues is critical for ensuring quality of service (QoS). This thesis proposes approaches to address these issues and improve cloud performance.

Item Restricted
Agent-Based Simulation, Machine Learning, and Gamification: An Integrated Framework for Addressing Disruptive Behaviour and Enhancing Student and Teacher Performance in Educational Settings (Durham University, 2025)
Alharbi, Khulood Obaid; Cristea, Alexandra I

The classroom environment is a major contributor to the learning process in schools. Young students are affected by different factors in their academic progress, be it their own characteristics, their teacher’s, or their peers’.
Disruptive behaviour, in particular, is one of the main factors that create challenges in the classroom environment, by hindering learning and effective classroom management. To overcome these challenges, it is important to understand what causes disruptive behaviour, and how to predict and prevent it. While Machine Learning (ML) is already used in education to predict disruption-related outcomes, there is less focus on understanding the processes leading to the effect of disruptive behaviour on learning. Thus, in this thesis, I propose using Agent-Based Modelling (ABM) for the simulation of disruptive behaviour in the classroom, to provide teachers with a tool that helps them not only predict, but also understand how classroom interactions lead to disruptions. Reducing negative factors in the learning environment, like disruptive behaviour, is further supported by increasing positive factors, such as motivation and engagement. Therefore, the use of gamification is then introduced as a strategy to promote motivation and improve engagement by making not only the learning environment more rewarding, but also the ABM teacher simulation more appealing. This thesis focuses on these issues by designing and implementing for the first time an integrated approach that combines ABM and ML with gamification to simulate classroom interactions and predict disruptive behaviour. The ABM models the complex interactions between students, teachers, and peers, providing a means to study the processes leading to behavioural issues. Meanwhile, ML algorithms help predict learning outcomes from behaviours such as inattentiveness, hyperactivity, and impulsiveness. The simulation has revealed insights such as the impact of peer influence on student behaviour and the varying effects of these different types of disruptive behaviour on academic performance.
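The peer-influence mechanism described above can be illustrated with a toy agent-based step: each student gains some learning per tick, reduced when seated next to a disruptive peer. The parameters (gain, penalty, seating) are invented; the thesis's ABM models much richer student, teacher, and peer interactions.

```python
# Toy agent-based sketch of peer influence on learning. Each tick, a
# student's learning gain is reduced by 0.5 per disruptive neighbour
# (adjacent seat). All parameters are invented for illustration.

def simulate(students, ticks=10, gain=1.0, penalty=0.5):
    """students: list of dicts with a 'disruptive' flag, seated in a row."""
    scores = [0.0] * len(students)
    for _ in range(ticks):
        for i, _student in enumerate(students):
            neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(students)]
            disruption = sum(students[j]["disruptive"] for j in neighbours)
            scores[i] += max(0.0, gain - penalty * disruption)
    return scores

row = [{"disruptive": False}, {"disruptive": True}, {"disruptive": False}]
scores = simulate(row)   # students beside the disruptive peer learn less
```

Even this minimal model reproduces the qualitative pattern the thesis studies: the disruptive student's own score is unaffected while both neighbours fall behind.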
The improved performance of the hybrid ML-ABM is shown by evaluating the ML-integrated simulation with metrics such as MAE, RMSE, and Pearson correlation. Moreover, the inclusion of gamification elements was shown to improve engagement, through increased login frequency and course completion rates in a MOOC setting, as well as to be effective and appealing for teachers using the ML-ABM. In conclusion, this thesis presents the first comprehensive model that integrates ABM, ML, and gamification elements to explore educational outcomes in a disruptive classroom; it develops the first hybrid ML-ABM approach for predicting and managing classroom disruptive behaviour; it provides empirical evidence of the effectiveness of gamification in boosting student and teacher engagement; and it offers practical insights for educators and policymakers seeking to adopt innovative, technology-driven strategies for improving teaching and learning. The research lays a foundation for future studies, aiming to further explore and expand the capabilities of these technologies in an educational context.

Item Restricted
Rasm: Arabic Handwritten Character Recognition: A Data Quality Approach (University of Essex, 2024)
Alghamdi, Tawfeeq; Doctor, Faiyaz

The problem of Arabic handwritten character recognition (AHCR) is a challenging one due to the complexities of the Arabic script and the variability in handwriting (especially for children). In this context, we present ‘Rasm’, a data quality approach that can significantly improve results on the AHCR problem through a combination of preprocessing, augmentation, and filtering techniques. We use the Hijja dataset, which consists of samples from children aged 7 to 12, and by applying advanced preprocessing steps and label-specific targeted augmentation, we achieve a significant improvement in CNN performance, from 85% to 96%. The key contribution of this work is to shed light on the importance of data quality for handwriting recognition.
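One of the simplest augmentations of the kind used to enrich handwriting datasets is a small spatial shift of the glyph. The sketch below illustrates the idea on an invented 3x3 binary glyph; Rasm's actual pipeline applies label-specific, targeted augmentation to real character images.

```python
# Sketch of a basic data augmentation for handwritten glyphs: shift a
# binary glyph one column to the right, padding the left edge with 0s.
# The tiny glyph is invented; real inputs would be full-size images.

def shift_right(glyph):
    """Shift a 2-D binary glyph one column right (left edge padded with 0)."""
    return [[0] + row[:-1] for row in glyph]

glyph = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 1],
]
augmented = shift_right(glyph)   # the stroke moves one column right
```

Applying several such transforms (shifts, small rotations, thickness changes) to under-represented labels is one way to rebalance a dataset without collecting new samples.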
Despite the recent advances in deep learning, our results reveal the critical role of data quality in this task. The data-centric approach proposed in this work can be useful for other recognition tasks and other languages in the future. We believe that this work has important implications for improving AHCR systems in an educational context, where the variability in handwriting is high. Future work can extend the proposed techniques to other scripts and recognition tasks, to further improve the optical character recognition field.

Item Restricted
Explainable AI Approach for detecting Generative AI Imagery (Aston University, 2024-09-29)
Alghamdi, Sara; Barns, Chloe

The rapid advancement of Artificial Intelligence (AI) and machine learning, particularly deep learning models such as Convolutional Neural Networks (CNNs), has revolutionized image classification across diverse fields, including healthcare, autonomous vehicles, and digital forensics. However, the proliferation of AI-generated images, commonly referred to as deepfakes, has introduced significant ethical, societal, and security challenges. Deepfakes leverage AI to create highly realistic yet synthetic media, complicating the ability to differentiate between authentic and manipulated content. This has heightened the need for robust tools capable of accurately detecting and classifying such media to combat the risks of misinformation, fraud, and erosion of public trust. Traditional models, while effective in classification, often lack transparency in their decision-making processes, limiting stakeholder trust. To address this limitation, this study explores the integration of Explainable AI (XAI) techniques, such as SHAP (SHapley Additive exPlanations), with CNNs to enhance interpretability and trust in model predictions.
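The feature-attribution idea behind XAI methods such as SHAP can be illustrated, in a much-simplified form, by occlusion: zero out one input feature at a time and record how much the model's score drops. This sketch is not SHAP itself, and the "model" is a stand-in weighted sum rather than a CNN; everything here is invented for illustration.

```python
# Hedged illustration of feature-level attribution by occlusion: the
# importance of a feature is the score drop when it is zeroed out.
# The stand-in "model" is a simple weighted sum, not a real classifier.

def occlusion_attributions(model, features):
    """Importance of each feature = base score minus score with it zeroed."""
    base = model(features)
    attributions = []
    for i in range(len(features)):
        occluded = features[:i] + [0.0] + features[i + 1:]
        attributions.append(base - model(occluded))
    return attributions

weights = [0.7, 0.1, 0.2]                 # invented feature weights
model = lambda x: sum(w * v for w, v in zip(weights, x))

attr = occlusion_attributions(model, [1.0, 1.0, 1.0])
```

For a linear stand-in model the attributions recover the weights exactly; SHAP generalises this intuition to non-linear models, such as CNNs over image regions, with game-theoretic guarantees.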
By employing CNNs for high-accuracy classification and XAI methods for feature-level explanations, the research aims to contribute to digital forensics and content moderation, offering both technical reliability and transparency. This study highlights the critical need for trustworthy AI systems in the fight against manipulated media, providing a framework that balances efficacy, transparency, and ethical considerations.

Item Restricted
Enhancing Breast Cancer Diagnosis with ResNet50 Models: A Comparative Study of Dropout Regularization and Early Stopping Techniques (University of Exeter, 2024-09-20)
Basager, Raghed Tariq Ahmed; Kelson, Mark; Rowland, Sareh

Early detection and treatment of breast cancer depend on accurate image analysis. Deep learning models, particularly Convolutional Neural Networks (CNNs), have proven highly effective in automating this critical diagnostic process. While prior studies have explored CNN architectures [1, 2], there is a growing need to understand the role of dropout regularization and fine-tuning strategies in optimizing these models. This research seeks to improve breast cancer diagnosis by evaluating ResNet50 models trained from scratch and fine-tuned, with and without dropout regularization, using both original and augmented datasets.

Assumptions and Limitations: This research assumes that the Kaggle Histopathologic Cancer Detection dataset is representative of real-world clinical images. Limitations include dataset diversity and computational resources, which may affect generalization to broader clinical applications. ResNet50 models were trained on the Kaggle Histopathologic Cancer Detection dataset with various configurations of dropout, early stopping, and data augmentation [3–6]. Performance was assessed using accuracy, precision, recall, F1-score, and AUC-ROC metrics [7, 8].
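The early-stopping technique named above has a simple core: stop training once validation loss has not improved on its best value for a set number of consecutive epochs (the patience). The sketch below shows that logic on an invented loss curve, independent of any deep-learning framework.

```python
# Minimal sketch of early-stopping logic: track the best validation loss
# seen so far and stop after `patience` epochs without improvement.
# The loss curve is invented for illustration.

def early_stop_epoch(val_losses, patience=2):
    """Return the 1-based epoch at which training stops (or the last epoch)."""
    best, since_best = float("inf"), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                return epoch
    return len(val_losses)

curve = [0.9, 0.7, 0.6, 0.65, 0.64, 0.63]  # improvement stalls after epoch 3
stopped_at = early_stop_epoch(curve)
```

Note the comparison is against the best loss so far, not the previous epoch: epoch 5's 0.64 beats epoch 4's 0.65 but still counts as no improvement over 0.6, so training stops there with patience 2.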
The best-performing model was a ResNet50 trained from scratch without dropout regularization, achieving a validation accuracy of 97.19%, precision of 96.20%, recall of 96.90%, F1-score of 96.55%, and an AUC-ROC of 0.97. Grad-CAM visualizations offered insights into the model’s decision-making process, enhancing interpretability crucial for clinical use [9, 10]. Misclassification analysis showed that data augmentation notably improved classification accuracy, particularly by correcting previously misclassified images [11]. These findings highlight that training ResNet50 without dropout, combined with data augmentation, significantly enhances diagnostic accuracy from histopathological images.

Original Contributions: This research offers novel insights by demonstrating that a ResNet50 model without dropout regularization, trained from scratch and with advanced data augmentation techniques, can achieve high diagnostic accuracy and interpretability, paving the way for more reliable AI-powered diagnostics.

Item Restricted
Quantifying and Profiling Echo Chambers on Social Media (Arizona State University, 2024)
Alatawi, Faisal; Liu, Huan; Sen, Arunabha; Davulcu, Hasan; Shu, Kai

Echo chambers on social media have become a critical focus in the study of online behavior and public discourse. These environments, characterized by the ideological homogeneity of users and limited exposure to opposing viewpoints, contribute to polarization, the spread of misinformation, and the entrenchment of biases. While significant research has been devoted to proving the existence of echo chambers, less attention has been given to understanding their internal dynamics. This dissertation addresses this gap by developing novel methodologies for quantifying and profiling echo chambers, with the goal of providing deeper insights into how these communities function and how they can be measured.
The first core contribution of this work is the introduction of the Echo Chamber Score (ECS), a new metric for measuring the degree of ideological segregation in social media interaction networks. The ECS captures both the cohesion within communities and the separation between them, offering a more nuanced approach to assessing polarization. By using a self-supervised Graph Auto-Encoder (EchoGAE), the ECS bypasses the need for explicit ideological labeling, instead embedding users based on their interactions and linguistic patterns. The second contribution is a Heterogeneous Information Network (HIN)-based framework for profiling echo chambers. This framework integrates social and linguistic features, allowing for a comprehensive analysis of the relationships between users, topics, and language within echo chambers. By combining community detection, topic modeling, and language analysis, the profiling method reveals how discourse and group behavior reinforce ideological boundaries. Through the application of these methods to real-world social media datasets, this dissertation demonstrates their effectiveness in identifying polarized communities and profiling their internal discourse. The findings highlight how linguistic homophily and social identity theory shape echo chambers and contribute to polarization. Overall, this research advances the understanding of echo chambers by moving beyond detection to explore their structural and linguistic complexities, offering new tools for measuring and addressing polarization on social media platforms.
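The intuition behind a segregation metric like the ECS, cohesion within communities versus separation between them, can be sketched from user embeddings alone. This is a hedged illustration, not the ECS formula: the real metric works on EchoGAE embeddings of interaction graphs, while the 2-D embeddings and community labels below are invented toy data.

```python
# Hedged sketch of an ECS-like segregation measure: mean cosine
# similarity of user embeddings within communities minus the mean
# similarity across communities. All data is invented toy data.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def segregation_score(embeddings, communities):
    """Mean intra-community similarity minus mean inter-community similarity."""
    intra, inter = [], []
    users = list(embeddings)
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            sim = cosine(embeddings[u], embeddings[v])
            (intra if communities[u] == communities[v] else inter).append(sim)
    return sum(intra) / len(intra) - sum(inter) / len(inter)

# Two tight, opposed groups in embedding space: strong segregation.
embeddings = {"a": (1.0, 0.1), "b": (0.9, 0.2), "c": (-1.0, 0.1), "d": (-0.9, 0.2)}
communities = {"a": 0, "b": 0, "c": 1, "d": 1}
score = segregation_score(embeddings, communities)
```

With cosine similarity in [-1, 1], the score approaches 2 for perfectly segregated groups and 0 when community labels carry no embedding structure, which mirrors the cohesion-versus-separation framing of the ECS.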