Saudi Cultural Missions Theses & Dissertations
Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10
7 results
Search Results
Item Restricted Measuring Human’s Trust in Robots in Real-time During Human-Robot Interaction (Swansea University, 2025) Alzahrani, Abdullah Saad; Muneeb, Imtiaz Ahmad

This thesis presents a novel, holistic framework for understanding, measuring, and optimising human trust in robots, integrating cultural factors, mathematical modelling, physiological indicators, and behavioural analysis to establish foundational methodologies for trust-aware robotic systems. Through this comprehensive approach, we address the critical challenge of trust calibration in human-robot interaction (HRI) across diverse contexts. Trust is essential for effective HRI, impacting user acceptance, safety, and overall task performance in both collaborative and competitive settings. This thesis investigates a multi-faceted approach to understanding, modelling, and optimising human trust in robots across various HRI contexts.

First, we explored cultural and contextual differences in trust, conducting cross-cultural studies in Saudi Arabia and the United Kingdom. Findings showed that trust factors such as controllability, usability, and risk perception vary significantly across cultures and HRI scenarios, highlighting the need for flexible, adaptive trust models that can accommodate these dynamics. Building on these cultural insights as a critical dimension of our holistic trust framework, we developed a mathematical model that emulates the layered framework of trust (initial, situational, and learned) to estimate trust in real-time. Experimental validation through repeated interactions demonstrated the model's ability to dynamically calibrate trust, with both trust perception scores (TPS) and interaction sessions serving as significant predictors. This model showed promise for adaptive HRI systems capable of responding to evolving trust states.

To further enhance our comprehensive trust measurement approach, this thesis explored physiological behaviours (PBs) as objective indicators. Using electrodermal activity (EDA), blood volume pulse (BVP), heart rate (HR), skin temperature (SKT), eye blinking rate (BR), and blinking duration (BD), we showed that specific PBs (HR and SKT) vary between trust and distrust states and can effectively predict trust levels in real-time. Extending this approach, we compared PB data across competitive and collaborative contexts and employed incremental transfer learning to improve predictive accuracy across different interaction settings. Recognising the potential of less intrusive trust indicators, we also examined vocal and non-vocal cues, such as pitch, speech rate, facial expressions, and blend shapes, as complementary measures of trust. Results indicated that these cues can reliably assess current trust states in real-time and predict trust development in subsequent interactions, with trust-related behaviours evolving over time in repeated HRI sessions. Our comprehensive analysis demonstrated that integrating these expressive behaviours provides quantifiable measurements for capturing trust, establishing them as reliable metrics within real-time assessment frameworks.

As the final component of our integrated trust framework, this thesis explored reinforcement learning (RL) for trust optimisation in simulated environments. Integrating our trust model into an RL framework, we demonstrated that dynamically calibrated trust can enhance task performance and reduce the risks of both under- and over-reliance on robotic systems.
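The abstract does not reproduce the model's equations, so the following is only a minimal Python sketch of how a layered (initial, situational, learned) trust estimate driven by TPS readings across repeated sessions might be structured; the class name, weights and update rule are illustrative assumptions, not the thesis's actual formulation.

```python
# Hypothetical sketch of a layered trust estimator (not the thesis's actual model).
# Trust is decomposed into an initial prior, a situational term driven by the
# latest trust perception score (TPS), and a learned term accumulated over sessions.

class LayeredTrustEstimator:
    def __init__(self, initial_trust=0.5, situational_gain=0.6, learning_rate=0.2):
        self.initial_trust = initial_trust      # dispositional/cultural prior in [0, 1]
        self.situational_gain = situational_gain
        self.learning_rate = learning_rate
        self.learned_trust = initial_trust      # slowly adapted across sessions
        self.session = 0

    def update(self, tps):
        """Update the estimate from a normalised TPS in [0, 1] for one session."""
        self.session += 1
        # Learned trust drifts towards the observed TPS across repeated sessions.
        self.learned_trust += self.learning_rate * (tps - self.learned_trust)
        # Situational trust reflects the most recent interaction outcome.
        situational = (self.situational_gain * tps
                       + (1 - self.situational_gain) * self.learned_trust)
        # Blend: the initial prior matters less as more sessions are observed.
        prior_weight = 1.0 / (1.0 + self.session)
        return prior_weight * self.initial_trust + (1 - prior_weight) * situational


if __name__ == "__main__":
    estimator = LayeredTrustEstimator()
    for session_tps in [0.4, 0.55, 0.7, 0.65, 0.8]:   # made-up TPS readings
        print(round(estimator.update(session_tps), 3))
```

In a trust-aware controller, an estimate of this kind could then feed the RL reward or the robot's behaviour selection, but how the thesis does this is not specified in the abstract.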
Together, these multifaceted contributions advance a holistic understanding of trust measurement and calibration in HRI, encompassing cultural insights, mathematical modelling, physiological and expressive behaviour analysis, and adaptive control. This integrated approach establishes foundational methodologies for developing trust-aware robots capable of enhancing collaborative outcomes and fostering sustained user trust in real-world applications. The framework presented in this thesis represents a significant advancement in creating robotic systems that can dynamically adapt to human trust states across diverse contexts and interaction scenarios.

Item Restricted Computational Intelligence Approaches for Energy-Aware Microservice Based SaaS Deployment in a Data Centre (Queensland University of Technology, 2024) Alzahrani, Amal Saleh; Tang, Maolin

Microservice-based Software as a Service (SaaS) entails a software delivery approach in which software is constructed as a set of loosely coupled, independently deployable services known as microservices. Microservices are autonomous, small-scale services that collaborate to create a more extensive application. The microservice-based SaaS deployment problem refers to the challenge of efficiently deploying microservices within a SaaS to compute servers in a cloud data centre. This deployment leads to a significant increase in overall energy consumption, primarily due to the increased energy usage of the compute servers hosting the microservices. Moreover, the energy consumption of network devices that facilitate connections between interconnected microservices also contributes to the overall energy usage. This increase in energy consumption raises concerns regarding sustainability, environmental impact and operational costs. In contrast to traditional SaaS deployment approaches, where energy considerations are frequently disregarded, this thesis addresses the energy increase associated with the deployment of microservice-based SaaS, focusing specifically on reducing the increase in energy consumption in compute servers and network devices.

To address this new microservice-based SaaS deployment problem, three Computational Intelligence (CI) approaches are designed and developed. First, an Adaptive Hybrid Genetic Algorithm (GA) is developed to tackle the problem. It achieves this by dynamically balancing the exploration-exploitation trade-off through an adaptive crossover rate. Furthermore, the adaptive hybrid GA incorporates a local optimiser, which refines the best solutions by improving the exploitation capacity of the adaptive hybrid GA. Second, a Hybrid Particle Swarm Optimisation (HPSO) approach is developed to address the new SaaS deployment problem. HPSO also integrates a local optimiser to enhance its exploitation capacity, thus further refining the best solutions. Additionally, HPSO has the capability to dynamically adjust the inertia weight and its cognitive and social parameters throughout the optimisation process. Third, an Ant Colony Optimisation (ACO) approach, equipped with new heuristic information, is developed to solve the new SaaS deployment problem. During the search for a compute server to host a microservice, this heuristic information aids the ants in selecting a compute server that will result in a lower increase in the energy consumption of both compute servers and network devices.
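As a rough illustration of the adaptive-crossover idea described above, the sketch below applies a plain GA with a generation-dependent crossover rate to a toy microservice-to-server assignment under a made-up energy proxy; the encoding, power model and adaptation schedule are assumptions, and the thesis's local optimiser (the "hybrid" part) is omitted.

```python
# Illustrative GA for assigning microservices to servers (toy energy model, not the thesis's).
import random

N_SERVICES, N_SERVERS = 12, 4
CPU_DEMAND = [random.uniform(0.5, 2.0) for _ in range(N_SERVICES)]   # hypothetical loads
IDLE_POWER, POWER_PER_CPU = 50.0, 30.0                               # hypothetical server model

def energy(assignment):
    """Energy proxy: idle power for every switched-on server plus load-dependent power."""
    load = [0.0] * N_SERVERS
    for service, server in enumerate(assignment):
        load[server] += CPU_DEMAND[service]
    return sum(IDLE_POWER + POWER_PER_CPU * l for l in load if l > 0)

def crossover(a, b):
    point = random.randrange(1, N_SERVICES)
    return a[:point] + b[point:]

def mutate(ind, rate=0.1):
    return [random.randrange(N_SERVERS) if random.random() < rate else g for g in ind]

def adaptive_ga(pop_size=40, generations=100):
    pop = [[random.randrange(N_SERVERS) for _ in range(N_SERVICES)] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=energy)                       # lower energy is fitter
        # Adaptive crossover rate: favour exploration early, exploitation late.
        crossover_rate = 0.9 - 0.5 * gen / generations
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = crossover(a, b) if random.random() < crossover_rate else a[:]
            children.append(mutate(child))
        pop = parents + children
    return min(pop, key=energy)

best = adaptive_ga()
print("best assignment:", best, "energy proxy:", round(energy(best), 1))
```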
In this thesis, a comparative study is conducted to evaluate the effectiveness, efficiency and scalability of the adaptive hybrid GA, HPSO and ACO approaches in solving the new SaaS deployment problem. The findings reveal that the adaptive hybrid GA is the most effective approach for minimising both total energy increase and the energy increase specifically in the compute servers. Its ability to provide energy-efficient solutions while maintaining good scalability and fast execution times makes it the optimal choice for addressing the new microservice-based SaaS deployment problem. The HPSO is identified as the second most effective and efficient approach, after the adaptive hybrid GA. It also demonstrates good scalability and faster execution times compared to ACO, efficiently generating optimal or near-optimal solutions as the problem size grows despite increasing complexity. Although the ACO is effective at minimising the increase in the energy consumption of network devices, it is the least effective approach for reducing the overall increase in total energy consumption. The cubic or quadratic increases observed in ACO's execution times highlight its poor performance in scaling effectively with larger problem instances. The ACO's poor scalability renders it impractical for handling larger problem sizes.

Item Restricted Reducing Type 1 Childhood Diabetes in Saudi Arabia by Identifying and Modelling Its Key Performance Indicators (Royal Melbourne Institute of Technology, 2024-06) Alazwari, Ahood; Johnstone, Alice; Abdollahain, Mali; Tafakori, Laleh

The increasing incidence of type 1 diabetes (T1D) in children is a growing global health concern. Reducing the incidence of diabetes generally is one of the goals in the World Health Organisation's (WHO) 2030 Agenda for Sustainable Development Goals. With an incidence rate of 31.4 cases per 100,000 children and an estimated 3,800 new cases per year, Saudi Arabia is ranked 8th in the world for number of T1D cases and 5th for incidence rate. Despite the remarkable increase in the incidence of childhood T1D in Saudi Arabia, there is a lack of meticulously carried out research on T1D in children when compared with developed countries. In addition, it is crucial to recognise the critical gaps in the current understanding of diabetes in children, adolescents, and young adults, with recent research indicating significant global and sub-national variations in disease incidence. Better knowledge of the development of T1D in children and its associated factors would aid medical practitioners in developing intervention plans to prevent complications and address the incidence of T1D. This study employed statistical, machine learning and classification approaches to analyse and model different aspects of childhood T1D using local case and control data. In this study, secondary data from 1,142 individual medical records (359-377 cases and 765 controls) collected from three cities located in different regions of Saudi Arabia have been used in the analysis to represent the country's diverse population. Case and control data matched by birth year, gender and location were used to control confounders and create a more robust and clinically relevant model. It is well documented that genetic and environmental factors contribute to childhood T1D, so a wide range of potential key performance indicators (KPIs) from the literature were included in this study.
The collected data included information on socioeconomic status, potential genetic and environmental factors, and demographic data such as city of residence, gender and birth year. Several techniques, such as cross-validation, hyperparameter tuning and bootstrapping, were used in this study to develop the models. Common statistical metrics (the coefficient of determination (R-squared), root mean squared error and mean absolute error) were used to evaluate performance for the regression models, while accuracy, sensitivity, precision, F-score and area under the curve were utilised as performance measures for the classification models.

Multiple linear regression (MLR), artificial neural network (ANN) and random forest (RF) models were developed to predict the age at onset of T1D for all children 0-14 years old, as well as for the most common age group for onset, the 5-9 year olds. To improve the performance of the MLR models, interactions between variables were considered. Additionally, risk factors associated with the age at onset of T1D were identified. The results showed that MLR and RF outperformed ANN. The logarithm of age at onset was the most suitable dependent variable, and RF outperformed the others for the 5-9 years age group. Birth weight, current weight and current height influenced the age at onset in both age groups. However, preterm birth was significant only in the 0-14 years cohort, while consanguineous parents and gender were significant in the 5-9 age group.

Logistic regression (LR), random forest (RF), support vector machine (SVM), Naive Bayes (NB) and artificial neural network (ANN) models were utilised with case and control data to model the development of childhood T1D and to identify its key performance indicators. Full and reduced models were developed to determine the best model; the reduced models were built using the significant factors identified by the individual full models. The study found that the full LR model had the highest accuracy, and full RF and SVM with a linear kernel also performed well. Significant risk factors identified as being associated with developing childhood T1D include early exposure to cow's milk, high birth weight, positive family history of T1D and maternal age over 25 years.

Poisson regression (PR), RF, SVM and K-nearest neighbour (KNN) models were then used to model the incidence of childhood T1D, incorporating the identified significant risk factors. The interactions between variables were also considered to enhance the performance of the models. Both full and reduced models were created and compared to find the best models with the minimum number of variables. The full Poisson regression and machine learning models outperformed all other models, but reduced models with a combination of only two out of three independent variables (early exposure to cow's milk, high birth weight and maternal age over 25 years) also performed relatively well. This study also deployed optimisation procedures with the reduced incidence models to develop upper and lower yearly profile limits for childhood T1D incidence to achieve the United Nations (UN) and Saudi recommended levels of 264 and 339 cases by 2030. The profile limits for childhood T1D then allowed us to model optimal yearly values for the number of children weighing more than 3.5 kg at birth, the number of deliveries by older mothers and the number of children introduced early to cow's milk. The results presented in this thesis will guide healthcare providers to collect data to monitor the most influential KPIs.
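Because the clinical dataset is not public, the sketch below only illustrates the general shape of the case-control modelling pipeline described above, using synthetic data in place of the study's records; the features, sample generation and scores are placeholders with no relation to the reported results.

```python
# Hypothetical case-control classification sketch (synthetic data, illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for matched case-control records; real KPIs would include, for example,
# birth weight, early exposure to cow's milk, family history of T1D and maternal age.
X, y = make_classification(n_samples=1142, n_features=10, n_informative=4, random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```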
This would enable the initiation of suitable intervention strategies to reduce the disease burden and potentially slow the incidence rate of childhood T1D in Saudi Arabia. The research outcomes lead to recommendations to establish early intervention strategies, such as educational campaigns and healthy lifestyle programs for mothers, along with child health mentoring during and after pregnancy, to reduce the incidence of childhood T1D. This thesis has contributed new knowledge on childhood T1D in Saudi Arabia by:
* developing a predictive model for age at onset of childhood T1D using statistical and machine learning models.
* predicting the development of T1D in children using matched case-control data and identifying its KPIs using statistical and machine learning approaches.
* modelling the incidence of childhood T1D using its associated significant KPIs.
* developing three optimal profile limits for monitoring the yearly incidence of childhood T1D and its associated significant KPIs.
* providing a list of recommendations to establish early intervention strategies to reduce the incidence of childhood T1D.

Item Restricted Optimising Computational Offloading and Resource Management in Online and Stochastic Fog Computing Systems (Saudi Digital Library, 2023-12-14) Alenizi, Faten; Omer, Rana

Fog computing is a potential solution to overcome the shortcomings of cloud-based processing of IoT tasks. These drawbacks include high latency, poor location awareness and security issues attributed to the distance between IoT devices and cloud-hosted servers. Although fog computing has evolved as a solution to address these challenges, it has limited resources that need to be utilised effectively; otherwise, its advantages could be lost. Moreover, the increasing number of IoT devices and the amount of data they generate make optimising Quality of Service (QoS) in IoT applications, computational offloading, and fog resource management more challenging. In this context, the problem of computational offloading and resource management is investigated in online and stochastic fog systems.

To deal with dynamic online fog systems, we propose a combination of two algorithms: dynamic task scheduling (DTS) and dynamic energy control (DEC). These methods were applied with a fixed offloading threshold (i.e., the criterion by which a fog node decides whether a task should be offloaded to a neighbour, and to which neighbour, rather than executed locally), with the aim of minimising overall delay, improving the throughput of user tasks, and minimising energy consumption at the fog layer while maximising the use of resource-constrained fog nodes. The approach is further enhanced by applying a dynamic offloading threshold. Compared to other benchmarks, our approach could reduce latency by up to 95.4%, improve throughput by 41%, and reduce energy consumption by up to 55.7% in fog nodes.

For stochastic fog systems, we address the computational offloading and resource management problem with the aim of minimising the average energy consumption of fog nodes while meeting the QoS requirements of tasks. We formulated the problem as a stochastic problem and decomposed it into two subproblems. To solve this problem, we proposed a scheme called Joint Q-learning and Lyapunov Optimization (JQLLO). Using simulation results, we demonstrate that JQLLO outperforms a set of baselines.
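JQLLO itself couples Q-learning with Lyapunov optimisation, which is not reproduced here; as a loose illustration of the offload-versus-local decision such a scheme learns, the toy tabular Q-learning loop below uses a made-up single-node queue with arbitrary delay and energy constants.

```python
# Toy tabular Q-learning for a fog node's local-vs-offload decision (illustrative only).
import random

QUEUE_LEVELS = 10                 # discretised local queue length (the state)
ACTIONS = ["local", "offload"]
LOCAL_DELAY, OFFLOAD_DELAY = 1.0, 2.5          # hypothetical per-task delays
OFFLOAD_ENERGY, LOCAL_ENERGY_PER_Q = 0.5, 0.3  # hypothetical energy weights

Q = [[0.0, 0.0] for _ in range(QUEUE_LEVELS)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(queue, action):
    """Return (cost, next_queue) for one arriving task under a crude queue model."""
    if action == 0:    # execute locally: backlog grows, delay scales with backlog
        cost = LOCAL_DELAY * (1 + queue) + LOCAL_ENERGY_PER_Q * queue
        next_queue = min(queue + 1, QUEUE_LEVELS - 1)
    else:              # offload to a neighbour: fixed network delay plus energy
        cost = OFFLOAD_DELAY + OFFLOAD_ENERGY
        next_queue = max(queue - 1, 0)
    return cost, next_queue

queue = 0
for _ in range(20000):
    # Epsilon-greedy action selection (greedy = minimum expected cost).
    action = random.randrange(2) if random.random() < epsilon else Q[queue].index(min(Q[queue]))
    cost, next_queue = step(queue, action)
    # Q-learning update on costs: bootstrap with the minimum over next actions.
    Q[queue][action] += alpha * (cost + gamma * min(Q[next_queue]) - Q[queue][action])
    queue = next_queue

policy = [ACTIONS[q.index(min(q))] for q in Q]
print("learned policy by queue length:", policy)
```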
Item Restricted Optimising the Sustainability of Blockchain-based Systems: Balancing Environmental Sustainability, Decentralisation and Trustworthiness (Saudi Digital Library, 2023) Alofi, Akram Mohammed Ali; Bahsoon, Rami; Hendley, Robert

Blockchain is an emerging technology that is revolutionising information technology and changing how information is shared. It has captured the interest of several disciplines because it promises to provide security, anonymity and data integrity without any third-party control. Although blockchain technology has great potential to shape the future of the digital world, it faces a number of technical challenges. The most critical concern relates to its environmental sustainability: it is widely acknowledged that the energy consumption and carbon emissions of blockchain-based systems are massive and can affect their sustainability. Therefore, optimising the environmental sustainability of these systems is necessary. Several approaches have been proposed to mitigate this issue; however, the literature lacks models for optimising the environmental sustainability of blockchain-based systems without compromising the fundamental properties inherent in blockchain technology. In this context, this thesis aims to optimise the environmental sustainability of blockchain-based systems by balancing different conflicting objectives without compromising the decentralisation and trustworthiness of the systems.

First, we reformulate the problem of the environmental sustainability of these systems as a search-based software engineering problem. We represent it as a subset selection problem that selects an optimal set of miners for mining blocks in terms of four conflicting objectives: energy consumption, carbon emissions, decentralisation and trustworthiness. Second, we propose a reputation model to determine reputable miners based on their behaviour in a blockchain-based system. The reputation model can support the enhancement of the environmental sustainability of the system. Moreover, it can improve the system's trustworthiness when the number of miners is reduced to minimise energy consumption and carbon emissions. Third, we propose a self-adaptive model that optimises the environmental sustainability of blockchain-based systems, taking into account environmental changes and decision-makers' requirements. We have conducted a series of experiments to evaluate the applicability and effectiveness of the proposed models. The results demonstrate that our models can enhance the environmental sustainability of blockchain-based systems without compromising the core properties of blockchain technology.
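To make the four-objective subset-selection formulation above concrete, the sketch below scores small, randomly generated miner subsets with an ad hoc weighted sum over energy, carbon, decentralisation and reputation; the weights, miner attributes and exhaustive search are purely illustrative and are not the thesis's search-based optimiser or reputation model.

```python
# Illustrative scoring of miner subsets over four objectives (not the thesis's optimiser).
import itertools
import random

random.seed(1)
MINERS = [
    # energy (kWh), carbon (kg) and reputation (0..1) are all hypothetical values
    {"id": i, "energy": random.uniform(5, 20), "carbon": random.uniform(1, 10),
     "reputation": random.uniform(0.3, 1.0)}
    for i in range(8)
]
WEIGHTS = {"energy": 0.3, "carbon": 0.3, "decentralisation": 0.2, "trust": 0.2}

def score(subset):
    """Lower is better: penalise energy/carbon, reward subset size and reputation."""
    energy = sum(m["energy"] for m in subset)
    carbon = sum(m["carbon"] for m in subset)
    decentralisation = len(subset) / len(MINERS)   # crude proxy: more miners, less centralised
    trust = sum(m["reputation"] for m in subset) / len(subset)
    return (WEIGHTS["energy"] * energy + WEIGHTS["carbon"] * carbon
            - WEIGHTS["decentralisation"] * 10 * decentralisation
            - WEIGHTS["trust"] * 10 * trust)

# Exhaustive search is only feasible for this tiny example; the thesis uses search-based optimisation.
candidates = [list(c) for r in range(2, len(MINERS) + 1)
              for c in itertools.combinations(MINERS, r)]
best = min(candidates, key=score)
print("selected miners:", [m["id"] for m in best], "score:", round(score(best), 2))
```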
Item Restricted Optimising IDS configurations for IoT Networks Using AI approaches (Saudi Digital Library, 2023) Alshahrani, Abdulmonem; John A. Clark

The number of internet-connected smart objects, known as the Internet of Things (IoT), has increased significantly in recent years. The low cost of manufacturing has enabled a proliferation of smart devices across many tasks and domains. Such devices, however, are typically resource-constrained. This has led to the emergence of Low-Power and Lossy Networks (LLNs), which require efficient communication protocols. The Routing Protocol for Low-Power and Lossy Networks (RPL) has been designed for such a purpose and is the de facto standard routing protocol for the IoT.

Nevertheless, RPL-enabled networks are susceptible to many attacks, as these devices are unattended, resource-constrained, and connected via unreliable networks. Deploying Intrusion Detection Systems (IDSs) in such a large and resource-constrained environment is a challenging task. The resource-constrained nature of many devices and nodes restricts what tasks those nodes can realistically be expected to perform. There may be a great many choices as to what detection functionality is allocated and where. There are cost/benefit trade-offs between them, and inappropriately favouring one over another may cause an ineffective IDS deployment. In this research, we investigate the use of a metaheuristic-based optimisation method, namely a Genetic Algorithm (GA), to discover optimal IDS placements and configurations for LLNs. To the best of our knowledge, this is the first attempt to optimise IDS configurations for emerging and constrained networks while incorporating a wider set of aspects than currently considered. Our approach seeks to optimise and balance detection performance (either detection rate or F1 score), coverage (nodes are monitored by an appropriate number of probes), feasibility cost (nodes host detection functionality within their capability), and deployment cost (seeking to reduce the number of probes deployed). We propose a framework that makes trade-offs between these functional and non-functional constraints. A genetic algorithm-based optimisation approach is developed to address the IDS optimisation task. However, the fitness function is evaluated in part via a computationally expensive simulation. We show how a neural network can be used as a surrogate fitness function evaluator, providing better results more cheaply. Experimental results show that the proposed function approximation is more computationally efficient: our approximation-based GA system is 1.6 times faster than the corresponding simulation-based GA system, and it also gives better results. Furthermore, when used repeatedly to generate candidate placements and configurations, the resource costs per generation reduce drastically. The surrogate model is valuable as it significantly reduces evaluation time and computation; however, generality is still a limitation. Therefore, we propose a transfer-learning Deep Neural Network (DNN) approach that harnesses the experience of previously trained neural networks to develop a general proxy model for evaluating IDS configurations of newly presented network variants more accurately.
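The surrogate idea above can be illustrated with a small sketch: train a neural-network regressor on (configuration, fitness) pairs obtained from a limited budget of "real" evaluations, then rank new candidate placements with the cheap regressor. The binary probe-placement encoding, the stand-in simulator and the network size below are all assumptions rather than the thesis's setup.

```python
# Hypothetical surrogate-fitness sketch: an MLP regressor stands in for an expensive
# IDS-placement simulation when ranking candidate configurations (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_NODES = 20   # candidate configuration = which nodes host detection probes (0/1 per node)

def expensive_simulation(config):
    """Stand-in for the real network simulation: a made-up detection/cost trade-off."""
    coverage = config.mean()
    deployment_cost = config.sum() / N_NODES
    return coverage * 0.8 - 0.3 * deployment_cost + rng.normal(0, 0.01)

# Build a training set from a limited budget of "real" simulations.
train_configs = rng.integers(0, 2, size=(300, N_NODES)).astype(float)
train_fitness = np.array([expensive_simulation(c) for c in train_configs])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
surrogate.fit(train_configs, train_fitness)

# Inside the GA loop, candidates would now be ranked by the cheap surrogate instead.
candidates = rng.integers(0, 2, size=(1000, N_NODES)).astype(float)
predicted = surrogate.predict(candidates)
best = candidates[np.argmax(predicted)]
print("surrogate-preferred configuration:", best.astype(int))
print("checked with the stand-in simulation:", round(expensive_simulation(best), 3))
```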
Item Restricted GaAs-Based Integration Photonics Waveguides and Splitting Elements (Cardiff University, 2023-05-07) AlBiladi, Tahani; Smowton, Peter; Beggs, Daryl

With the development of long-lived, epitaxially grown InAs quantum dot lasers in GaAs on silicon, GaAs-based photonics has become a promising system for integrating large numbers of small-footprint active and passive components on the same substrate. To ensure high performance of a circuit, each component, or building block, needs to be individually investigated so that a library of optimised components can be developed. In this thesis, several GaAs-based passive integrated photonic components are proposed and analysed by employing commercially available multi-dimensional simulation tools, with the aim of understanding the performance and tolerances of the important component functions prior to the manufacturing stage. These components will be useful as basic and composite building blocks in future GaAs-on-silicon photonic integrated circuits.

Deeply etched waveguides are investigated, and single-mode operation is shown to be maintained under -0.1 to +0.1 micrometre variation in width, etching depth and core height, and under variation in wavelength between 1.2 and 1.4 micrometre, making the waveguides tolerant of typical fabrication errors. Single-mode operation is also mapped as a function of core width, height and etching depth for shallow etched waveguides. Efficient tapers are optimised for the propagation of light between multimode and single-mode waveguides. The tapers are tolerant of width, etching depth and wavelength changes of -0.1 to +0.1 micrometre with transmission higher than 98%. Two compact splitters, a multimode interferometer (MMI) and a Y-branch, are optimised at 1.3 micrometre. The multimode interferometer has an efficiency of up to 99%, with a reduction of about 4% over spectral and geometrical changes of width, length and etching depth of -0.1 to +0.1 micrometre. An efficient Y-branch is designed with an efficiency of up to 94%, although it is less stable than the multimode interferometer against variations such as etching depth and wavelength. It is found that single-mode waveguides operating at 1.3 micrometre are achievable, and that efficient tapers and splitters can be obtained within the fabrication capabilities available in a university facility. Hence, these components are appropriate for on-chip integration.
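As a very rough, back-of-envelope companion to the tolerance results above, the snippet below uses the textbook symmetric-slab condition to estimate the maximum guiding-layer thickness for single-mode operation at 1.3 micrometre; the refractive indices are assumed typical GaAs/AlGaAs values, and a real deeply etched ridge requires the full multi-dimensional mode solving reported in the thesis.

```python
# Back-of-envelope symmetric-slab estimate of the single-mode thickness limit.
# Illustrative only: the indices below are assumed values, not the thesis's layer structure.
import math

wavelength_um = 1.3
n_core = 3.40       # assumed GaAs core index near 1.3 micrometre
n_clad = 3.20       # assumed AlGaAs cladding index

# For a symmetric slab, the first higher-order TE mode cuts off when
# (pi * d / wavelength) * sqrt(n_core^2 - n_clad^2) = pi / 2,
# so single-mode operation requires d < wavelength / (2 * sqrt(n_core^2 - n_clad^2)).
slab_na = math.sqrt(n_core**2 - n_clad**2)
d_max_um = wavelength_um / (2 * slab_na)
print(f"slab numerical aperture ~ {slab_na:.3f}")
print(f"single-mode slab thickness limit ~ {d_max_um:.2f} micrometre at {wavelength_um} micrometre")
```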