Saudi Cultural Missions Theses & Dissertations

Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10


Search Results

Now showing 1 - 4 of 4
  • Item (Restricted)
    Cloud computing efficiency: optimizing resource utilization, energy consumption, latency, availability, and reliability using intelligent algorithms
    (The University of Western Australia, 2024) Alelyani, Abdullah Hamed A; Datta, Amitava; Ghulam, Mubasher Hassan
    Cloud computing offers significant potential for transforming service delivery with a cost-efficient, pay-as-you-go model, which has led to a dramatic increase in demand. Virtual machine (VM) and container technologies further optimize resource utilization in cloud environments, and they improve application reliability by distributing replicated tasks across different physical machines (PMs). However, several persistent issues remain, including energy consumption, resource management, network traffic costs, availability, latency, service level agreement (SLA) violations, and reliability. Addressing these issues is critical for ensuring quality of service (QoS). This thesis proposes approaches that address these issues and improve cloud performance.
  • Item (Restricted)
    Resource Management Algorithms for Software Defined Networks-Based Edge-Cloud Computing
    (Universiti Putra Malaysia, 2024) Alomari, Amirah Hassan; Subramaniam, Shamala A/P K
    The integration of Software-Defined Networking (SDN), Edge Computing, and Cloud Computing represents a transformative synergy in modern network and computing architectures. SDN enhances network flexibility by separating the control and data planes, a concept that becomes particularly valuable when combined with edge computing, which places computational resources closer to data sources. Cloud computing complements these advantages by offering scalable, on-demand resources to a wide range of applications and workloads and by ensuring resource availability across the network. Recent advancements consider the adoption of SDN infrastructure to give cloud and edge computing dynamic controllability and manageability. However, integrating SDN into cloud and edge poses key challenges, including suboptimal resource utilisation in heavily loaded SDN-Cloud networks, which leads to network congestion, QoS violations, and increased power consumption. Controller congestion in SDN systems adds delays, reduces scalability, and prevents the system from handling high traffic loads efficiently, posing a significant challenge for optimising network performance. In addition, conflicts in prioritisation complicate the efficient allocation of resources, which can degrade QoS and network efficiency. To address these challenges, three algorithms are proposed for SDN-Cloud and SDN Edge-Cloud platforms. The Dual-Phase Virtual Machine (VM) allocation algorithm (D-Ph) optimises resources in SDN-Cloud networks, considering processing capacity and memory requirements, to enhance QoS and power efficiency. The Queue Theory Model-based Adaptive Reinforcement Learning algorithm (QTM-ARL) optimises load balancing in the SDN Edge-Cloud platform while maintaining QoS constraints. The Priority-Aware Scheduler (PAS-Q), which is based on QoS constraints and incorporates a rate-limit mechanism, manages network traffic efficiently while prioritising VoIP traffic over video streaming to enhance network performance. The proposed algorithms are evaluated through event-driven simulation (CloudSimSDN) and MATLAB, employing real workload datasets and delay-sensitive applications. Results demonstrate D-Ph's efficiency in balancing network performance and power consumption in heterogeneous, heavily loaded, large-scale SDN-Cloud networks in terms of response time, network and CPU performance, QoS violation rate, and power consumption. They also show QTM-ARL's effectiveness in maintaining QoS in a hierarchical multi-controller system with fluctuating data flows, and PAS-Q's ability to prioritise low-latency VoIP traffic over video streaming while achieving the desired level of service quality for real-time communication applications and fair resource utilisation. Future research can explore advanced AI, emerging technologies, eco-friendly practices, and adaptive SDN architectures to enhance the efficiency, security, and sustainability of SDN-based Edge-Cloud systems. (A minimal illustrative sketch of priority-aware scheduling with a rate limit appears after this listing.)
  • Item (Restricted)
    Optimising Computational Offloading and Resource Management in Online and Stochastic Fog Computing Systems
    (Saudi Digital Library, 2023-12-14) Alenizi, Faten; Omer, Rana
    Fog computing is a potential solution to the shortcomings of cloud-based processing of IoT tasks, such as high latency, poor location awareness, and security concerns attributed to the distance between IoT devices and cloud-hosted servers. Although fog computing has evolved to address these challenges, fog resources are limited and must be utilised effectively, otherwise its advantages are lost. Moreover, the increasing number of IoT devices and the amount of data they generate make optimising Quality of Service (QoS) in IoT applications, computational offloading, and managing fog resources more challenging. In this context, the problem of computational offloading and resource management is investigated in online and stochastic fog systems. For dynamic online fog systems, we propose a combination of two algorithms: dynamic task scheduling (DTS) and dynamic energy control (DEC). These were first applied with a fixed offloading threshold (the criterion by which a fog node decides whether a task should be offloaded to a neighbour, and to which neighbour, rather than executed locally), with the aim of minimising overall delay, improving the throughput of user tasks, and minimising energy consumption at the fog layer while maximising the use of resource-constrained fog nodes. The approach is further enhanced by applying a dynamic offloading threshold. Compared to other benchmarks, our approach can reduce latency by up to 95.4%, improve throughput by 41%, and reduce energy consumption in fog nodes by up to 55.7%. For stochastic fog systems, we address the computational offloading and resource management problem with the aim of minimising the average energy consumption of fog nodes while meeting the QoS requirements of tasks. We formulate it as a stochastic optimisation problem, decompose it into two subproblems, and propose a scheme called Joint Q-learning and Lyapunov Optimization (JQLLO) to solve it. Simulation results demonstrate that JQLLO outperforms a set of baselines. (A minimal illustrative sketch of a threshold-based offloading decision appears after this listing.)
  • Item (Restricted)
    Optimizing Task Allocation for Edge Compute Micro-Clusters
    (2023-07-24) Alhaizaey, Yousef; Singer, Jeremy
    There are over 30 billion devices at the network edge, largely driven by the unprecedented growth of the Internet-of-Things (IoT) and 5G technologies. These devices are used in a wide range of applications, including but not limited to smart city systems, innovative agriculture management systems, and intelligent home systems. Deployment issues such as networking and privacy problems dictate that computing should occur close to the data source, at or near the network edge. Edge and fog computing are recent decentralised computing paradigms proposed to augment cloud services by extending computing and storage capabilities to the network's edge, enabling computational workloads to be executed locally. The benefits include reducing the strain on the networking backhaul, improving network latency, and enhancing application responsiveness. Many edge and fog deployment solutions and infrastructures are being employed to deliver cloud resources and services at the edge of the network, for example cloudlets and mobile edge computing. This thesis focuses on edge micro-cluster platforms for edge computing. Edge computing micro-clusters are small, compact, decentralised groups of interconnected computing resources located close to the edge of a network. They typically comprise heterogeneous but resource-constrained computing resources, such as small compute nodes like Single Board Computers (SBCs), storage devices, and networking equipment, deployed in local area networks such as smart home management systems. The goal of edge computing micro-clusters is to bring computation and data storage closer to IoT devices and sensors to improve the performance and reliability of distributed systems. Because of the diversity in system architecture, resource management and workload allocation represent a substantial challenge for such resource-limited, heterogeneous micro-clusters, and task allocation and workload management are therefore complex problems. This thesis investigates the feasibility of edge micro-cluster platforms for edge computation; specifically, it examines the performance of micro-clusters in executing IoT applications. It also evaluates various optimisation techniques for task allocation and workload management in edge compute micro-cluster platforms, including simple heuristics, mathematical optimisation, and metaheuristic techniques, applied to the task allocation problem in reconfigurable edge computing micro-clusters. Implementation and performance evaluation take place in a realistic edge environment, using a constructed micro-cluster system comprising heterogeneous computing nodes and a benchmark set of edge-relevant applications. Overall, the research characterises and demonstrates a feasible use case for micro-cluster platforms in edge computing environments and provides insight into the performance of various task allocation optimisation techniques for such micro-cluster systems. (A minimal illustrative sketch of a greedy task allocation heuristic for a heterogeneous micro-cluster appears after this listing.)
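The second result above describes PAS-Q, a priority-aware scheduler that serves delay-sensitive VoIP traffic ahead of video streaming and applies a rate-limit mechanism. The minimal Python sketch below shows one way such behaviour can look; it is not the thesis's implementation, and the class names (TokenBucket, PriorityAwareScheduler), the choice of a token bucket as the rate limiter, and all parameter values are assumptions made for illustration.

    import time
    from collections import deque


    class TokenBucket:
        """Simple token-bucket rate limiter for the lower-priority traffic class."""

        def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, size_bytes: int) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= size_bytes:
                self.tokens -= size_bytes
                return True
            return False


    class PriorityAwareScheduler:
        """Always serves the high-priority (VoIP) queue first; video packets are
        released only when the rate limiter has enough tokens."""

        def __init__(self, video_rate_bps: float, video_burst_bytes: float):
            self.voip = deque()
            self.video = deque()
            self.video_limiter = TokenBucket(video_rate_bps / 8, video_burst_bytes)

        def enqueue(self, packet: dict) -> None:
            (self.voip if packet["class"] == "voip" else self.video).append(packet)

        def dequeue(self):
            if self.voip:
                return self.voip.popleft()
            if self.video and self.video_limiter.allow(self.video[0]["size"]):
                return self.video.popleft()
            return None


    # Usage: VoIP bypasses the limiter, video is shaped to the configured rate.
    sched = PriorityAwareScheduler(video_rate_bps=4_000_000, video_burst_bytes=64_000)
    sched.enqueue({"class": "video", "size": 1400})
    sched.enqueue({"class": "voip", "size": 200})
    print(sched.dequeue()["class"])  # -> "voip"

Serving the VoIP queue strictly first keeps its latency low, while the token bucket caps the bandwidth the lower-priority video class can claim, which is the qualitative behaviour the abstract describes.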
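The third result above defines an offloading threshold as the criterion by which a fog node decides whether a task runs locally or is offloaded to a neighbour, and to which neighbour, and then replaces the fixed threshold with a dynamic one. The sketch below illustrates that kind of decision; the FogNode model, the load metric, and the averaging rule used as a dynamic threshold are illustrative assumptions, not the DTS, DEC, or JQLLO algorithms from the thesis.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class FogNode:
        name: str
        capacity: float           # work units the node can process per second
        queued_work: float = 0.0  # current backlog, in work units

        @property
        def load(self) -> float:
            # Estimated seconds needed to drain the current backlog.
            return self.queued_work / self.capacity


    def choose_executor(local: FogNode, neighbours: List[FogNode],
                        threshold: float) -> FogNode:
        """Run locally while the local load is below the offloading threshold;
        otherwise offload to the least-loaded neighbour, if it is better off."""
        if local.load < threshold or not neighbours:
            return local
        best = min(neighbours, key=lambda n: n.load)
        return best if best.load < local.load else local


    def dynamic_threshold(nodes: List[FogNode]) -> float:
        # One plausible dynamic rule (an assumption, not the thesis's rule):
        # use the average load across the fog layer as the threshold.
        return sum(n.load for n in nodes) / len(nodes)


    local = FogNode("fog-1", capacity=10, queued_work=9)
    peers = [FogNode("fog-2", capacity=10, queued_work=3),
             FogNode("fog-3", capacity=8, queued_work=7)]
    target = choose_executor(local, peers, dynamic_threshold([local] + peers))
    print(f"run task on {target.name}")  # -> fog-2 in this example

In this toy example the overloaded node fog-1 offloads to the least-loaded neighbour fog-2; a node whose load sits below the threshold simply keeps its tasks.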
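The fourth result above evaluates heuristic, mathematical, and metaheuristic optimisation techniques for task allocation on heterogeneous micro-clusters. As one concrete instance of the simplest class, the sketch below applies a greedy longest-processing-time-first heuristic across nodes of different speeds; the node names, relative speeds, and task sizes are made-up example values, not data from the thesis.

    from typing import Dict, Tuple


    def greedy_allocate(tasks: Dict[str, float],
                        node_speeds: Dict[str, float]) -> Tuple[Dict[str, str], float]:
        """tasks: task name -> work units; node_speeds: node name -> work units per second.
        Returns an assignment (task -> node) and the resulting makespan in seconds."""
        finish_time = {node: 0.0 for node in node_speeds}
        assignment: Dict[str, str] = {}
        # Place the largest tasks first, each on the node that would finish it earliest.
        for task, work in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
            node = min(finish_time, key=lambda n: finish_time[n] + work / node_speeds[n])
            finish_time[node] += work / node_speeds[node]
            assignment[task] = node
        return assignment, max(finish_time.values())


    # Example: three heterogeneous SBC-class nodes and five tasks of differing sizes.
    nodes = {"rpi4-a": 1.0, "rpi4-b": 1.0, "jetson": 2.5}  # hypothetical relative speeds
    tasks = {"t1": 8.0, "t2": 5.0, "t3": 4.0, "t4": 2.0, "t5": 1.0}
    plan, makespan = greedy_allocate(tasks, nodes)
    print(plan, f"makespan={makespan:.2f}s")

Placing the largest tasks first, each on whichever node would finish it earliest, is a standard makespan heuristic; the abstract describes evaluating this class of simple heuristic alongside mathematical and metaheuristic alternatives.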

Copyright owned by the Saudi Digital Library (SDL) © 2025