SACM - Australia

Permanent URI for this collection: https://drepo.sdl.edu.sa/handle/20.500.14154/9648


Search Results

Now showing 1 - 4 of 4
  • iBFog: Intelligent Blockchain-based Methodology for Verifiable Fog Selection and Participation (restricted access)
    (University of Technology Sydney, 2024-05-28) Alshuaibi, Enaam Abdulmonem O; Hussain, Farookh Khadeer
    Fog computing has emerged as an important, game-changing technology for addressing the resource challenges of the Internet of Things (IoT). However, the rapid increase in computational resource requirements at the edge of the network means that small-to-medium enterprises providing fog services (FogSMEs) face challenges in scalability, resource limitations, and network reliability. As a result, FogSMEs are unable to meet modern data processing, security, and decision-making requirements. Exploring strategies that allow FogSMEs to maximize the benefits of the distributed nature of fog computing, this thesis discusses volunteer computing as an innovative and cost-effective way to improve their infrastructure. By leveraging idle computational resources from a global network of volunteer users, FogSMEs can deliver scalable, real-time services without significant investment in physical infrastructure. The research identifies significant gaps in the existing literature, including the absence of intelligent platforms to manage volunteer resources, of dynamic selection mechanisms for volunteer nodes, and of incentives to increase volunteer recruitment. To bridge these gaps, this thesis proposes iBFog, an intelligent and reliable framework for selecting and verifying volunteer computing resources for fog scalability, which addresses three critical objectives: developing a trustworthy platform for managing volunteer nodes, designing an incentive mechanism to motivate participation, and implementing an intelligent selection mechanism for optimal node utilization. These objectives aim to overcome the challenges of fog scalability by ensuring efficient, secure, and reliable fog computing networks, especially for FogSMEs. The thesis contributes to the literature along three dimensions: a systematic literature review that identifies the need for an intelligent framework utilizing volunteer computing for fog scalability, and the development of the iBFog framework, which comprises a blockchain-based fog repository built on Hyperledger Fabric, a game-based incentive module using Stackelberg game theory, and a ranking and selection module using three methods (statistical, machine learning, and deep learning). These components collectively address the identified research gaps, offering a comprehensive solution to the challenges of FogSME scalability. By intelligently managing, incentivizing, and selecting volunteer computing resources, the iBFog framework advances fog computing with a novel approach to enhancing its scalability. The framework not only addresses the immediate challenges of fog computing scalability but also lays the groundwork for future research and development in distributed computing environments.
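The incentive module in the entry above is described as a Stackelberg game between a FogSME and volunteer nodes. The thesis's formulation is not given here, so the sketch below only illustrates the general leader-follower structure under an assumed quadratic volunteer cost model; `value_per_unit`, `cost_coeffs`, and the closed-form optimum are illustrative assumptions rather than the thesis's actual design.

```python
# Illustrative sketch only: a Stackelberg incentive game in which a FogSME
# (leader) announces a per-unit reward and volunteers (followers) choose how
# much idle capacity to contribute. The quadratic cost model and all parameter
# values are assumptions made for this example.

def follower_best_response(reward: float, cost_coeff: float) -> float:
    """Volunteer utility: reward*x - cost_coeff*x^2  ->  best response x* = reward / (2*cost_coeff)."""
    return reward / (2.0 * cost_coeff)

def leader_utility(reward: float, value_per_unit: float, cost_coeffs) -> float:
    """FogSME utility: (value - reward) * total capacity contributed at the followers' best responses."""
    total = sum(follower_best_response(reward, c) for c in cost_coeffs)
    return (value_per_unit - reward) * total

def optimal_reward(value_per_unit: float) -> float:
    """With quadratic volunteer costs, the leader's first-order condition gives reward* = value / 2."""
    return value_per_unit / 2.0

if __name__ == "__main__":
    cost_coeffs = [0.8, 1.2, 2.5]      # hypothetical per-volunteer cost coefficients
    value_per_unit = 10.0              # hypothetical value of one unit of volunteered capacity
    r_star = optimal_reward(value_per_unit)
    contributions = [follower_best_response(r_star, c) for c in cost_coeffs]
    print("optimal reward:", r_star)
    print("volunteer contributions:", contributions)
    print("leader utility:", leader_utility(r_star, value_per_unit, cost_coeffs))
```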
  • Intelligent Context-aware Fog Node Discovery and Trust-based Fog Node Selection (restricted access)
    (University of Technology Sydney, 2024) Bukhari, Afnan; Hussain, Farookh Khadeer
    In today’s highly advanced technological age, edge devices are widely used. Cisco predicts that more than 500 billion edge devices (also referred to in this research as fog consumers) will be in use by 2030 [1]. Data from all these devices may experience significant delays when handled, processed, and stored through cloud computing. Fog computing resolves this issue by bringing processing, storage, and networking to the edge of the network, near fog consumers, which reduces latency, bandwidth consumption, and response times. Researchers have yet to address the critical challenge of identifying and selecting a reliable and relevant fog node for fog consumers. Existing approaches consider the discovery and selection of fog nodes from a networking point of view; no approach uses AI-driven mechanisms for intelligent fog node discovery and selection. This research proposes an intelligent and distributed framework for context-aware fog node discovery and trust-based fog node selection: it discovers the closest fog nodes in a context-aware manner and selects a reliable fog node based on its trust value. The proposed approach is based on a distributed Fog Registry Consortium (FRC) between fog consumers and fog nodes that facilitates the discovery and selection of fog nodes. To ensure that tasks from a fog consumer are processed in a timely manner, a crucial aspect of fog node discovery is the geographic distance between the fog node and the fog consumer, as this directly impacts latency, response time, and bandwidth usage for fog consumers. Location-based context awareness is therefore one of the key decision criteria for fog node discovery, ensuring that QoS metrics are satisfied. We propose the Fog Node Discovery Engine (FNDE) within the Distributed Fog Registry (DFR), within the FRC, as an intelligent and distributed discovery mechanism that enables a fog consumer to discover fog nodes in a context-aware manner. The KNN, K-d tree, and brute-force algorithms are used to discover fog nodes based on the location-based context-aware criteria of fog consumers and fog nodes. Fog node selection is a crucial aspect of developing a fog computing system; it forms the foundation for other techniques such as resource allocation, task delegation, load balancing, and service placement. Fog consumers must choose the most suitable and reliable fog node(s) from the available options based on specific criteria. This research presents the Fog Node Selection Engine (FNSE), an intelligent method that assists fog consumers in selecting appropriate and reliable fog nodes in a trustworthy manner. This mechanism predicts the trust value of fog nodes to help the user select a reliable fog node. Our selection approach derives the trust value of a fog node from the values of its QoS factors. If historical QoS information is available for a fog node, the Trust Evaluation Engine (TEE) within the FNSE is responsible for predicting its trust value. With the trust values of fog nodes, the FNSE can rank them and select the most reliable fog node in the network.
    We propose three TEE mechanisms: one based on fuzzy logic, one based on logistic regression, and one based on a deep neural network. However, if the QoS values of a fog node are unknown, the FNSE is unable to make a meaningful selection. To solve this cold-start fog node problem, we propose the Bootstrapping Engine (BE), an intelligent trust-based fog node bootstrapping framework. This framework addresses the cold-start problem in fog computing environments, enabling fog consumers to make informed and trustworthy decisions when selecting fog nodes for their applications. The BE employs two key modules: the QoS prediction module and the reputation prediction module. The QoS prediction module uses the k-means clustering and KNN algorithms to predict the initial QoS values of new cold-start fog nodes. Within the reputation prediction module, we propose three AI methods, namely fuzzy logic-based, regression-based, and deep learning-based reputation prediction, to predict and evaluate the trust value of new cold-start fog nodes. Finally, we present the simulation of the framework and the evaluation results for each proposed engine, highlighting the best-performing methods.
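The entry above names KNN, K-d tree, and brute-force algorithms for location-based discovery (FNDE) followed by trust-based selection (FNSE). Below is a minimal sketch of that two-step flow, assuming planar coordinates, hypothetical node records, and a simple weighted-QoS trust score standing in for the thesis's TEE models.

```python
# Illustrative sketch only: k-d tree discovery of nearby fog nodes, then a
# trust-based pick. Node data, the trust-score weights, and planar coordinates
# are assumptions made for this example.
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical fog nodes: planar (x, y) position plus recorded QoS values
fog_nodes = [
    {"id": "fog-1", "pos": (2.0, 3.0), "latency_ms": 12.0, "availability": 0.98},
    {"id": "fog-2", "pos": (2.5, 2.8), "latency_ms": 25.0, "availability": 0.90},
    {"id": "fog-3", "pos": (9.0, 1.0), "latency_ms": 8.0,  "availability": 0.99},
]

tree = cKDTree(np.array([n["pos"] for n in fog_nodes]))

def discover(consumer_pos, k=2):
    """FNDE step (sketch): return the k geographically closest fog nodes."""
    _, idx = tree.query(consumer_pos, k=k)
    return [fog_nodes[i] for i in np.atleast_1d(idx)]

def trust_score(node):
    """FNSE step (sketch): assumed weighted-QoS trust value in [0, 1]."""
    latency_term = 1.0 / (1.0 + node["latency_ms"] / 100.0)
    return 0.5 * node["availability"] + 0.5 * latency_term

def select(consumer_pos, k=2):
    """Discover nearby nodes, then pick the one with the highest trust score."""
    return max(discover(consumer_pos, k), key=trust_score)

print(select((2.2, 3.1))["id"])   # picks fog-1: close and highly trusted
```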
  • Optimising Resource Allocation and Offloading for Long-Term Load Balancing Solutions in Fog Computing Networks (restricted access)
    (University of Technology Sydney, 2024-02-02) Sulimani, Hamza; Prasad, Mukesh
    Most emerging critical IoT applications have unique requirements and restrictions that must be met for them to operate efficiently; otherwise, they may be rendered useless. Latency is one of these requirements. Fog computing (FC) complements cloud computing and has proven to be an ideal computing environment for critical IoT applications. Distributed computing systems such as FC have an inherent problem, known as the load difference problem, when computing units carry different computational loads. Offloading and service placement are among the techniques used to address this problem. Although prevalent offloading is the appropriate technique for this research, its procedures generate hidden costs in a system, such as decision time, distant offloading, and network congestion. Many researchers attempt to reduce these costs so as to approach the results of static offloading (in stable environments). This research, however, seeks to overcome the hidden costs of prevalent offloading techniques and balance the load in a fog environment by utilising the sustainability concept. It argues that increasing physical resources is the only long-term way to improve efficiency. The study consists of two consecutive phases. The first phase attempts to find the optimum solution between task offloading and service placement, one that restores low-cost offloading. The second phase is a sustainable load-balancing monitoring system (SlbmS), a comprehensive solution that removes the limitations of the optimum solution found in the first phase. SlbmS uses the sustainability concept, together with reinforcement learning, to address the limited resources in edge computing. The experimental results of the two phases show that hybrid offloading outperforms the service placement policy in the first phase and prevalent offloading in the second phase, by utilising the behaviour of static offloading to reduce offloading costs in unpredictable environments. The study explores a new area of research that amends the network topology to improve resource provisioning and provide free resources at the network's edge. This research paves the way for a new dimension of analysis and is the first to recommend physical expansion of the fog layer using the sustainability concept.
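The second phase above (SlbmS) applies reinforcement learning to the resource-limitation problem. The thesis's formulation is not given here; the sketch below is only a toy tabular Q-learning loop for a local-versus-offload decision, with an assumed queue model, arrival process, and offloading cost.

```python
# Illustrative sketch only: tabular Q-learning for a local-vs-offload decision,
# standing in for the RL component of an SlbmS-style load balancer. The queue
# dynamics, rewards, and hyperparameters are assumptions for illustration.
import random

MAX_QUEUE = 10
ACTIONS = (0, 1)              # 0 = process locally, 1 = offload a batch to a neighbour fog node
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
OFFLOAD_COST = 2.0            # assumed fixed cost of distant offloading (decision time, congestion)

Q = [[0.0, 0.0] for _ in range(MAX_QUEUE + 1)]

def step(queue, action):
    """Toy dynamics: serve locally (1 task/step) or offload a batch of 3, then new tasks arrive."""
    if action == 0:
        queue = max(queue - 1, 0)
        cost = 0.0
    else:
        queue = max(queue - 3, 0)
        cost = OFFLOAD_COST
    queue = min(queue + random.choice((0, 1, 2)), MAX_QUEUE)   # assumed arrivals per step
    reward = -float(queue) - cost                              # queue length as a latency proxy
    return queue, reward

def choose(queue):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return 0 if Q[queue][0] >= Q[queue][1] else 1

queue = 0
for _ in range(50_000):
    action = choose(queue)
    nxt, reward = step(queue, action)
    Q[queue][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[queue][action])
    queue = nxt

print("greedy policy per queue length:",
      ["offload" if q1 > q0 else "local" for q0, q1 in Q])
```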
  • Software and Hardware Redundancy Approaches to Improve Performance and Service Availability in Fog Computing (restricted access)
    (Saudi Digital Library, 2023-12-28) Alraddady, Sara; Soh, Boon
    Fog computing is a new distributed computing paradigm. It was introduced to address the massive increase in the number of connected devices, since cloud computing faces difficulties handling all requests placed simultaneously. This new paradigm, which is an extension of cloud computing, can increase the efficiency of services provided in many sectors, including health care, industry, agriculture, environmental hazard management, smart cities, and autonomous transportation. Some sectors, such as health care and autonomous driving, are highly intolerant of delays. In such sectors, high response times and poorly available services can lead to fatal results, endangering many lives. By contrast, other sectors, such as e-commerce and telecommunications, can tolerate delays to a certain extent, yet there is always a cost. Delays in such systems do not result in fatalities, as can happen in delay-intolerant sectors, but they can cause degraded quality of service and financial loss. Hence, regardless of the level of delay tolerance, delays are not desirable. Given the distributed and diverse nature of fog computing, challenges such as device heterogeneity need to be addressed to prepare fog computing for commercial use. Because any device can be a fog node, energy constraints must be considered to maximise device utilisation while still delivering the required quality of service. In addition, different devices use various connection methods, which increases the complexity of network connectivity in fog computing. It is also important to prevent fog nodes from being exploited and to ensure that requests are not processed at random by different fog nodes. This thesis incorporates a management layer in fog computing to address the identified challenges. The proposed model was evaluated using simulations in iFogSim. The results show improved performance on important metrics such as execution time and bandwidth consumption compared to several fog architectures. For higher availability, a duplex management system is proposed and designed using Petri nets. A Markov chain is used to calculate failure probabilities for each node in the management layer, and an availability analysis is presented.
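The availability analysis above relies on a Markov model of the management layer. Below is a minimal sketch of how steady-state availability might be computed for one management node and for a duplex pair, assuming a two-state (up/down) continuous-time Markov chain, independent failures, and hypothetical failure and repair rates; the thesis's actual model may differ.

```python
# Illustrative sketch only: steady-state availability from a two-state (up/down)
# CTMC, then a duplex pair under an independence assumption. Failure and repair
# rates are hypothetical values chosen for this example.
import numpy as np

failure_rate = 1.0 / 1000.0   # lambda: assumed failures per hour (MTTF = 1000 h)
repair_rate = 1.0 / 8.0       # mu: assumed repairs per hour (MTTR = 8 h)

# Generator matrix for states (0 = up, 1 = down); each row sums to zero.
Q = np.array([[-failure_rate, failure_rate],
              [repair_rate, -repair_rate]])

# Solve pi @ Q = 0 together with the normalisation sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

single_node_availability = pi[0]                                   # equals mu / (lambda + mu)
duplex_availability = 1.0 - (1.0 - single_node_availability) ** 2  # independent duplex pair

print(f"single node availability: {single_node_availability:.6f}")
print(f"duplex system availability: {duplex_availability:.8f}")
```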

Copyright owned by the Saudi Digital Library (SDL) © 2024