Privacy-Preserving Models in Edge-Cloud Interplay for Smart Systems

Date

2024-07

Publisher

RMIT University

Abstract

The progress of leading technologies, including artificial intelligence (AI), the Internet of Things (IoT), and edge and cloud computing, has catalysed the shift toward smart industries across various verticals, from intelligent banking to healthcare systems. In this regard, edge and cloud computing offer low-cost, on-demand computing resources, and service providers build data analytics services on them to deliver precise insights to end-users (individuals and organisations), thereby improving their business operations. Despite this promise, data privacy and security issues hinder the adoption of such technologies in practice: trained AI models often retain traces of their training data, which may hold sensitive information, and can leak it in the event of a successful attack. Considering these issues, this research aims to develop privacy-preserving edge-cloud-based models that maintain robust utility and efficiency. We began by designing privacy-preserving data analytical models based on Differential Privacy (DP). DP techniques such as DP-Stochastic Gradient Descent (DP-SGD) provide a robust mechanism to protect individual privacy during the training phase of AI models. The main challenge is the effect of DP noise on model utility, which is more pronounced when fine-tuning deep models than shallow ones. In addition, we proposed two other distributed privacy-preserving models tailored to specific model architectures under DP constraints. We leveraged the service-model concept to decompose a large task into several sub-tasks and abstract them as services, ensuring flexibility in the system design. In this context, intelligent analytical services can be divided into small services (microservices) and deployed at the edge server to decrease bandwidth usage and latency. In the next stage, we adopted hybrid approaches that combine multiple techniques to strengthen privacy through multiple layers of protection.
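To make the DP-SGD mechanism mentioned above concrete, the following is a minimal sketch of one training step: per-example gradients are clipped to a fixed norm and the summed gradient is perturbed with Gaussian noise before the update. This is an illustrative toy (logistic regression in NumPy); the function name `dp_sgd_step`, the hyperparameter values, and the synthetic data are assumptions for the sketch, not the thesis's actual implementation.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD step for logistic regression: per-example gradients are
    clipped to clip_norm, summed, and perturbed with Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    preds = 1.0 / (1.0 + np.exp(-(X @ w)))        # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X  # one gradient row per example
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_grad

# toy data: 8 examples, 3 features (illustrative only)
rng = np.random.default_rng(42)
X = rng.normal(size=(8, 3))
y = (X[:, 0] > 0).astype(float)
w = dp_sgd_step(np.zeros(3), X, y, rng=rng)
```

The clipping bound limits any single example's influence on the update, which is what lets the added noise yield a formal DP guarantee; the noise is also the source of the utility loss the abstract describes.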
In the first proposed Federated Deep Learning (FDL) framework, we leveraged the Generative Adversarial Network (GAN) technique to augment medical image data and provide a degree of privacy. We used proxy servers and Homomorphic Encryption (HE)-based model parameter encryption to add an extra layer of privacy and ensure the anonymity of FDL participants. In the second proposed framework, we developed a privacy-preserving face recognition (FR) system using the security features of Intel SGX (Software Guard Extensions) technology. First, we used a pre-trained deep learning (DL) model to extract embedding vectors from facial image data. Then, we reduced their dimensionality using principal component analysis (PCA) to provide a degree of privacy and improve the computational efficiency of subsequent analyses. Further, we leveraged Intel SGX to achieve secure distributed training of an extreme learning machine (ELM) model through edge-cloud interaction, reducing the computational overhead of a centralised approach. In the final stage of this research, we focused on integrity, using blockchain to enhance the trustworthiness and security of the proposed frameworks alongside privacy. In this respect, we introduced two frameworks. The first used stacked ensemble learning to train heterogeneous AI models, thus enhancing the performance of the final detection model; the integrity of selected data exchanges within the framework is maintained using blockchain, while microservices add flexibility to the framework's design. In the second, we introduced a privacy-preserving network threat detection framework within edge intelligence, orchestrated by blockchain technology, to address the privacy and security issues of industrial IoT networks. This framework focuses on detecting network attacks while preserving data privacy.
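The PCA step described above can be sketched as follows: embedding vectors are centred and projected onto their top-k principal components, so only the reduced representation leaves the edge. This is a minimal NumPy illustration; the function name `pca_reduce`, the 128-dimensional embeddings, and the choice of k are assumptions for the sketch rather than the thesis's actual pipeline.

```python
import numpy as np

def pca_reduce(embeddings, k):
    """Project embedding vectors onto their top-k principal components.
    Discarding the remaining components shrinks the data sent onward and
    makes exact reconstruction of the original embeddings harder."""
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:k].T

# toy stand-in for 128-d face embeddings from a pre-trained DL model
rng = np.random.default_rng(0)
emb = rng.normal(size=(20, 128))
reduced = pca_reduce(emb, k=16)
```

Reducing 128 dimensions to 16 here cuts both the bandwidth to the cloud and the cost of training the downstream model, which is the efficiency benefit the abstract claims alongside the partial privacy effect.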
We employed a deep autoencoder (DAE) model to mitigate the risk of privacy inference attacks by transforming the network data into a new representation. This transformed data was then used to train machine learning (ML) models in a distributed manner, yielding the intrusion detection model. The proposed frameworks have been evaluated on public benchmark datasets to assess their effectiveness compared to related studies.
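The autoencoder-based transformation can be sketched as below: an encoder is trained to compress records, and only the encoded representation is released for distributed ML training. This is a deliberately tiny one-hidden-layer NumPy illustration (the thesis uses a deep autoencoder); the function name, layer sizes, and synthetic traffic features are all assumptions for the sketch.

```python
import numpy as np

def train_autoencoder(X, hidden=4, epochs=200, lr=0.05, rng=None):
    """Train a one-hidden-layer autoencoder (tanh encoder, linear decoder)
    by gradient descent on reconstruction error; return the encoder so
    records can be released in transformed form."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # encoder
        X_hat = H @ W2 + b2               # linear decoder
        err = (X_hat - X) / n             # d(MSE)/d(X_hat), constant absorbed in lr
        gW2 = H.T @ err; gb2 = err.sum(axis=0)
        dH = err @ W2.T * (1 - H**2)      # backprop through tanh
        gW1 = X.T @ dH; gb1 = dH.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: np.tanh(Z @ W1 + b1)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))              # stand-in for network traffic features
encode = train_autoencoder(X)
Z = encode(X)                             # transformed data shared for ML training
```

The downstream intrusion detection model sees only `Z`, not the raw features, which is what limits the information available to privacy inference attacks.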

Keywords

privacy-preserving federated deep learning, privacy-preserving AI models, privacy-preserving knowledge sharing

Citation

Bugshan, Neda (2024). Privacy-Preserving Models in Edge-Cloud Interplay for Smart Systems. RMIT University. Thesis. https://doi.org/10.25439/rmt.28004099

Copyright owned by the Saudi Digital Library (SDL) © 2025