Security of Distributed and Federated Deep Learning Systems
Date
2025
Publisher
Newcastle University
Abstract
Distributed and federated deep learning (DL) systems, operating across the client-edge-cloud
continuum, have transformed real-time data processing in critical domains such as smart
cities, healthcare, and industrial Internet of Things (IoT). By distributing DL training and
inference tasks across multiple nodes, these systems enhance scalability, reduce latency, and
improve efficiency. However, this decentralisation introduces significant security challenges,
particularly concerning the availability and integrity of DL systems during training and
inference.
This thesis tackles these challenges through three main contributions.
• Edge-based Detection of Early-stage IoT Botnets: The first contribution involves
employing Modular Neural Networks (MNN), a distributed DL approach, to develop
an edge-based system for detecting early-stage IoT botnet activities and preventing
DDoS attacks. By harnessing parallel computing on Multi-Access Edge Computing
(MEC) servers, the system delivers rapid and accurate detection, ensuring uninterrupted
service availability. This addresses the research gap in detecting early-stage IoT botnet
activities as faults in network communication, enabling preventive measures before
attacks escalate. Key findings include a significant reduction in false-negative rates
and faster detection times (as low as 16 milliseconds), enabling early intervention in
large-scale IoT environments.
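The parallel, modular detection idea above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: each "module" stands in for one trained neural module scoring a slice of the traffic features, and the thread pool stands in for distribution across MEC servers. All function names, weights, and thresholds are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one trained MNN module: a linear score
# over its assigned slice of the traffic features.
def module_score(features, weights, bias=0.0):
    return sum(w * x for w, x in zip(weights, features)) + bias

def detect_botnet(feature_slices, module_weights, threshold=0.5):
    """Run all modules concurrently; flag traffic if any module fires."""
    with ThreadPoolExecutor(max_workers=len(module_weights)) as pool:
        scores = list(pool.map(
            lambda args: module_score(*args),
            zip(feature_slices, module_weights),
        ))
    return max(scores) > threshold, scores

# Toy traffic sample split into two feature slices (e.g. packet-rate
# statistics and connection-fanout statistics), illustrative weights.
flagged, scores = detect_botnet(
    feature_slices=[[0.9, 0.8], [0.1, 0.05]],
    module_weights=[[0.6, 0.5], [0.3, 0.2]],
)
```

Because the modules score independent feature slices, they can run concurrently on separate edge servers, which is what makes millisecond-scale detection latencies plausible.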
• Security Assessment of Hierarchical Federated Learning (HFL): The second contribution is a security assessment of Hierarchical Federated Learning (HFL), evaluating
its resilience against data and model poisoning attacks during training and adversarial
data manipulation during inference. Defence mechanisms such as Neural Cleanse (NC)
and Adversarial Training (AT) are explored to improve model integrity in privacy-sensitive
environments. This addresses the gap in systematically assessing the security
vulnerabilities of HFL systems, particularly in detecting and mitigating targeted attacks
in multi-level architectures. Key findings highlight that while HFL enhances
scalability and recovery from untargeted attacks, it remains vulnerable to targeted
backdoor attacks, especially in deeper hierarchical architectures, necessitating stronger
defence mechanisms.
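For context, the multi-level aggregation that HFL adds over flat federated learning can be sketched as a two-level weighted average: clients report to edge aggregators, which report to a cloud server. This is a generic FedAvg-style sketch under assumed names and toy values, not the thesis's experimental setup.

```python
# Hypothetical two-level hierarchical FedAvg sketch: clients -> edge
# aggregators -> cloud server. Models are plain lists of floats.
def fedavg(updates, sizes):
    """Weighted average of model vectors by local dataset size."""
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(u[i] * n for u, n in zip(updates, sizes)) / total
            for i in range(dim)]

def hierarchical_aggregate(edge_groups):
    """edge_groups: list of (client_updates, client_sizes) per edge."""
    edge_models, edge_sizes = [], []
    for updates, sizes in edge_groups:
        edge_models.append(fedavg(updates, sizes))  # edge-level round
        edge_sizes.append(sum(sizes))
    return fedavg(edge_models, edge_sizes)          # cloud-level round

global_model = hierarchical_aggregate([
    ([[1.0, 0.0], [3.0, 2.0]], [10, 10]),  # edge A: two clients
    ([[5.0, 4.0]], [20]),                  # edge B: one client
])
```

With size-proportional weighting at both levels, the result equals flat FedAvg over all clients; the security-relevant difference is that each extra aggregation level dilutes and can mask an individual poisoned update before the cloud ever sees it.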
• Analysis of HFL Dynamics Under Attack: The third contribution examines HFL
dynamics under attack using a Model Discrepancy score to analyse discrepancies in
model updates. This study sheds light on the impact of adversarial attacks and data
heterogeneity, providing insights for more robust aggregation methods in HFL. This
addresses the gap in understanding the dynamics of HFL under adversarial attacks
through model discrepancy phenomena. Key findings reveal that increased hierarchy
and data heterogeneity can obscure malicious activity detection, emphasising the need
for advanced aggregation methods tailored to complex, real-world scenarios.
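A discrepancy-based analysis of the kind described above can be illustrated with a simple proxy: score each client update by its distance from the aggregated (mean) model and flag outliers. The thesis's Model Discrepancy score may be defined differently; this sketch only conveys the idea, and all names and values are hypothetical.

```python
import math

# Hypothetical discrepancy score: L2 distance of each update from the
# mean model. Large scores suggest poisoned updates, but under strong
# data heterogeneity benign clients also drift, blurring the signal.
def mean_model(updates):
    dim = len(updates[0])
    return [sum(u[i] for u in updates) / len(updates) for i in range(dim)]

def discrepancy_scores(updates):
    centre = mean_model(updates)
    return [math.sqrt(sum((x - c) ** 2 for x, c in zip(u, centre)))
            for u in updates]

# Two benign updates and one outlier (e.g. a poisoned update).
scores = discrepancy_scores([[1.0, 1.0], [1.2, 0.8], [9.0, 9.0]])
suspect = scores.index(max(scores))  # index of the most anomalous update
```

The same caveat the findings point to applies here: after an edge-level averaging step, the outlier's contribution is already diluted, so the cloud-level discrepancy signal weakens as the hierarchy deepens.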
Overall, this thesis enhances the security, availability, and integrity of Distributed and
Federated DL systems by proposing novel detection and assessment methods, ultimately
laying the foundation for more resilient DL-driven infrastructures.
Keywords
Security, Federated deep learning, Distributed deep learning