Entropy-guided meta-learning defense against data poisoning attacks in blockchain-assisted federated learning
Date
2026
Publisher
Saudi Digital Library
Abstract
Federated Learning (FL) enables decentralized model training but remains vulnerable to data
poisoning attacks that degrade the performance of the global model. Existing defenses often assume
a trusted server or introduce significant detection and communication overhead, limiting
their deployment in practical, trustless environments. To address these limitations, we propose
FLEML, a blockchain-enabled secure FL framework with entropy-based anomaly detection and
meta-learning-driven aggregation. In FLEML, each client’s model update is first committed to a
blockchain through hashed transactions and verified by smart contracts to ensure update provenance
and prevent tampering. For robustness, the server computes the entropy of client update
distributions to quantify behavioral uncertainty and detect abnormal contributions. A meta-learning
module then learns an adaptive weighting function over client updates, dynamically
adjusting aggregation coefficients to suppress poisoned gradients while preserving benign updates.
This jointly optimized entropy-aware reweighting mechanism allows the global model to adapt to
evolving client behavior without assuming prior trust. Extensive experiments on MNIST, CIFAR-10,
and FashionMNIST under non-IID settings show that FLEML improves model accuracy over
FedAvg by 23.14%, 16.15%, and 18.53% on MNIST, 10.25%, 8.06%, and 14.66% on CIFAR-10,
and 1.12%, 8.55%, and 5.19% on FashionMNIST under untargeted, label-flipping, and backdoor
attacks, respectively. FLEML further achieves 92% precision in malicious update detection and reduces
communication overhead by 30% compared with AntidoteFL on CIFAR-10, demonstrating
superior robustness and efficiency in adversarial FL environments.
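The hash-commit step described in the abstract — committing each client update to a blockchain through hashed transactions before smart-contract verification — can be sketched as follows. This is an illustrative stand-in only: the function names `commit` and `verify` are assumptions, and a real deployment would record the digest on-chain and perform the check inside a smart contract rather than in Python.

```python
import hashlib

import numpy as np


def commit(update):
    """Produce a SHA-256 digest of a serialized client update.

    Stand-in for the blockchain transaction hash: the client submits
    this digest before (or alongside) the update itself.
    """
    payload = np.asarray(update, dtype=np.float64).tobytes()
    return hashlib.sha256(payload).hexdigest()


def verify(update, digest):
    """Smart-contract-style provenance check: the received update must
    reproduce the previously committed digest, so any tampering in
    transit is detected."""
    return commit(update) == digest
```

Any single-bit change to the update changes the digest, which is what lets the verification step reject tampered submissions.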
Description
This paper presents a novel secure federated learning framework named FLEML (Federated Learning with Entropy-based Meta-Learning and Blockchain), designed to mitigate data poisoning attacks in decentralized environments. The proposed framework integrates three key components: blockchain-based verification, entropy-driven anomaly detection, and meta-learning-based adaptive aggregation.
FLEML leverages blockchain technology to ensure the integrity, traceability, and authenticity of client model updates through hashed transactions and smart contracts. This eliminates reliance on trusted central servers and enhances transparency in federated learning systems. To detect malicious updates, the framework employs entropy-based metrics, including von Neumann entropy, conditional entropy, and mutual information, to quantify uncertainty in client behavior and identify anomalous contributions.
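The entropy-based detection idea can be sketched as below. Note this is a simplified illustration under stated assumptions: the paper uses von Neumann entropy, conditional entropy, and mutual information, whereas this sketch substitutes a histogram-based Shannon entropy over an update's value distribution; the function names, the bin count, and the z-score threshold are all hypothetical choices, not the paper's.

```python
import numpy as np


def update_entropy(update, bins=64):
    """Shannon entropy (bits) of a flattened update's value distribution.

    Simplified stand-in for the paper's entropy metrics: a histogram
    estimate of how 'spread out' the update values are.
    """
    flat = np.asarray(update, dtype=np.float64).ravel()
    hist, _ = np.histogram(flat, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())


def flag_anomalies(updates, z_thresh=2.0):
    """Flag clients whose update entropy deviates strongly from the cohort.

    Returns a boolean list, True meaning a suspected anomalous
    contribution; z_thresh is an illustrative cutoff.
    """
    ents = np.array([update_entropy(u) for u in updates])
    mu, sigma = ents.mean(), ents.std() + 1e-12
    return [abs((e - mu) / sigma) > z_thresh for e in ents]
```

The intuition: a poisoned update (e.g., a scaled or constant gradient) typically has a value distribution whose entropy sits far from the cohort's, so a simple deviation test already separates it from benign contributions.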
Furthermore, a meta-learning module dynamically learns optimal aggregation weights based on historical client behavior, enabling adaptive suppression of poisoned gradients while preserving benign updates. This results in a robust and scalable defense mechanism capable of handling evolving adversarial strategies without prior trust assumptions.
Extensive experimental evaluations conducted on benchmark datasets such as MNIST, CIFAR-10, and FashionMNIST under non-IID settings demonstrate that FLEML significantly improves model accuracy and resilience against untargeted, label-flipping, and backdoor attacks, while also reducing communication overhead compared to state-of-the-art methods.
The proposed framework contributes to advancing secure and trustworthy federated learning by combining decentralized trust mechanisms with intelligent anomaly detection and adaptive learning strategies.
Keywords
Federated Learning (FL); Data Poisoning Attacks; Blockchain-based Federated Learning; Entropy-based Anomaly Detection; Meta-Learning; Secure Aggregation; Trustless Systems; Mutual Information; Conditional Entropy; Von Neumann Entropy; Distributed Machine Learning; Adversarial Machine Learning; Backdoor Attacks; Label-Flipping Attacks; Non-IID Data; Privacy-Preserving Learning; Smart Contracts; Decentralized AI Systems
Citation
https://doi.org/10.1016/j.ins.2026.123389
