Adversarial poisoning attacks in federated learning systems can severely compromise model integrity, especially when malicious nodes inject corrupted updates. Existing defenses often rely on a trusted central aggregator, introducing a single point of failure and limiting scalability. To overcome these challenges, we propose SwarmShield, a decentralized, trust-aware defense framework based on Swarm Learning. SwarmShield eliminates the need for a central coordinator by distributing trust evaluation and model merging across peer nodes. It selectively transmits intermediate model layers, applies dimensionality reduction, and clusters parameter vectors to assess similarity. Trust scores are dynamically computed for each node based on its proximity to the cluster centroid, and nodes with low trust are excluded from aggregation. A secure, trust-weighted averaging mechanism is used for model updates, with integrity ensured through cryptographic hashing and blockchain logging. Extensive experimentation with different types of adversarial data poisoning attacks on the CIFAR-10 dataset with a ResNet-50 model demonstrates an average accuracy improvement of 24.8%. The framework's generalizability is further demonstrated through successful application to the real-world DermaMNIST medical imaging dataset, where SwarmShield consistently maintained or improved model accuracy across diverse attack scenarios. We also evaluate SwarmShield on the TwoLeadECG time-series dataset, highlighting its behavior under temporal adversarial settings. These results validate SwarmShield's effectiveness, scalability, and resilience in adversarial federated learning settings. Ablation studies further validate the framework's design by quantifying the contribution of each component, while robustness tests demonstrate its resilience across varying ratios of malicious nodes. Our experimental results show that the proposed approach significantly outperforms existing state-of-the-art methods.
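The trust pipeline described above (dimensionality reduction, clustering of parameter vectors, centroid-proximity trust scores, and trust-weighted averaging) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `trust_weighted_aggregate`, the use of PCA and k-means, the inverse-distance trust formula, and the `trust_threshold` cutoff are all assumptions chosen for concreteness.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def trust_weighted_aggregate(updates, n_components=2, trust_threshold=0.2):
    """Illustrative sketch of centroid-proximity trust scoring.

    updates: (n_nodes, n_params) array of flattened layer updates.
    Returns the aggregated update and per-node trust scores.
    """
    # Reduce the dimensionality of the flattened parameter vectors.
    reduced = PCA(n_components=n_components).fit_transform(updates)

    # Cluster the reduced vectors; the largest cluster is assumed honest.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(reduced)
    main = np.bincount(km.labels_).argmax()
    centroid = km.cluster_centers_[main]

    # Trust score: inverse distance to the dominant-cluster centroid,
    # normalized so the most trusted node scores 1.0.
    dists = np.linalg.norm(reduced - centroid, axis=1)
    trust = 1.0 / (1.0 + dists)
    trust /= trust.max()

    # Exclude low-trust nodes, then trust-weighted average the rest.
    keep = trust >= trust_threshold
    weights = trust[keep] / trust[keep].sum()
    aggregated = weights @ updates[keep]
    return aggregated, trust
```

In this sketch, a node whose update sits far from the dominant cluster (e.g. a poisoned update) receives a low trust score, falls below the threshold, and contributes nothing to the aggregate; the surviving honest updates are combined in proportion to their trust.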