Federated learning (FL) enables collaborative model training across distributed devices and robotic systems while preserving data privacy, making it well suited for swarm robotics and edge-device intelligence. However, FL remains vulnerable to adversarial behaviors such as data and model poisoning, particularly in real-world deployments where detection methods must operate under strict computational and communication constraints. This paper presents a practical federated learning framework that improves robustness to adversarial agents in a ROS2-based edge-device swarm environment. The proposed system integrates the Federated Averaging (FedAvg) algorithm with a lightweight average-cosine-similarity filtering method that detects and suppresses harmful model updates during aggregation. Unlike prior work that evaluates poisoning defenses primarily in simulation, this framework is implemented and evaluated on physical hardware consisting of a laptop-based aggregator and multiple Raspberry Pi worker nodes. A convolutional neural network (CNN) based on the MobileNetV3-Small architecture is trained on the MNIST dataset, with one worker executing a sign-flipping model poisoning attack. Experimental results show that FedAvg alone fails to maintain meaningful accuracy under adversarial conditions, degrading to near-random classification with a final global model accuracy of 11% and a loss of 2.3. In contrast, adding cosine similarity filtering effectively detects the sign-flipping model poisoning in the evaluated ROS2 swarm experiment, allowing the global model to maintain an accuracy of around 90% and a loss of around 0.37 despite the presence of an attacker, close to the 93% accuracy of the no-attack FedAvg baseline and with only a minimal increase in loss.
The proposed method also maintains a false positive rate (FPR) of around 0.01 and a false negative rate (FNR) of around 0.10 for the global model in the presence of an attacker, a minimal difference from the baseline FedAvg-only results of around 0.008 (FPR) and 0.07 (FNR). Additionally, the proposed FedAvg + cosine similarity filtering method matches the computational profile of baseline FedAvg with no attacker: average runtime increases only from about 34 min (baseline) to about 35 min (proposed), and the average size of the global model shared among workers remains consistent at around 7.15 megabytes, showing little to no increase in message payload size. These results demonstrate that computationally lightweight cosine-similarity-based detection can be deployed effectively in real-world, resource-constrained robotic swarm environments, providing a practical path toward improving the robustness of federated learning deployments beyond simulation-based evaluation.
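The filtering idea described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it scores each flattened worker update by its average cosine similarity to the other updates, drops updates whose score falls below a threshold (here 0.0, a hypothetical choice), and averages the survivors as in FedAvg. A sign-flipped update points roughly opposite to the honest ones, so its average similarity is strongly negative and it is filtered out.

```python
import numpy as np

def avg_cosine_filter_fedavg(updates, threshold=0.0):
    """Sketch of average-cosine-similarity filtering before FedAvg.

    Each update is a flattened parameter-delta vector. An update is kept
    only if its mean cosine similarity to the other updates exceeds the
    threshold; the kept updates are then averaged (FedAvg).
    """
    # Normalize each update so dot products become cosine similarities.
    U = np.stack([u / (np.linalg.norm(u) + 1e-12) for u in updates])
    sims = U @ U.T                       # pairwise cosine similarities
    n = len(updates)
    # Average similarity to the *other* updates (exclude self-similarity of 1).
    scores = (sims.sum(axis=1) - 1.0) / (n - 1)
    keep = scores > threshold
    aggregated = np.mean(np.stack(updates)[keep], axis=0)
    return aggregated, keep

# Toy round: four honest workers push similar updates; one attacker
# performs a sign-flipping attack by negating its update.
rng = np.random.default_rng(0)
base = rng.normal(size=100)
honest = [base + 0.1 * rng.normal(size=100) for _ in range(4)]
attacker = [-base]                       # sign-flipped poisoned update
agg, keep = avg_cosine_filter_fedavg(honest + attacker)
print(keep)                              # last entry (attacker) is False
```

The cost is one pairwise similarity matrix per round, which stays cheap for small swarms and is consistent with the near-baseline runtime reported above.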