Abstract

Federated Learning (FL) enables collaborative model training without exposing sensitive data, making it a cornerstone for privacy-aware AI. However, deploying FL in practice across multi-tier, hierarchical edge-fog-cloud architectures remains difficult, challenged by security vulnerabilities, resource constraints, and system heterogeneity. This paper reviews secure and adaptive optimization techniques that address these challenges in hierarchical environments. We focus on three main areas: (1) security-enhanced FL, covering threat models, adversarial attacks, and defenses tailored to layered architectures; (2) adaptive optimization, including dynamic client selection, resource-aware aggregation, and context-sensitive privacy management; and (3) systems integration, examining architectural designs, communication protocols, and orchestration methods for scalable deployment. By synthesizing recent work, the survey highlights trade-offs between privacy, performance, and scalability, and proposes a taxonomy of current challenges and solutions. It also evaluates approaches to trustworthiness in FL, including fairness, accountability, and robustness, while considering practical issues such as regulation, benchmarking, and deployment constraints. The paper concludes with open research directions, emphasizing the need for secure, adaptive, and production-ready FL systems.
Published in: Journal of King Saud University - Computer and Information Sciences