Cross-institutional collaboration in privacy-sensitive domains such as healthcare and finance requires machine learning frameworks that balance model utility, privacy protection, and communication efficiency. Federated learning (FL) enables decentralized model training without direct data sharing, yet existing approaches inadequately address vulnerabilities in Trusted Execution Environments (TEEs), which are increasingly adopted to safeguard local computations. TEE side-channel attacks (e.g., cache-timing leaks, speculative execution exploits) can expose sensitive gradient information even when cryptographic defenses are deployed. Furthermore, traditional FL methods treat privacy and communication as independent objectives, leading to suboptimal tradeoffs when both constraints are active. This paper proposes Confidential Computing-Aware Projected Gradient Descent (CC-PGD), a constrained multi-objective optimization framework that jointly minimizes model loss, privacy leakage risk (incorporating TEE vulnerability modeling), and communication overhead. We formulate privacy risk as a combination of gradient entropy and a binary indicator function for TEE exploit susceptibility, while communication cost accounts for model size and network latency. We prove that CC-PGD achieves $O(1/\sqrt{T})$ convergence under non-convex objectives with Lipschitz-continuous gradients. Experiments on MNIST and CIFAR-10 under IID and non-IID data partitioning demonstrate that CC-PGD reduces privacy leakage by 23–31% and communication cost by 18–27% compared to baselines (FedAvg, DP-FL, FedProx), while maintaining competitive accuracy (within 2% of centralized training). Our work provides the first optimization framework explicitly accounting for TEE side-channel risks in federated learning, with theoretical guarantees and empirical validation.
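To make the composite objective concrete, the following is a minimal sketch of one CC-PGD-style update in NumPy. It is an assumed instantiation, not the paper's exact rule: `gradient_entropy`, the risk-adaptive clipping factor, the L1 subgradient standing in for communication cost, the L2-ball feasible set, and all weights (`lam_priv`, `lam_comm`, `radius`) are illustrative choices. Only the abstract's stated structure is taken as given: privacy risk = gradient entropy + binary TEE-exploit indicator, plus a communication term, minimized by projected gradient descent.

```python
import numpy as np

def gradient_entropy(g, eps=1e-12):
    """Shannon entropy of normalized gradient magnitudes -- a proxy for
    how much information the shared gradient could leak."""
    p = np.abs(g) / (np.abs(g).sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

def privacy_risk(g, tee_exploitable):
    # Abstract's risk form: gradient entropy plus a binary indicator
    # for TEE side-channel susceptibility (equal weighting is assumed).
    return gradient_entropy(g) + (1.0 if tee_exploitable else 0.0)

def cc_pgd_step(w, grad_loss, tee_exploitable,
                lr=0.1, lam_priv=0.05, lam_comm=0.01, radius=5.0):
    """One illustrative CC-PGD update: damp the loss gradient when
    privacy risk is high, add a sparsity subgradient as a stand-in for
    communication cost, then project onto an L2 ball (the 'P' in PGD)."""
    risk = privacy_risk(grad_loss, tee_exploitable)
    # Risk-adaptive damping: higher leakage risk => smaller effective step.
    damp = 1.0 / (1.0 + lam_priv * risk)
    g = damp * grad_loss + lam_comm * np.sign(w)  # L1 subgradient ~ model size
    w_new = w - lr * g
    # Projection onto the feasible set {w : ||w||_2 <= radius}.
    norm = np.linalg.norm(w_new)
    if norm > radius:
        w_new = w_new * (radius / norm)
    return w_new

# Toy usage: run 100 steps on the quadratic loss 0.5*||w - w_star||^2,
# whose gradient is simply (w - w_star).
w_star = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
for _ in range(100):
    w = cc_pgd_step(w, grad_loss=w - w_star, tee_exploitable=True)
print(np.round(w, 2))  # close to w_star, slightly shrunk by the L1 term
```

In this sketch the binary TEE indicator enters only through the damping factor, since the indicator itself does not depend on the model parameters; in the full framework it would be supplied by a vulnerability model of the deployment's TEE.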