With the growing demand for trustworthy multi-party data sharing, federated learning has demonstrated broad potential in cross-entity collaborative modeling. However, it still faces challenges such as insufficient participant engagement, inaccurate contribution assessment, and the lack of dynamic profit-sharing mechanisms. Traditional incentive schemes, which typically rely on game-theoretic models or static rules, struggle to accommodate dynamic client participation and heterogeneous data distributions, thereby degrading the convergence efficiency and generalization performance of the global model. To address these issues, we propose a budget-aware, closed-loop incentive allocation mechanism for federated learning based on the deep deterministic policy gradient (DDPG) algorithm. The proposed approach constructs a DDPG-driven closed-loop framework in which the server manages system states, incentive decisions, and model aggregation, while clients autonomously adjust their data contribution levels. By formulating incentive allocation as a sequential decision-making problem, the mechanism jointly optimizes the policy and value functions. A permutation method is introduced to make the incentive policy invariant to client ordering, and an Ornstein–Uhlenbeck process is employed to enhance exploration, thereby improving the adaptiveness and overall effectiveness of incentive allocation. Experimental results show that the proposed method significantly increases cumulative rewards and improves client data-sharing rates in high-dimensional dynamic environments. Compared with traditional fixed incentive schemes, the mechanism demonstrates clear advantages in adaptiveness, incentive effectiveness, and model performance.
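As a rough illustration of two components named above, the Python/NumPy sketch below shows an Ornstein–Uhlenbeck exploration process and one common way to obtain a client-order-invariant state summary (pooling over the client axis). The class and function names (`OUNoise`, `encode_clients`), the pooling choice, and all parameter values are illustrative assumptions; the paper's actual permutation method, network architecture, and hyperparameters are not specified here.

```python
import numpy as np


class OUNoise:
    """Ornstein-Uhlenbeck process producing temporally correlated exploration noise.

    Parameter values (theta, sigma, dt) are hypothetical defaults, not the paper's settings.
    """

    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.mu = mu * np.ones(dim)
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.rng = np.random.default_rng(seed)
        self.state = self.mu.copy()

    def reset(self):
        self.state = self.mu.copy()

    def sample(self):
        # Euler discretization: x_{t+1} = x_t + theta*(mu - x_t)*dt + sigma*sqrt(dt)*N(0, I)
        self.state = (
            self.state
            + self.theta * (self.mu - self.state) * self.dt
            + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.state.shape)
        )
        return self.state


def encode_clients(client_features: np.ndarray) -> np.ndarray:
    """Order-invariant summary of per-client features.

    Hypothetical choice: concatenate mean- and max-pooled statistics over the
    client axis, so permuting the client rows leaves the encoding unchanged.
    """
    return np.concatenate([client_features.mean(axis=0), client_features.max(axis=0)])


# Usage sketch: perturb a (placeholder) actor output with OU noise to explore incentives.
noise = OUNoise(dim=4)
state = encode_clients(np.random.rand(4, 3))      # 4 clients, 3 features each
raw_action = np.zeros(4)                          # stand-in for the actor network's output
exploratory_action = np.clip(raw_action + noise.sample(), 0.0, 1.0)
```

Mean/max pooling is only one of several order-invariant encodings (sorting or attention-based pooling would also work); the point is that the server's DDPG policy should not depend on which position a client happens to occupy in the state vector.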