Machine Learning (ML) is poised to play a pivotal role in the development and operation of next-generation fusion devices. Fusion data exhibits non-stationary behavior with distribution drifts, driven both by experimental evolution and by machine wear-and-tear. Conventional ML models assume a stationary distribution and fail to maintain performance when confronted with such non-stationary data streams. Online learning techniques have been leveraged in other domains; however, they remain largely unexplored for fusion applications. In this paper, we investigate online learning for continuous adaptation to drifting data streams in the prediction of Toroidal Field (TF) coil deflection at the DIII-D fusion facility. We further address the short-term performance degradation inherent to standard online learning, which arises because ground truth is unavailable at prediction time. To mitigate this issue, we propose an uncertainty-guided online ensemble framework. The method leverages the Deep Gaussian Process Approximation (DGPA) for calibrated uncertainty estimation and uses these uncertainty measures to guide a meta-algorithm that aggregates predictions from learners trained over different historical horizons. Our results show that online learning reduces prediction error by 80% compared to a static model. The online ensemble and the proposed uncertainty-guided ensemble further reduce error by approximately 6% and 10%, respectively, relative to standard single-model online learning, while also providing calibrated uncertainty estimates to support operational decision-making.

• First application of online learning to non-stationary data streams in fusion science.
• Online adaptation significantly reduces ML error on drifting fusion tokamak data.
• Provides actionable uncertainty estimates for operational decision-making.
• Novel uncertainty-guided online ensemble method leveraging Deep Gaussian Process Approximation (DGPA) provides calibrated uncertainty and adaptive prediction.
• Significantly better performance than standard online learning and naïve ensembles.
Published in: Machine Learning with Applications
Volume 24, Article 100894
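The abstract describes a meta-algorithm that uses calibrated uncertainty to weight predictions from learners trained over different historical horizons. As a minimal illustrative sketch (not the paper's actual DGPA-based method), one common uncertainty-guided aggregation rule is inverse-variance weighting, assuming each learner reports a predictive mean and standard deviation:

```python
import numpy as np

def aggregate_predictions(means, stds):
    """Inverse-variance weighted ensemble aggregation (illustrative sketch).

    Hypothetical stand-in for an uncertainty-guided meta-algorithm:
    learners reporting lower predictive uncertainty receive higher weight.
    The paper's method uses DGPA-calibrated uncertainties; here we simply
    assume each learner supplies a mean and a standard deviation.
    """
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    weights = 1.0 / stds**2          # inverse-variance weights
    weights /= weights.sum()         # normalize to sum to 1
    mean = float(weights @ means)
    # Variance of the combination, assuming independent learners
    var = float(weights**2 @ stds**2)
    return mean, float(np.sqrt(var))

# Example: three learners trained over different historical horizons
# (all numbers are hypothetical, in arbitrary deflection units)
means = [1.2, 1.0, 1.5]   # each learner's predicted TF-coil deflection
stds  = [0.1, 0.3, 0.5]   # each learner's predictive uncertainty
pred, unc = aggregate_predictions(means, stds)
# pred is pulled toward 1.2 (the most confident learner), and the
# combined uncertainty is below that of any single learner.
```

The design choice here is that weighting by inverse variance lets the ensemble lean on whichever horizon is currently well-calibrated, which is one way the short-term degradation of a single online model can be damped.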