Accelerating computational simulations remains an active area of research, pursued from multiple directions across the CFD and numerical analysis communities. In all cases, the ability to generate results more rapidly directly strengthens the role of simulations in time-critical engineering workflows. One promising approach focuses on reducing the cost of core components of the computation, particularly the repeated solution of nonlinear algebraic systems whose linearizations yield large sparse linear systems. Computational efficiency can be improved by initializing both the nonlinear and linear solver iterations with estimates that are closer to the eventual solution, thereby reducing the number of iterations required for convergence and shortening the overall time to solution. In the present work, this acceleration is achieved by leveraging data already available in memory during a simulation to train a neural network capable of predicting updates for the nonlinear and linear solvers. These predictions are then used to initialize the solvers. A critical requirement, however, is that the machine learning components themselves must execute in less time than the savings they provide. This balance can be maintained by training a lightweight neural network to achieve sufficient, but not necessarily high, accuracy. Such models are smaller, faster to train, and still effective at improving solver convergence. The approach demonstrated here employs in-situ graph neural network training and inference to initialize both nonlinear and linear solvers. Results from one-dimensional and two-dimensional test cases show improved convergence behavior and, importantly, an overall reduction in total simulation time.
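The warm-starting idea summarized above can be illustrated with a minimal toy sketch. The code below is not the paper's method: it solves a sequence of small component-wise nonlinear systems with Newton's method and, once enough (parameter, solution) pairs have accumulated "in memory," fits a linear least-squares surrogate as a deliberately crude stand-in for the in-situ graph neural network, using its prediction as the solver's initial guess. The system, dimensions, and surrogate are all hypothetical choices for illustration; the point is only that a cheap data-driven initial guess reduces the iteration count relative to a cold start.

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Basic Newton iteration; returns (solution, iterations used)."""
    x = np.array(x0, dtype=float)
    for k in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x, k
        x -= np.linalg.solve(J(x), r)  # linear solve at each nonlinear step
    return x, max_iter

# Toy component-wise nonlinear system F(x) = x^3 - b (solution x_i = b_i^(1/3)).
n = 8
def F_of(b): return lambda x: x**3 - b
def J(x): return np.diag(3.0 * x**2)

rng = np.random.default_rng(0)
Bs, Xs = [], []                 # (parameter, solution) pairs kept "in memory"
cold_iters, warm_iters = [], []

# "Simulation loop": solve for slowly drifting right-hand sides b.
for step in range(30):
    b = 2.0 + 0.05 * step + 0.01 * rng.standard_normal(n)

    # Cold start: generic initial guess, no use of past solves.
    _, k_cold = newton(F_of(b), J, np.ones(n))
    cold_iters.append(k_cold)

    # Warm start: once enough history exists, fit x ~ b @ W by least
    # squares on past solves (stand-in for the GNN surrogate) and use
    # the prediction to initialize Newton.
    if len(Bs) >= 2 * n:
        W, *_ = np.linalg.lstsq(np.array(Bs), np.array(Xs), rcond=None)
        x0 = b @ W
    else:
        x0 = np.ones(n)
    x, k_warm = newton(F_of(b), J, x0)
    warm_iters.append(k_warm)

    Bs.append(b)
    Xs.append(x)

print(f"mean Newton iters: cold={np.mean(cold_iters):.2f} "
      f"warm={np.mean(warm_iters):.2f}")
```

As in the paper's premise, the surrogate only needs to be accurate enough to land inside Newton's fast-convergence basin; because its training and inference cost is negligible here, the saved iterations translate directly into saved work.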
DOI: 10.2514/6.2026-0494