We propose a new family of multilevel methods for unconstrained minimization. The resulting strategies are multilevel extensions of high-order optimization methods based on q-order Taylor models (with q >= 1) that have recently been proposed in the literature. The use of high-order models, while decreasing the worst-case complexity bound, makes these methods computationally more expensive. To counteract this effect, we propose a multilevel strategy that exploits a hierarchy of problems of decreasing dimension, each still approximating the original one, to reduce the global cost of the step computation. A theoretical analysis of the family of methods is provided: local and global convergence results are proved, and a complexity bound to reach first-order stationary points is derived. A multilevel version of the well-known adaptive method based on cubic regularization (ARC, corresponding to q = 2 in our setting) has been implemented. Numerical experiments clearly highlight the relevance of the new multilevel approach, which leads to considerable computational savings in terms of floating-point operations compared with the classical one-level strategy.
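To make the q = 2 case concrete, the sketch below illustrates a generic single-level adaptive cubic-regularization (ARC-style) iteration: at each step a cubic-regularized quadratic model is approximately minimized, and the regularization weight sigma is adapted depending on whether the step decreases the objective. This is only a minimal illustration under simplifying assumptions (the subproblem is solved with a general-purpose BFGS call, and the sigma update rule is a hypothetical simple one); it is not the paper's multilevel algorithm, only the basic one-level mechanism it extends.

```python
import numpy as np
from scipy.optimize import minimize

def arc_step(g, H, sigma):
    # Approximately minimize the cubic-regularized Taylor model
    #   m(s) = g^T s + 0.5 s^T H s + (sigma/3) ||s||^3
    # using a generic solver (a real ARC code would use a dedicated
    # subproblem solver; this is only an illustrative shortcut).
    m = lambda s: g @ s + 0.5 * s @ H @ s + (sigma / 3.0) * np.linalg.norm(s) ** 3
    s0 = -g / max(np.linalg.norm(H), 1.0)  # crude scaled-steepest-descent start
    return minimize(m, s0, method="BFGS").x

def arc_minimize(f, grad, hess, x0, sigma=1.0, tol=1e-6, max_iter=100):
    # Hypothetical adaptive loop: accept a step if f decreases,
    # otherwise inflate the regularization weight sigma.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = arc_step(g, hess(x), sigma)
        if f(x + s) < f(x):
            x = x + s
            sigma = max(sigma / 2.0, 1e-3)  # successful step: relax regularization
        else:
            sigma *= 2.0                    # unsuccessful step: tighten it
    return x

# Usage on a strictly convex quadratic, whose minimizer is known in closed form.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = arc_minimize(f, grad=lambda x: A @ x - b, hess=lambda x: A, x0=np.zeros(2))
```

For a quadratic objective the accepted steps approach the Newton step as sigma shrinks, so the iterates converge quickly to the unique minimizer A^{-1} b; the multilevel strategy of the paper replaces the (expensive) fine-level subproblem minimization with cheaper coarse-level corrections.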