A sub-sampled tensor method for nonconvex optimization

Abstract

A significant theoretical advantage of high-order optimization methods is their superior convergence guarantees. For instance, third-order regularized methods reach an $(\epsilon_1, \epsilon_2, \epsilon_3)$ third-order critical point in at most ${\mathcal{O}}\bigl(\max\bigl(\epsilon_1^{-4/3}, \epsilon_2^{-2}, \epsilon_3^{-4}\bigr)\bigr)$ iterations. However, the cost of computing high-order derivatives is …
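
For orientation, here is a sketch of what such a critical point typically means, following common definitions from the high-order regularization literature; this is an assumption for illustration, since the abstract is truncated before the paper's own definitions appear.

```latex
% Sketch of an (\epsilon_1, \epsilon_2, \epsilon_3) third-order critical point,
% using the standard first- and second-order conditions (assumed here; the
% paper's exact third-order condition is not visible in the truncated abstract):
\[
  \underbrace{\|\nabla f(x)\| \le \epsilon_1}_{\text{first order}}
  \qquad \text{and} \qquad
  \underbrace{\lambda_{\min}\!\bigl(\nabla^2 f(x)\bigr) \ge -\epsilon_2}_{\text{second order}},
\]
% together with a third-order condition of tolerance \epsilon_3, typically a
% bound involving the third-derivative tensor \nabla^3 f(x) restricted to
% directions where the second-order condition is (near-)active.
```

The three exponents in the bound are consistent with the general $\mathcal{O}\bigl(\epsilon_i^{-(p+1)/(p+1-i)}\bigr)$ rate known for $p$th-order regularized methods: with $p = 3$ this gives $\epsilon_1^{-4/3}$, $\epsilon_2^{-2}$, and $\epsilon_3^{-4}$ for the first-, second-, and third-order tolerances respectively.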