Computing local minimizers in polynomial optimization under genericity conditions

Abstract

In this paper, we focus on computing local minimizers of a multivariate polynomial optimization problem under certain genericity conditions. Using a technique from computer algebra and the second-order optimality condition, we provide a univariate representation for the set of local minimizers. In particular, for the unconstrained problem, i.e., the constraint set is $\mathbb{R}^n$, the coordinates of all local minimizers can be represented by the values of $n$ univariate polynomials at the real solutions of a univariate system containing a polynomial equation and a polynomial matrix inequality. We also develop the technique for problems with equality/inequality constraints. Based on the above technique, we design algorithms to enumerate the local minimizers and provide some experimental examples based on hybrid symbolic-numerical computations. For the case that the genericity conditions fail, at the end of the paper we propose a perturbation technique to compute approximately a global minimizer, provided that the constraint set is compact.

Locations

  • Journal of Global Optimization
  • arXiv (Cornell University)


Summary

This paper presents a novel approach for computing local minimizers of polynomial optimization problems, focusing on scenarios where certain genericity conditions hold. Unlike much of the existing literature, which primarily targets global minima, this work offers a method for the exact symbolic enumeration of all local minimizers, an aspect crucial for applications in fields like computational chemistry (e.g., identifying stable molecular structures) and machine learning (e.g., enumerating diverse solutions in non-convex models).

The key innovation lies in transforming the multivariate polynomial optimization problem into a univariate one under specific genericity assumptions. For unconstrained problems, the method provides a univariate representation of all local minimizers. This means that the coordinates of these minimizers are expressed as univariate polynomials evaluated at the real roots of a single polynomial equation. This representation is derived by leveraging the first and second-order optimality conditions (i.e., critical points and positive definiteness of the Hessian matrix) in conjunction with powerful tools from computer algebra. Specifically, the paper introduces a symbolic algorithm, GRULOM, that computes the radical of the gradient ideal (an ideal generated by partial derivatives of the objective function) and uses a Gröbner basis in “shape position” to achieve this univariate parameterization. A similar approach is extended to constrained problems involving equality and inequality constraints. For equality constraints, Lagrange multipliers are incorporated, forming a Lagrangian function, and the same algebraic techniques are applied to its gradient ideal. Inequality constraints are handled by introducing squared slack variables, converting them into equality constraints, which then fall under the same framework.
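To make the unconstrained pipeline concrete, the following is a minimal sketch in Python/SymPy. It assumes the genericity conditions above hold, i.e., the gradient ideal is zero-dimensional and its lexicographic Gröbner basis is already in shape position; the function name, the example polynomial, and the use of SymPy in place of the paper's hybrid symbolic-numerical toolchain are illustrative choices, not the authors' GRULOM implementation.

    import sympy as sp

    def enumerate_local_minimizers(f, xs):
        # Gradient ideal: generated by all partial derivatives of f.
        grad = [sp.diff(f, v) for v in xs]
        # Lex Groebner basis; shape position (assumed here) means it reads
        # [x1 - g1(t), ..., x_{n-1} - g_{n-1}(t), g(t)] with t the last variable.
        basis = list(sp.groebner(grad, *xs, order='lex').exprs)
        t = xs[-1]
        g = sp.Poly(basis[-1], t)              # the univariate polynomial g(t)
        # Recover the parameterizations g_i(t) from the elements x_i - g_i(t).
        params = [xs[i] - basis[i] for i in range(len(xs) - 1)]
        H = sp.hessian(f, xs)
        minimizers = []
        for r in g.real_roots():               # exact real roots of g(t)
            pt = {xs[i]: sp.simplify(p.subs(t, r)) for i, p in enumerate(params)}
            pt[t] = r
            # Second-order sufficient condition: Hessian positive definite.
            if H.subs(pt).is_positive_definite:
                minimizers.append(pt)
        return minimizers

    # Example: (x^2 - 1)^2 + (y - x)^2 has local minimizers (1, 1) and (-1, -1).
    x, y = sp.symbols('x y')
    print(enumerate_local_minimizers((x**2 - 1)**2 + (y - x)**2, [x, y]))

The whole computation thus collapses to one univariate equation $g(t) = 0$ together with a positive-definiteness test along the parameterization, which is precisely the "polynomial equation plus polynomial matrix inequality" system described in the abstract.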

For cases where the strict genericity conditions (e.g., zero-dimensionality of the gradient ideal or non-singularity of the Hessian on the critical set) are not met, leading to infinitely many local minimizers or degenerate Hessians, the paper proposes a perturbation technique. This technique aims to approximate a global minimizer by introducing a small perturbation term, ensuring that the perturbed problem satisfies the genericity conditions and can then be solved using the developed methods. The final approximate global minimizer is obtained by taking the limit as the perturbation goes to zero.
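The following toy SymPy experiment illustrates this limiting behavior on an instance where genericity fails; the degenerate objective $(x^2 + y^2 - 1)^2$, whose minimizers form the whole unit circle, and the linear perturbation $x + 2y$ are illustrative choices, not the specific perturbation term constructed in the paper.

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    f = (x**2 + y**2 - 1)**2      # minimizers = the whole unit circle (degenerate)
    g = x + 2*y                   # an illustrative "generic" linear perturbation

    for k in (1, 2, 3, 4):
        eps = sp.Rational(1, 10**k)
        # The perturbed problem has finitely many critical points, so the
        # generic machinery applies; here we just follow one of them numerically.
        sol = sp.nsolve([sp.diff(f + eps*g, x), sp.diff(f + eps*g, y)],
                        (x, y), (-0.4, -0.9))
        print(k, float(sol[0]), float(sol[1]))

    # As eps -> 0 the computed points tend to (-1/sqrt(5), -2/sqrt(5)), the
    # point of the circle that minimizes the perturbation direction, i.e.,
    # a global minimizer of the original degenerate problem.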

The methodology relies heavily on several key ingredients from commutative algebra and optimization theory. Foremost among these are the first and second-order optimality conditions, which define local minimizers. Algebraic geometry forms the computational backbone, particularly the theory of ideals and varieties (complex and real algebraic sets). The concept of zero-dimensional ideals is critical, as it ensures a finite number of critical points, enabling exact enumeration. Gröbner bases are indispensable for computing radical ideals, transforming ideals into specific forms like "shape position" (which directly yields univariate representations), and performing symbolic manipulations. The Shape Lemma is a cornerstone, providing the theoretical guarantee for obtaining such univariate representations when the critical points have pairwise distinct values in one coordinate. The use of Lagrange multipliers for equality-constrained optimization and the squared slack variable technique for inequality constraints are standard optimization tools. The paper also builds upon prior work in polynomial optimization that established genericity conditions for finite sets of critical points, and on results from perturbation theory for handling non-generic scenarios and approximating global minima.
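As a small illustration of the last two ingredients, the snippet below converts an inequality constraint into an equality via a squared slack variable and then forms the Lagrangian, so the resulting critical-point system is again polynomial; the objective, constraint, and symbol names are made up for this example.

    import sympy as sp

    x, y, lam, s = sp.symbols('x y lambda s', real=True)
    f = x**2 + y**2               # illustrative objective
    g = x + y - 1                 # inequality constraint g >= 0

    h = g - s**2                  # squared slack variable: g >= 0  <=>  h = 0
    L = f - lam*h                 # Lagrangian of the reformulated problem
    grad = [sp.diff(L, v) for v in (x, y, s, lam)]

    # The critical equations of L form a polynomial system, so the same
    # Groebner-basis machinery applies; for this tiny case sp.solve suffices.
    print(sp.solve(grad, (x, y, s, lam), dict=True))
    # -> x = y = 1/2, s = 0, lambda = 1: the active constraint with a
    #    nonnegative multiplier, i.e., the constrained minimizer.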

In this paper, we aim at computing all local minimizers of a polynomial optimization problem under genericity conditions. By using a technique in computer algebra, we provide a univariate representation for the set of local minimizers. In particular, for an unconstrained problem, the coordinates of all local minimizers can be represented by several univariate polynomial equalities and one univariate polynomial matrix inequality. We also develop the technique for constrained problems having equality constraints. Based on the above technique, we design algorithms to enumerate the local minimizers. At the end of the paper, we provide some experimental examples.
This paper reviews local and global optimality conditions in polynomial optimization. We summarize the relationship between them.
We present an efficient framework for solving constrained global non-convex polynomial optimization problems. We prove the existence of an equivalent nonlinear reformulation of such problems that possesses essentially no spurious local minima. We show through numerical experiments that polynomial scaling in dimension and degree is achievable for computing the optimal value and location of previously intractable global constrained polynomial optimization problems in high dimension.
Convexification is a core technique in global polynomial optimization. Currently, two different approaches compete in practice and in the literature. First, general approaches rooted in nonlinear programming. They are comparatively cheap from a computational point of view, but typically do not provide good (tight) relaxations with respect to bounds for the original problem. Second, approaches based on sum-of-squares and moment relaxations. They are typically computationally expensive, but do provide tight relaxations. In this paper, we embed both kinds of approaches into a unified framework of monomial relaxations. We develop a convexification strategy that allows us to trade off the quality of the bounds against computational expenses. Computational experiments show that a combination with a prototype cutting-plane algorithm gives very encouraging results.
The problem of minimizing a polynomial function in several variables over $\mathbb{R}^n$ is considered and an algorithm is given. When the polynomial has a minimum the algorithm returns the global minimum and finds at least one point in every connected component of the set of minimizers. A characterization of such points is given. When the polynomial does not have a minimum the algorithm can compute its infimum. No assumption is made on the polynomial. The algorithm can be applied for solving a system of polynomial equations.
GloptiPoly is a Matlab/SeDuMi add-on to build and solve convex linear matrix inequality relaxations of the (generally non-convex) global optimization problem of minimizing a multivariable polynomial function subject to polynomial inequality, equality or integer constraints. It generates a series of lower bounds monotonically converging to the global optimum. Global optimality is detected and isolated optimal solutions are extracted automatically. Numerical experiments show that for most of the small- and medium-scale problems described in the literature, the global optimum is reached at low computational cost.
From a theoretical viewpoint, the GPM has developments and impact in various areas of mathematics like algebra, Fourier analysis, functional analysis, operator theory, probability and statistics, to cite a few. In addition, and despite its rather simple and short formulation, the GPM has a large number of important applications in various fields like optimization, probability, mathematical finance, optimal control, control and signal processing, chemistry, crystallography, tomography, quantum computing, etc. In its full generality, the GPM is intractable numerically. However, when K is a compact basic semi-algebraic set, and the functions involved are polynomials (and in some cases piecewise polynomials or rational functions), then the situation is much nicer. Indeed, one can define a systematic numerical scheme based on a hierarchy of semidefinite programs, which provides a monotone sequence that converges to the optimal value of the GPM. (A semidefinite program is a convex optimization problem which, up to arbitrary fixed precision, can be solved in polynomial time.) Sometimes finite convergence may even occur. In the talk, we will present the semidefinite programming methodology to solve the GPM and describe in detail several applications of the GPM (notably in optimization, probability, optimal control and mathematical finance). References: [1] J.B. Lasserre, Moments, Positive Polynomials and their Applications, Imperial College Press, in press. [2] J.B. Lasserre, A semidefinite programming approach to the generalized problem of moments, Math. Prog. 112 (2008), pp. 65–92.
We consider the problem of finding the unconstrained global minimum of a real-valued polynomial $p(x)\colon \mathbb{R}^n \to \mathbb{R}$, as well as the global minimum of $p(x)$ in a compact set $K$ defined by polynomial inequalities. It is shown that this problem reduces to solving an (often finite) sequence of convex linear matrix inequality (LMI) problems. A notion of Karush–Kuhn–Tucker polynomials is introduced in a global optimality condition. Some illustrative examples are provided.
This paper proposes a method for finding the global infimum of a multivariate polynomial $f$ via sum of squares (SOS) relaxation over its truncated tangency variety. This variety is a truncation of the set of all points $x \in \mathbb{R}^n$ where the level sets of $f$ are tangent to the sphere in $\mathbb{R}^n$ centered at the origin and with radius $\|x\|$. It is demonstrated that: … These facts imply that we can find a natural sequence of semidefinite programs whose optimal values converge monotonically, increasing to the infimum of $f$. This opens up the possibility of solving previously intractable polynomial optimization problems.
Consider the polynomial optimization problem whose objective and constraints are all described by multivariate polynomials. Under some genericity assumptions, we prove that the optimality conditions always hold on optimizers, and the coordinates of optimizers are algebraic functions of the coefficients of the input polynomials. We also give a general formula for the algebraic degree of the optimal coordinates. The derivation of the algebraic degree is equivalent to counting the number of all complex critical points. As special cases, we obtain the algebraic degrees of quadratically constrained quadratic programming (QCQP), second order cone programming (SOCP), and pth order cone programming (POCP), in analogy to the algebraic degree of semidefinite programming [J. Nie, K. Ranestad, and B. Sturmfels, The algebraic degree of semidefinite programming, Math. Programm., to appear].
Let IP$\{A, c\}$ denote the family of integer programs of the form $\min\{cx : Ax = b,\ x \in \mathbb{N}^n\}$ obtained by varying the right-hand side vector $b$ but keeping $A$ and $c$ fixed. A test set for IP$\{A, c\}$ is a set of vectors in $\mathbb{Z}^n$ such that for each nonoptimal solution $\alpha$ to a program in this family, there is at least one element $g$ in this set such that $\alpha - g$ has an improved cost value as compared to $\alpha$. We describe a unique minimal test set for this family called the reduced Gröbner basis of IP$\{A, c\}$. An algorithm for its construction is presented which we call a geometric Buchberger algorithm for integer programming, and we show how an integer program may be solved using this test set. The reduced Gröbner basis is then compared with some other known test sets from the literature. We also indicate an easy procedure to construct test sets with respect to all cost functions for a matrix $A \in \mathbb{Z}^{(n-2) \times n}$ of full row rank.
In this paper a characterization of the local-nonglobal minimizers of a quadratic function defined on a Euclidean ball or a sphere is given. It is proven that there exists at most one local-nonglobal minimizer and that the Lagrange multiplier that corresponds to this minimizer is the largest solution of a nonlinear scalar equation. An algorithm is proposed for computing the local-nonglobal minimizer.
We describe the results of a series of tests for a class of new methods of trust region type for solving the simple bound constrained minimization problem. The results are encouraging and lead us to believe that the methods will prove useful in solving large-scale problems.
Jorge J. Moré, Burton S. Garbow, and Kenneth E. Hillstrom (Argonne National Laboratory). Testing Unconstrained Optimization Software. ACM Transactions on Mathematical Software, Volume 7, Issue 1, pp. 17–41, March 1981. https://doi.org/10.1145/355934.355936
Let $f, f_1, \ldots, f_{s}$ be $n$-variate polynomials with rational coefficients of maximum degree $D$ and let $V$ be the set of common complex solutions of $\mathbf{F}=(f_1,\ldots, f_{s})$. We give an algorithm which, up to some regularity assumptions on $\mathbf{F}$, computes an exact representation of the global infimum $f^\star$ of the restriction of the map $x\to f(x)$ to ${V\cap\mathbb{R}^n}$, i.e., a univariate polynomial vanishing at $f^\star$ and an isolating interval for $f^\star$. Furthermore, it decides whether $f^\star$ is reached, and if so, it returns $x^\star\in V\cap\mathbb{R}^n$ such that $f(x^\star)=f^\star$. This algorithm is probabilistic. It makes use of the notion of polar varieties. Its complexity is essentially cubic in $(s D)^n$ and linear in the complexity of evaluating the input. This fits within the best known deterministic complexity class $D^{O(n)}$. We report on some practical experiments of a first implementation that is available as a Maple package. It appears that it can tackle global optimization problems that were unreachable by previous exact algorithms and can manage instances that are hard to solve with purely numeric techniques. As far as we know, even under the extra genericity assumptions on the input, it is the first probabilistic algorithm that combines practical efficiency with good control of complexity for this problem.
Let $f$ be a polynomial in $\mathbb{Q}[X_1, \ldots, X_n]$ of degree $D$. We provide an efficient algorithm in practice to compute the global supremum $\sup_{x \in \mathbb{R}^n} f(x)$ of $f$ (or its infimum $\inf_{x \in \mathbb{R}^n} f(x)$). The complexity of our method is bounded by $D^{O(n)}$. In a probabilistic model, a more precise result yields a complexity bounded by $O(n^7 D^{4n})$ arithmetic operations in $\mathbb{Q}$. Our implementation is more efficient by several orders of magnitude than previous ones based on quantifier elimination. Sometimes, it can tackle problems that numerical techniques do not reach. Our algorithm is based on the computation of generalized critical values of the mapping $x \mapsto f(x)$, i.e., the set of points $\{c \in \mathbb{C} \mid \exists\, (x_\ell)_{\ell \in \mathbb{N}} \subset \mathbb{C}^n,\ f(x_\ell) \to c,\ \|x_\ell\| \cdot \|d_{x_\ell} f\| \to 0 \text{ when } \ell \to \infty\}$. We prove that the global optimum of $f$ lies in its set of generalized critical values and provide an efficient way of deciding which value is the global optimum.
We propose a new algorithm for solving integer programming (IP) problems that is based on ideas from algebraic geometry. The method provides a natural generalization of the Farkas lemma for IP, leads to a way of performing sensitivity analysis, offers a systematic enumeration of all feasible solutions, and gives structural information of the feasible set of a given IP. We provide several examples that offer insights on the algorithm and its properties.
We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NP-hard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory of sums of squares of polynomials. We present these hierarchies of approximations and their main properties: asymptotic/finite convergence, optimality certificate, and extraction of global optimum solutions. We review the mathematical tools underlying these properties, in particular, some sums of squares representation results for positive polynomials, some results about moment matrices (in particular, of Curto and Fialkow), and the algebraic eigenvalue method for solving zero-dimensional systems of polynomial equations. We try whenever possible to provide detailed proofs and background.
Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of the supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite program (SDP) relaxations are obtained. Numerical results from various test problems are included to show the improved performance of the SOS and SDP relaxations.
Simulating phase transformation of materials at the atomistic scale requires the knowledge of saddle points on the potential energy surface (PES). In the existing first-principles saddle point search methods, the requirement of a large number of expensive evaluations of potential energy, e.g. using density functional theory (DFT), limits the application of such algorithms to large systems. Thus, it is meaningful to minimize the number of functional evaluations as DFT simulations during the search process. Furthermore, model-form uncertainty and numerical errors are inherent in DFT and search algorithms. Robustness of the search results should be considered. In this paper, a new search algorithm based on Kriging is presented to search local minima and saddle points on a PES efficiently and robustly. Different from existing searching methods, the algorithm keeps a memory of searching history by constructing surrogate models and uses the search results on the surrogate models to provide the guidance of future search on the PES. The surrogate model is also updated with more DFT simulation results. The algorithm is demonstrated by the examples of Rastrigin and Schwefel functions with a multitude of minima and saddle points.
In constrained nonlinear optimization, the squared slack variables can be used to transform a problem with inequality constraints into a problem containing only equality constraints. This reformulation is usually not considered in the modern literature, mainly because of possible numerical instabilities. However, this argument only concerns the development of algorithms, and nothing stops us from using the strategy to understand the theory behind these optimization problems. In this note, we clarify the relation between the Karush-Kuhn-Tucker points of the original and the reformulated problems. In particular, we stress that the second-order sufficient condition is the key to establish their equivalence.
In this paper we study genericity for the class of semialgebraic optimization problems with equality and inequality constraints, in which every problem of the class is obtained by linear perturbations of the objective function, while the "core" objective function and the constraint functions are kept fixed. Assume that the linear independence constraint qualification is satisfied at every point in the constraint set. It is shown that almost all problems in the class are such that (i) the restriction of the objective function on the constraint set is coercive and regular at infinity; (ii) there is a unique optimal solution, lying on a unique active manifold, at which the strict complementarity and second order sufficiency conditions, the quadratic growth condition, and the Hölder type global error bound hold, and (iii) all minimizing sequences converge. Furthermore, the active manifold is constant, and the optimal solution and the optimal value function depend analytically under local perturbations of the objective function. These results are combined with a standard result about the existence of sums of squares certificates to prove that we can build a sequence of semidefinite programs whose solutions give rise to a sequence of points converging to the optimal solution of the original problem in finitely many steps. It is worth emphasizing that the results of this study hold globally and we do not require the problem to be convex or the constraint set to be compact.
We compare algorithms for global optimization of polynomial functions in many variables. It is demonstrated that existing algebraic methods (Gröbner bases, resultants, homotopy methods) are dramatically outperformed by a relaxation technique, due to N. Z. Shor and the first author, which involves sums of squares and semidefinite programming. This opens up the possibility of using semidefinite programming relaxations arising from the Positivstellensatz for a wide range of computational problems in real algebraic geometry.
This work proposes a new moment-SOS hierarchy, called CS-TSSOS, for solving large-scale sparse polynomial optimization problems. Its novelty is to exploit simultaneously correlative sparsity and term sparsity by combining advantages of two existing frameworks for sparse polynomial optimization. The former is due to Waki et al. [40] while the latter was initially proposed by Wang et al. [42] and later exploited in the TSSOS hierarchy [46, 47]. In doing so we obtain CS-TSSOS, a two-level hierarchy of semidefinite programming relaxations with (i) the crucial property to involve blocks of SDP matrices and (ii) the guarantee of convergence to the global optimum under certain conditions. We demonstrate its efficiency and scalability on several large-scale instances of the celebrated Max-Cut problem and the important industrial optimal power flow problem, involving up to six thousand variables and tens of thousands of constraints.
Consider a semialgebraic function $f\colon\mathbb{R}^n \to {\mathbb{R}},$ which is continuous around a point $\bar{x} \in \mathbb{R}^n.$ Using the so-called tangency variety of $f$ at $\bar{x},$ we first provide necessary and sufficient conditions for $\bar{x}$ to be a local minimizer of $f,$ and then in the case where $\bar{x}$ is an isolated local minimizer of $f,$ we define a "tangency exponent" $\alpha_* > 0$ so that for any $\alpha \in \mathbb{R}$ the following four conditions are always equivalent: (i) the inequality $\alpha \ge \alpha_*$ holds, (ii) the point $\bar{x}$ is an $\alpha$th order sharp local minimizer of $f$, (iii) the limiting subdifferential $\partial f$ of $f$ is $(\alpha - 1)$th order strongly metrically subregular at $\bar{x}$ for 0, and (iv) the function $f$ satisfies the Łojasiewicz gradient inequality at $\bar{x}$ with the exponent $1 - \frac{1}{\alpha}.$ Besides, we also present a counterexample to a conjecture posed by Drusvyatskiy and Ioffe [Math. Program. Ser. A, 153 (2015), pp. 635--653].
Assessing nonnegativity of multivariate polynomials over the reals, through the computation of certificates of nonnegativity, is a topical issue in polynomial optimization. This is usually tackled through the computation of sum of squares decompositions which rely on efficient numerical solvers for semidefinite programming. This method faces two difficulties. The first one is that the certificates obtained this way are approximate and hence nonexact. The second one is due to the fact that not all nonnegative polynomials are sums of squares. In this paper, we build on previous works by Parrilo and Nie, Demmel, and Sturmfels who introduced certificates of nonnegativity modulo gradient ideals. We prove that, actually, such certificates can be obtained exactly over the rationals if the polynomial under consideration has rational coefficients, and we provide exact algorithms to compute them. We analyze the bit complexity of these algorithms and deduce bitsize bounds of such certificates.
The potential energy surface (PES) describes the energy of a chemical system as a function of its geometry and is a fundamental concept in modern chemistry. A PES provides much useful information about the system, including the structures and energies of various stationary points, such as stable conformers (local minima) and transition states (first-order saddle points) connected by a minimum-energy path. Our group has previously produced surrogate reduced-dimensional PESs using sparse interpolation along chemically significant reaction coordinates, such as bond lengths, bond angles, and torsion angles. These surrogates used a single interpolation basis, either polynomials or trigonometric functions, in every dimension. However, relevant molecular dynamics (MD) simulations often involve some combination of both periodic and nonperiodic coordinates. Using a trigonometric basis on nonperiodic coordinates, such as bond lengths, leads to inaccuracies near the domain boundary. Conversely, polynomial interpolation on the periodic coordinates does not enforce the periodicity of the surrogate PES gradient, leading to nonconservation of total energy even in a microcanonical ensemble. In this work, we present an interpolation method that uses trigonometric interpolation on the periodic reaction coordinates and polynomial interpolation on the nonperiodic coordinates. We apply this method to MD simulations of possible isomerization pathways of azomethane between cis and trans conformers. This method is the only known interpolative method that appropriately conserves total energy in systems with both periodic and nonperiodic reaction coordinates. In addition, compared to all-polynomial interpolation, the mixed basis requires fewer electronic structure calculations to obtain a given level of accuracy, is an order of magnitude faster, and is freely available on GitHub.
The Rosenbrock function is a well-known benchmark for numerical optimization problems, which is frequently used to assess the performance of Evolutionary Algorithms. The classical Rosenbrock function, which is a two-dimensional unimodal function, has been extended to higher dimensions in recent years. Many researchers take the high-dimensional Rosenbrock function as a unimodal function by instinct. In 2001 and 2002, Hansen and Deb found that the Rosenbrock function is not a unimodal function for higher dimensions although no theoretical analysis was provided. This paper shows that the n-dimensional (n = 4 ∼ 30) Rosenbrock function has 2 minima, and analysis is proposed to verify this. The local minima in some cases are presented. In addition, this paper demonstrates that one of the "local minima" for the 20-variable Rosenbrock function found by Deb might not in fact be a local minimum.
Consider a polynomial optimization problem. Adding polynomial equations generated by the Fritz John conditions to the constraint set does not change the optimal value. As proved in [arXiv:2205.04254 (2022)], the objective polynomial has finitely many values on the new constraint set under some genericity assumption. Based on this, we provide an algorithm that allows us to compute exactly this optimal value. Our method depends on the computations of real radical generators and Gröbner basis. Finally, we apply our method to solve some instances of mathematical program with complementarity constraints.