This paper is an attempt to motivate and justify quasi-Newton methods as useful modifications of Newton's method for general and gradient nonlinear systems of equations. References are given to ample numerical justification; here we give an overview of many of the important theoretical results, and each is accompanied by sufficient discussion to make the results and hence the methods plausible.
Jorge J. Moré, Burton S. Garbow, and Kenneth E. Hillstrom (Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL). Testing Unconstrained Optimization Software. ACM Transactions on Mathematical Software, Volume 7, Issue 1, pp. 17–41. https://doi.org/10.1145/355934.355936. Published 1 March 1981.
We propose an algorithm for the problem of minimizing a quadratic function subject to an ellipsoidal constraint and show that this algorithm is guaranteed to produce a nearly optimal solution in a finite number of iterations. We also consider the use of this algorithm in a trust region Newton's method. In particular, we prove that under reasonable assumptions the sequence generated by Newton's method has a limit point which satisfies the first and second order necessary conditions for a minimizer of the objective function. Numerical results for GQTPAR, which is a Fortran implementation of our algorithm, show that GQTPAR is quite successful in a trust region method. In our tests a call to GQTPAR only required 1.6 iterations on the average.
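To make the subproblem concrete: the core of methods of this kind is a safeguarded Newton iteration on the multiplier λ of the constraint, using Cholesky factorizations of H + λI. The following Python sketch shows the basic iteration for min gᵀp + ½pᵀHp subject to ||p|| ≤ Δ; it assumes the easy case where H + λI stays positive definite and omits the hard-case safeguards that GQTPAR itself provides. The function name and interface are illustrative, not GQTPAR's actual interface.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve, solve_triangular

    def trust_region_subproblem(H, g, delta, tol=1e-8, max_iter=50):
        # Sketch of a More-Sorensen-style iteration for
        #   min g'p + 0.5 p'Hp  subject to  ||p|| <= delta.
        # Easy case only: no handling of the "hard case" where g is
        # orthogonal to the eigenspace of the smallest eigenvalue of H.
        n = len(g)
        lam = 0.0
        p = -g
        for _ in range(max_iter):
            try:
                c, low = cho_factor(H + lam * np.eye(n))
            except np.linalg.LinAlgError:
                lam = 2.0 * lam + 1e-3   # crude shift; GQTPAR safeguards carefully
                continue
            p = cho_solve((c, low), -g)
            pnorm = np.linalg.norm(p)
            if lam == 0.0 and pnorm <= delta:
                return p                 # unconstrained minimizer is feasible
            if abs(pnorm - delta) <= tol * delta:
                return p                 # boundary solution to tolerance
            # Newton step on phi(lam) = 1/delta - 1/||p(lam)||
            L = np.tril(c) if low else np.triu(c).T
            q = solve_triangular(L, p, lower=True)
            lam = max(lam + (pnorm / np.linalg.norm(q))**2
                      * (pnorm - delta) / delta, 0.0)
        return p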
Let F be a mapping from real n-dimensional Euclidean space into itself. Most practical algorithms for finding a zero of F are of the form \[ x_{k+1} = x_k - B_k^{-1} F x_k, \] where $\{B_k\}$ is a sequence of nonsingular matrices. The main result of this paper is a characterization theorem for the superlinear convergence to a zero of F of sequences of the above form. This result is then used to give a unified treatment of the results on the superlinear convergence of the Davidon-Fletcher-Powell method obtained by Powell for the case in which exact line searches are used, and by Broyden, Dennis, and Moré for the case without line searches. As a by-product, several results on the asymptotic behavior of the sequence $\{B_k\}$ are obtained.
An interesting aspect of these results is that superlinear convergence is obtained without any consistency conditions; i.e., without requiring that the sequence $\{B_k\}$ converge to the Jacobian matrix of F at the zero. In fact, a modification of an example due to Powell shows that most of the known quasi-Newton methods are not, in general, consistent. Finally, it is pointed out that the above-mentioned characterization theorem applies to other single and double rank quasi-Newton methods, and that the results of this paper can be used to obtain their superlinear convergence.
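For reference, the characterization at the heart of the paper can be stated compactly (a standard statement of the Dennis–Moré condition; notation as above, with x* a zero of F at which F' is nonsingular): a convergent sequence {x_k} converges Q-superlinearly to x* if and only if

\[ \lim_{k \to \infty} \frac{\| (B_k - F'(x^*))\,(x_{k+1} - x_k) \|}{\| x_{k+1} - x_k \|} = 0, \]

that is, if and only if B_k agrees with the Jacobian at the solution along the step directions; no convergence of B_k itself to F'(x*) is required.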
We propose data profiles as a tool for analyzing the performance of derivative-free optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performance of three solvers on sets of smooth, noisy, and piecewise-smooth problems. Our results provide estimates for the performance difference between these solvers, and show that on these problems, the model-based solver tested performs better than the two direct search solvers tested.
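As a sketch of the bookkeeping involved (illustrative array names and calling convention, not code from the paper): a data profile reports, for each solver, the fraction of problems solved within a budget of α "simplex gradients," where one simplex gradient on a problem with n_p variables costs n_p + 1 function evaluations.

    import numpy as np

    def data_profile(evals, dims, alphas):
        # evals[p, s] = number of function evaluations solver s needed to
        # satisfy the convergence test on problem p (np.inf on failure);
        # dims[p] = number of variables in problem p.
        # Returns d[s, i] = fraction of problems solved within
        # alphas[i] simplex gradients (n_p + 1 evaluations each).
        evals = np.asarray(evals, dtype=float)
        budgets = np.asarray(dims, dtype=float)[:, None] + 1.0  # n_p + 1
        ratios = evals / budgets
        d = np.empty((evals.shape[1], len(alphas)))
        for i, a in enumerate(alphas):
            d[:, i] = np.mean(ratios <= a, axis=0)
        return d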
The development of software for minimization problems is often based on a line search method. We consider line search methods that satisfy sufficient decrease and curvature conditions, and formulate the problem of determining a point that satisfies these two conditions in terms of finding a point in a set T(μ). We describe a search algorithm for this problem that produces a sequence of iterates that converge to a point in T(μ) and that, except for pathological cases, terminates in a finite number of steps. Numerical results for an implementation of the search algorithm on a set of test functions show that the algorithm terminates within a small number of iterations.
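The two conditions in question are the standard sufficient decrease and curvature conditions: given a descent direction p and parameters 0 < μ < η < 1, a step length α > 0 is acceptable when

\[ f(x + \alpha p) \le f(x) + \mu\,\alpha\,\nabla f(x)^T p, \qquad |\nabla f(x + \alpha p)^T p| \le \eta\,|\nabla f(x)^T p|. \]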
C. G. Broyden, J. E. Dennis, Jr., and Jorge J. Moré (Department of Computer Science, Cornell University, Ithaca, N.Y.). On the Local and Superlinear Convergence of Quasi-Newton Methods. IMA Journal of Applied Mathematics, Volume 12, Issue 3, December 1973, pp. 223–245. https://doi.org/10.1093/imamat/12.3.223. Received 26 March 1973; published 1 December 1973.
Given a mapping with a sparse Jacobian matrix, we investigate the problem of minimizing the number of function evaluations needed to estimate the Jacobian matrix by differences. We show that this problem can be attacked as a graph coloring problem and that this approach leads to very efficient algorithms. The behavior of these algorithms is studied and, in particular, we prove that two of the algorithms are optimal for band graphs. We also present numerical evidence which indicates that these two algorithms are nearly optimal on practical problems.
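The coloring connection can be sketched in a few lines (an illustrative greedy grouping in the Curtis–Powell–Reid spirit, not the optimized orderings analyzed in the paper): two columns of the Jacobian can share one difference of F whenever they have no nonzero in a common row.

    import numpy as np

    def greedy_column_groups(sparsity):
        # sparsity: boolean (m x n) array giving the Jacobian pattern.
        # Columns are grouped so that no two columns in a group share a
        # nonzero row; one forward difference of F then recovers every
        # column in a group.
        m, n = sparsity.shape
        groups = []   # each group: (list of columns, union of their rows)
        for j in range(n):
            rows_j = sparsity[:, j]
            for cols, rows in groups:
                if not np.any(rows & rows_j):  # structurally orthogonal
                    cols.append(j)
                    rows |= rows_j
                    break
            else:
                groups.append(([j], rows_j.copy()))
        return [cols for cols, _ in groups]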
An algorithm is proposed that uses the conjugate gradient method to explore the face of the feasible region defined by the current iterate, and the gradient projection method to move to a different face. It is proved that for strictly convex problems the algorithm converges to the solution, and that if the solution is nondegenerate, then the algorithm terminates at the solution in a finite number of steps. Numerical results are presented for the obstacle problem, the elastic-plastic torsion problem, and the journal bearing problem. On a selection of these problems with dimensions ranging from 5000 to 15,000, the algorithm determines the solution in fewer than 15 iterations, and with a small number of function-gradient evaluations and Hessian-vector products per iteration.
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
The trust region problem requires the global minimum of a general quadratic function subject to an ellipsoidal constraint. The development of algorithms for the solution of this problem has found applications in nonlinear and combinatorial optimization. In this paper we generalize the trust region problem by allowing a general quadratic constraint. The main results are a characterization of the global minimizer of the generalized trust region problem, and the development of an algorithm that finds an approximate global minimizer in a finite number of iterations.
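For the standard case of a norm constraint, the characterization being generalized takes a well-known form: x* is a global minimizer of gᵀx + ½xᵀHx subject to ||x|| ≤ Δ if and only if there is a λ ≥ 0 such that

\[ (H + \lambda I)\,x^* = -g, \qquad H + \lambda I \succeq 0, \qquad \lambda\,(\Delta - \|x^*\|) = 0. \]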
Distance geometry problems arise in the determination of protein structure. We consider the case where only a subset of the distances between atoms is given and formulate this distance geometry problem as a global minimization problem with special structure. We show that global smoothing techniques and a continuation approach for global optimization can be used to determine global solutions of this problem reliably and efficiently. The global continuation approach determines a global solution with less computational effort than is required by a standard multistart algorithm. Moreover, the continuation approach usually finds the global solution from any given starting point, while the multistart algorithm tends to fail.
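The smoothing in this line of work is typically built on the Gaussian transform: for a smoothing parameter λ > 0,

\[ \langle f \rangle_\lambda(x) = \frac{1}{\pi^{n/2} \lambda^n} \int_{R^n} f(y)\, e^{-\|y - x\|^2 / \lambda^2}\, dy, \]

and the continuation approach traces minimizers of the smoothed function back as λ decreases to 0. (This is the standard form of the transform in this setting; consult the paper for the precise variant used.)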
Nondegeneracy conditions that guarantee that the optimal active constraints are identified in a finite number of iterations are studied. Results of this type have only been established for a few algorithms, and then under restrictive hypotheses. The main result is a characterization of those algorithms that identify the optimal constraints in a finite number of iterations. This result is obtained with a nondegeneracy assumption which is equivalent, in the standard nonlinear programming problem, to the assumption that there is a set of strictly complementary Lagrange multipliers. As an important consequence of these results, it is shown how this characterization applies to gradient projection and sequential quadratic programming algorithms.
We propose an incomplete Cholesky factorization for the solution of large-scale trust region subproblems and positive definite systems of linear equations. This factorization depends on a parameter p that specifies the amount of additional memory (in multiples of n, the dimension of the problem) that is available; there is no need to specify a drop tolerance. Our numerical results show that the number of conjugate gradient iterations and the computing time are reduced dramatically for small values of p. We also show that in contrast with drop tolerance strategies, the new approach is more stable in terms of number of iterations and memory requirements.
If $F : R^n \to R^n$ is a continuous mapping on a closed, convex cone K of $R^n$, it is shown that very weak restrictions on the growth of F (coercivity conditions) guarantee the existence of a solution to the nonlinear complementarity problem: for each z in $R^n$, find an $x^*$ in K such that $Fx^* - z$ belongs to the polar of K and $x^*$ is orthogonal to $Fx^* - z$. If K is $R^n$, then this problem reduces to finding conditions on F which guarantee that F maps $R^n$ onto $R^n$.
Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
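As a concrete, minimal sketch of the line search approach discussed here (illustrative only; practical codes also modify indefinite Hessians and enforce curvature conditions):

    import numpy as np

    def newton_armijo(f, grad, hess, x, tol=1e-8, max_iter=100):
        # Damped Newton: Newton direction plus Armijo backtracking.
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) <= tol:
                break
            p = np.linalg.solve(hess(x), -g)   # Newton direction
            if g @ p >= 0.0:                   # not a descent direction:
                p = -g                         # fall back to steepest descent
            alpha, fx, slope = 1.0, f(x), g @ p
            for _ in range(30):                # Armijo backtracking
                if f(x + alpha * p) <= fx + 1e-4 * alpha * slope:
                    break
                alpha *= 0.5
            x = x + alpha * p
        return x

The trust region approach replaces the backtracking loop with a quadratic model minimized over a ball whose radius is adjusted from iteration to iteration, as in the GQTPAR abstract above.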
Jorge J. Moré (Applied Mathematics Division, Argonne National Laboratory, 9700 South Cass Ave., Argonne, IL) and Michel Y. Cosnard (Mathématiques Appliquées-Informatique, Université Scientifique et Médicale de Grenoble, Boîte Postale 53, 38041 Grenoble Cédex, France). Numerical Solution of Nonlinear Equations. ACM Transactions on Mathematical Software, Volume 5, Issue 1, March 1979, pp. 64–85. https://doi.org/10.1145/355815.355820.
The authors describe version 3.0 of the COPS set of nonlinearly constrained optimization problems. They have added new problems, as well as streamlined and improved most of the problems. They also provide a comparison of the FILTER, KNITRO, LOQO, MINOS, and SNOPT solvers on these problems.
Thomas F. Coleman (Cornell University), Burton S. Garbow, and Jorge J. Moré (Argonne National Laboratory). Software for Estimating Sparse Jacobian Matrices. ACM Transactions on Mathematical Software, Volume 10, Issue 3, September 1984, pp. 329–345. https://doi.org/10.1145/1271.1610.
Global methods for nonlinear complementarity problems formulate the problem as a system of nonsmooth nonlinear equations, or use continuation to trace a path defined by a smooth system of nonlinear equations. We formulate the nonlinear complementarity problem as a bound-constrained nonlinear least squares problem. Algorithms based on this formulation are applicable to general nonlinear complementarity problems, can be started from any nonnegative starting point, and each iteration only requires the solution of systems of linear equations. Convergence to a solution of the nonlinear complementarity problem is guaranteed under reasonable regularity assumptions. The convergence rate is Q-linear, Q-superlinear, or Q-quadratic, depending on the tolerances used to solve the subproblems.
We consider Broyden's 1965 method for solving nonlinear equations. If the mapping is linear, then a simple modification of this method guarantees global and Q-superlinear convergence. For nonlinear mappings it is shown that the hybrid strategy for nonlinear equations due to Powell leads to R-superlinear convergence provided the search directions form a uniformly linearly independent sequence. We then explore this last concept and its connection with Broyden's method. Finally, we point out how the above results extend to Powell's symmetric version of Broyden's method.
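For context, the 1965 update in question is, with s_k = x_{k+1} - x_k and y_k = F x_{k+1} - F x_k,

\[ B_{k+1} = B_k + \frac{(y_k - B_k s_k)\, s_k^T}{s_k^T s_k}, \]

the rank-one correction that satisfies the secant equation B_{k+1} s_k = y_k while changing B_k as little as possible in the Frobenius norm.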
The computation of large sparse Jacobian matrices is required in many important large-scale scientific problems. Three approaches to computing such matrices are considered: hand-coding, difference approximations, and automatic differentiation using the ADIFOR (automatic differentiation in Fortran) tool. The authors compare the numerical reliability and computational efficiency of these approaches on applications from the MINPACK-2 test problem collection. The conclusion is that ADIFOR is the method of choice, leading to results that are as accurate as hand-coded derivatives, while at the same time outperforming difference approximations in both accuracy and speed.
The Toolkit for Advanced Optimization (TAO) focuses on the design and implementation of component-based optimization software for the solution of large-scale optimization applications on high-performance architectures. Their approach is motivated by the scattered support for parallel computations and lack of reuse of linear algebra software in currently available optimization software. The TAO design allows the reuse of toolkits that provide lower-level support (parallel sparse matrix data structures, preconditioners, solvers), and thus they are able to build on top of these toolkits instead of having to redevelop code. The advantages in terms of efficiency and development time are significant. The TAO design philosophy uses object-oriented techniques of data and state encapsulation, abstract classes, and limited inheritance to create a flexible optimization toolkit. This chapter provides a short introduction to the design philosophy by describing the objectives in TAO and the importance of this design. Since a major concern in the TAO project is the performance and scalability of optimization algorithms on large problems, they also present some performance results.
The development of algorithms and software for the solution of large-scale optimization problems has been the main motivation behind the research on the identification properties of optimization algorithms. The aim of an identification result for a linearly constrained problem is to show that if the sequence generated by an optimization algorithm converges to a stationary point, then there is a nontrivial face F of the feasible set such that after a finite number of iterations, the iterates enter and remain in the face F. This paper develops the identification properties of linearly constrained optimization algorithms without any nondegeneracy or linear independence assumptions. The main result shows that the projected gradient converges to zero if and only if the iterates enter and remain in the face exposed by the negative gradient. This result generalizes results of Burke and Moré obtained for nondegenerate cases.
We develop a quasi-Newton method which preserves known bounds on the Jacobian matrix. We show that this update can be computed with the same amount of work as competitive methods. In particular, we prove that the number of operations required to obtain this update is proportional to the number of nonzeros in the sparsity pattern of the Jacobian matrix. The method is also shown to share the local convergence properties of Broyden's and Schubert's methods.
We examine the importance of optimality measures when benchmarking a set of solvers and show that the scale-invariance requirements we impose lead to a convergence test for nonlinearly constrained optimization solvers that uses a mixture of absolute and relative error measures. We demonstrate that this convergence test is well behaved at any point where the constraints satisfy the Mangasarian–Fromovitz constraint qualification and also avoids the explicit use of a complementarity measure. Computational experiments explore the impact of this convergence test on the benchmarking process with performance profiles.
Jorge J. Moré (Applied Mathematics Division, Argonne National Laboratory, Argonne, IL) and Michel Y. Cosnard (Mathématiques Appliquées-Informatique, Université Scientifique et Médicale de Grenoble, France). Algorithm 554: BRENTM, A Fortran Subroutine for the Numerical Solution of Nonlinear Equations [C5]. ACM Transactions on Mathematical Software, Volume 6, Issue 2, pp. 240–251. https://doi.org/10.1145/355887.355898. Published 1 June 1980.
The accurate and efficient computation of gradients for partially separable functions is central to the solution of large-scale optimization problems, because these functions are ubiquitous in large-scale problems. We describe two approaches for computing gradients of partially separable functions via automatic differentiation. In our experiments we employ the ADIFOR (automatic differentiation of Fortran) tool and the SparsLinC (sparse linear combination) library. We use applications from the MINPACK-2 test problem collection to compare the numerical reliability and computational efficiency of these approaches with hand-coded derivatives and approximations based on differences of function values. Our conclusion is that automatic differentiation is the method of choice, providing code for the efficient computation of the gradient without the need for tedious hand-coding.
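Partial separability here refers to the structure

\[ f(x) = \sum_{i=1}^{m} f_i(x), \qquad \nabla f(x) = \sum_{i=1}^{m} \nabla f_i(x), \]

where each element function f_i depends on only a few components of x, so each element gradient is cheap and sparse; this is the structure that lets automatic differentiation with sparse accumulation assemble the full gradient efficiently.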
We analyze the performance and scalability of algorithms for the solution of large optimization problems on high-performance parallel architectures. Our case study uses the GPCG (gradient projection, conjugate gradient) algorithm for solving bound-constrained convex quadratic problems. Our implementation of the GPCG algorithm within the Toolkit for Advanced Optimization (TAO) is available for a wide range of high-performance architectures and has been tested on problems with over 2.5 million variables. We analyze the performance as a function of the number of variables, the number of free variables, and the preconditioner. In addition, we discuss how the software design facilitates algorithmic comparisons.
The authors examine the importance of problem formulation for the solution of large-scale optimization problems on high-performance architectures. Limited memory variable metric methods are used to illustrate performance issues. It is shown that the performance of these algorithms is drastically affected by application implementation. Model applications are drawn from the MINPACK-2 test problem collection, with numerical results from a super-scalar architecture (IBM RS6000/370), a vector architecture (CRAY-2), and a massively parallel architecture (Intel DELTA).
In this paper, we study the global convergence of iterative schemes of the form \[ x^{k + 1} = x^k - P_k(x^k)^{-1} F x^k \] with special emphasis on the Newton–Gauss–Seidel methods. If F is continuously differentiable and convex on all of $R^n$, and $P_k(x)^{-1}$ is a nonnegative right subinverse of $F'(x)$ for each x in $R^n$, a general theorem is obtained which generalizes results of Varga for linear mappings with weak splittings and improves on results of Greenspan and Parter for discrete approximations of elliptic partial differential equations and of Ortega and Rheinboldt for general nonlinear mappings. We also present a counter-example which shows that the theorems obtained are, in a sense, the best possible. Finally, we prove a result in which the above hypotheses are only assumed to hold on a portion of $R^n$.
We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.
MINPACK-1 is a package of Fortran subprograms for the numerical solution of systems of nonlinear equations and nonlinear least-squares problems. This report describes how to implement the package from the tape on which it is transmitted.
Distance geometry problems arise in the interpretation of NMR data and in the determination of protein structure. The authors formulate the distance geometry problem as a global minimization problem with special structure, and show that global smoothing techniques and a continuation approach for global optimization can be used to determine solutions of distance geometry problems with a nearly 100% probability of success.
We propose performance profiles (distribution functions for a performance metric) as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
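A sketch of the computation (illustrative naming, not code from the paper): with t[p, s] the cost of solver s on problem p, the profile for solver s is the empirical distribution of the ratios of t[p, s] to the best cost achieved on problem p.

    import numpy as np

    def performance_profile(times, taus):
        # times[p, s] = cost (e.g. CPU time) of solver s on problem p,
        # np.inf on failure; at least one solver is assumed to solve
        # each problem. Returns rho[s, i] = fraction of problems where
        # solver s is within a factor taus[i] of the best solver.
        times = np.asarray(times, dtype=float)
        best = times.min(axis=1, keepdims=True)  # best solver per problem
        ratios = times / best
        rho = np.empty((times.shape[1], len(taus)))
        for i, t in enumerate(taus):
            rho[:, i] = np.mean(ratios <= t, axis=0)
        return rho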
We survey techniques in the Toolkit for Advanced Optimization (TAO) for developing scalable algorithms for mesh-based optimization problems on distributed architectures. We discuss the distribution of the mesh, the computation of the gradient and the Hessian matrix, and the use of preconditioners. We show that these techniques, together with mesh sequencing, can produce results that scale with mesh size.
Reliable calculations of the structure of heavy elements are crucial to address fundamental science questions such as the origin of the elements in the universe. Applications relevant for energy production, medicine, or national security also rely on theoretical predictions of basic properties of atomic nuclei. Heavy elements are best described within the nuclear density functional theory (DFT) and its various extensions. While relatively mature, DFT has never been implemented in its full power, as it relies on a very large number (~10^9–10^12) of expensive calculations, each taking on the order of a day. The advent of leadership-class computers, as well as dedicated large-scale collaborative efforts such as the SciDAC 2 UNEDF project, has dramatically changed the field. This article gives an overview of the various computational challenges related to nuclear DFT, as well as some of the recent achievements.
GPCG is an algorithm within the Toolkit for Advanced Optimization (TAO) for solving bound-constrained, convex quadratic problems. Originally developed by Moré and Toraldo, this algorithm was designed for large-scale problems but had been implemented only for a single processor. The TAO implementation is available for a wide range of high-performance architectures, and has been tested on up to 64 processors to solve problems with over 2.5 million variables.
We discuss the role of automatic differentiation tools in optimization software. We emphasize issues that are important to large-scale optimization and that have proved useful in the installation of nonlinear solvers in the NEOS Server. Our discussion centers on the computation of the gradient and Hessian matrix for partially separable functions and shows that the gradient and Hessian matrix can be computed with guaranteed bounds in time and memory requirements.
We propose performance profiles-distribution functions for a performance metric-as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for … We propose performance profiles-distribution functions for a performance metric-as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of … We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
We propose an incomplete Cholesky factorization for the solution of large-scale trust region subproblems and positive definite systems of linear equations. This factorization depends on a parameter p that specifies … We propose an incomplete Cholesky factorization for the solution of large-scale trust region subproblems and positive definite systems of linear equations. This factorization depends on a parameter p that specifies the amount of additional memory (in multiples of n, the dimension of the problem) that is available; there is no need to specify a drop tolerance. Our numerical results show that the number of conjugate gradient iterations and the computing time are reduced dramatically for small values of p. We also show that in contrast with drop tolerance strategies, the new approach is more stable in terms of number of iterations and memory requirements.
Distance geometry problems arise in the determination of protein structure. We consider the case where only a subset of the distances between atoms is given and formulate this distance geometry … Distance geometry problems arise in the determination of protein structure. We consider the case where only a subset of the distances between atoms is given and formulate this distance geometry problem as a global minimization problem with special structure. We show that global smoothing techniques and a continuation approach for global optimization can be used to determine global solutions of this problem reliably and efficiently. The global continuation approach determines a global solution with less computational effort than is required by a standard multistart algorithm. Moreover, the continuation approach usually finds the global solution from any given starting point, while the multistart algorithm tends to fail.
The accurate and efficient computation of gradients for partially separable functions is central to the solution of large-scale optimization problems, because these functions are ubiquitous in large-scale problems. We describe … The accurate and efficient computation of gradients for partially separable functions is central to the solution of large-scale optimization problems, because these functions are ubiquitous in large-scale problems. We describe two approaches for computing gradients of partially separable functions via automatic differentiation. In our experiments we employ the ADIFOR (automatic differentiation of Fortran) tool and the SparsLinC (sparse linear combination) library. We use applications from the MINPACK-2 test problem collection to compare the numerical reliability and computational efficiency of these approaches with hand-coded derivatives and approximations based on differences of function values. Our conclusion is that automatic differentiation is the method of choice, providing code for the efficient computation of the gradient without the need for tedious hand-coding.
We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are … We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.
Global methods for nonlinear complementarity problems formulate the problem as a system of nonsmooth nonlinear equations, or use continuation to trace a path defined by a smooth system of nonlinear … Global methods for nonlinear complementarity problems formulate the problem as a system of nonsmooth nonlinear equations, or use continuation to trace a path defined by a smooth system of nonlinear equations. We formulate the nonlinear complementarity problem as a bound-constrained nonlinear least squares problem. Algorithms based on this formulation are applicable to general nonlinear complementarity problems, can be started from any nonnegative starting point, and each iteration only requires the solution of systems of linear equations. Convergence to a solution of the nonlinear complementarity problem is guaranteed under reasonable regularity assumptions. The converge rate is Q-linear, Q-superlinear, or Q-quadratic, depending on the tolerances used to solve the subproblems.
Distance geometry problems arise in the interpretation of NMR data and in the determination of protein structure. The authors formulate the distance geometry problem as a global minimization problem with … Distance geometry problems arise in the interpretation of NMR data and in the determination of protein structure. The authors formulate the distance geometry problem as a global minimization problem with special structure, and show the global smoothing techniques and a continuation approach for global optimization can be used to determine solutions of distance geometry problems with a nearly 100% probability of success.
The authors examine the importance of problem formulation for the solution of large-scale optimization problems on high-performance architectures. Limited memory variable metric methods are used to illustrate performance issues. It … The authors examine the importance of problem formulation for the solution of large-scale optimization problems on high-performance architectures. Limited memory variable metric methods are used to illustrate performance issues. It is shown that the performance of these algorithms is drastically affected by application implementation. Model applications are drawn from the MINPACK-2 test problem collection, with numerical results from a super-scalar architecture (IBM RS6000/370), a vector architecture (CRAY-2), and a massively parallel architecture (Intel DELTA).
The development of software for minimization problems is often based on a line search method. We consider line search methods that satisfy sufficient decrease and curvature conditions, and formulate the … The development of software for minimization problems is often based on a line search method. We consider line search methods that satisfy sufficient decrease and curvature conditions, and formulate the problem of determining a point that satisfies these two conditions in terms of finding a point in a set T(μ) . We describe a search algorithm for this problem that produces a sequence of iterates that converge to a point in T(μ) and that, except for pathological cases, terminates in a finite number of steps. Numerical results for an implementation of the search algorithm on a set of test functions show that the algorithm terminates within a small number of iterations.
The development of algorithms and software for the solution of large-scale optimization problems has been the main motivation behind the research on the identification properties of optimization algorithms. The aim … The development of algorithms and software for the solution of large-scale optimization problems has been the main motivation behind the research on the identification properties of optimization algorithms. The aim of an identification result for a linearly constrained problem is to show that if the sequence generated by an optimization algorithm converges to a stationary point, then there is a nontrivial face F of the feasible set such that after a finite number of iterations, the iterates enter and remain in the face F. This paper develops the identification properties of linearly constrained optimization algorithms without any nondegeneracy or linear independence assumptions. The main result shows that the projected gradient converges to zero if and only if the iterates enter and remain in the face exposed by the negative gradient. This result generalizes results of Burke and Moré obtained for nondegenerate cases.
Global methods for nonlinear complementarity problems formulate the problem as a system of nonsmooth nonlinear equations approach, or use continuation to trace a path defined by a smooth system of … Global methods for nonlinear complementarity problems formulate the problem as a system of nonsmooth nonlinear equations approach, or use continuation to trace a path defined by a smooth system of nonlinear equations. We formulate the nonlinear complementarity problem as a bound-constrained nonlinear least squares problem. Algorithms based on this formulation are applicable to general nonlinear complementarity problems, can be started from any nonnegative starting point, and each iteration only requires the solution of systems of linear equations. Convergence to a solution of the nonlinear complementarity problem is guaranteed under reasonable regularity assumptions. The converge rate is Q-linear, Q-superlinear, or Q-quadratic, depending on the tolerances used to solve the subproblems.
The computation of large sparse Jacobian matrices is required in many important large-scale scientific problems. Three approaches to computing such matrices are considered: hand-coding, difference approximations, and automatic differentiation using … The computation of large sparse Jacobian matrices is required in many important large-scale scientific problems. Three approaches to computing such matrices are considered: hand-coding, difference approximations, and automatic differentiation using the ADIFOR (automatic differentiation in Fortran) tool. The authors compare the numerical reliability and computational efficiency of these approaches on applications from the MINPACK-2 test problem collection. The conclusion is that ADIFOR is the method of choice, leading to results that are as accurate as hand-coded derivatives, while at the same time outperforming difference approximations in both accuracy and speed.
The trust region problem requires the global minimum of a general quadratic function subject to an ellipsoidal constraint. The development of algorithms for the solution of this problem has found applications in nonlinear and combinatorial optimization. In this paper we generalize the trust region problem by allowing a general quadratic constraint. The main results are a characterization of the global minimizer of the generalized trust region problem, and the development of an algorithm that finds an approximate global minimizer in a finite number of iterations.
An algorithm is proposed that uses the conjugate gradient method to explore the face of the feasible region defined by the current iterate, and the gradient projection method to move to a different face. It is proved that for strictly convex problems the algorithm converges to the solution, and that if the solution is nondegenerate, then the algorithm terminates at the solution in a finite number of steps. Numerical results are presented for the obstacle problem, the elastic-plastic torsion problem, and the journal bearing problem. On a selection of these problems with dimensions ranging from 5000 to 15,000, the algorithm determines the solution in fewer than 15 iterations, and with a small number of function-gradient evaluations and Hessian-vector products per iteration.
Nondegeneracy conditions that guarantee that the optimal active constraints are identified in a finite number of iterations are studied. Results of this type have only been established for a few algorithms, and then under restrictive hypotheses. The main result is a characterization of those algorithms that identify the optimal constraints in a finite number of iterations. This result is obtained with a nondegeneracy assumption which is equivalent, in the standard nonlinear programming problem, to the assumption that there is a set of strictly complementary Lagrange multipliers. As an important consequence, it is shown how this characterization applies to gradient projection and sequential quadratic programming algorithms.
We develop a quasi-Newton method which preserves known bounds on the Jacobian matrix. We show that this update can be computed with the same amount of work as competitive methods. In particular, we prove that the number of operations required to obtain this update is proportional to the number of nonzeros in the sparsity pattern of the Jacobian matrix. The method is also shown to share the local convergence properties of Broyden’s and Schubert’s methods.
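For reference, a minimal sketch of the kind of sparsity-respecting secant update this method is compared against, in the spirit of Schubert's row-wise correction, is given below. The boolean pattern S and the variable names are assumptions for the example; this is not the bound-preserving update of the paper.

```python
import numpy as np

def schubert_update(B, S, s, y):
    """Sparsity-preserving secant update of a Jacobian approximation B.

    S is the boolean sparsity pattern; each row is corrected only in its
    nonzero positions, so the work is proportional to the number of
    nonzeros in the pattern.
    """
    B_new = B.copy()
    for i in range(B.shape[0]):
        mask = S[i]
        s_i = np.where(mask, s, 0.0)        # s restricted to row i's pattern
        denom = np.dot(s_i, s_i)
        if denom > 0.0:
            r = y[i] - np.dot(B[i], s)      # secant residual for row i
            B_new[i, mask] += r * s_i[mask] / denom
    return B_new
```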
Software for estimating sparse Jacobian matrices. Thomas F. Coleman, Burton S. Garbow, and Jorge J. Moré. ACM Transactions on Mathematical Software, Volume 10, Issue 3 (September 1984), pp. 329–345. https://doi.org/10.1145/1271.1610
Given a mapping with a sparse Jacobian matrix, we investigate the problem of minimizing the number of function evaluations needed to estimate the Jacobian matrix by differences. We show that this problem can be attacked as a graph coloring problem and that this approach leads to very efficient algorithms. The behavior of these algorithms is studied and, in particular, we prove that two of the algorithms are optimal for band graphs. We also present numerical evidence which indicates that these two algorithms are nearly optimal on practical problems.
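The connection to coloring is direct: columns of the Jacobian that share no nonzero row can be estimated from a single extra function evaluation, so grouping columns amounts to coloring the column intersection graph. A minimal greedy coloring sketch follows; it is a generic heuristic under an assumed boolean pattern S, not one of the paper's orderings.

```python
import numpy as np

def greedy_column_coloring(S):
    """Greedy coloring of the column intersection graph of a sparsity pattern.

    S is a boolean matrix; columns sharing a nonzero row must get different
    colors, and all columns of one color can be estimated by a single
    difference of the mapping.
    """
    n = S.shape[1]
    color = -np.ones(n, dtype=int)
    for j in range(n):
        # colors already used by columns that intersect column j
        forbidden = {color[k] for k in range(n)
                     if k != j and color[k] >= 0 and np.any(S[:, j] & S[:, k])}
        c = 0
        while c in forbidden:
            c += 1
        color[j] = c
    return color
```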
Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
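As a concrete anchor for the two approaches discussed, the sketch below implements the line search variant in its simplest form: a Newton step damped by backtracking until a sufficient decrease condition holds. It is a minimal illustration, assuming callables f, grad, and hess, not a robust implementation.

```python
import numpy as np

def newton_line_search(f, grad, hess, x, tol=1e-8, max_iter=100):
    """Damped Newton iteration with backtracking (Armijo) line search.

    The trust region variant would replace the step-length loop with a
    constrained quadratic subproblem.
    """
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)    # Newton step
        alpha = 1.0
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * np.dot(g, p):
            alpha *= 0.5                    # backtrack until sufficient decrease
        x = x + alpha * p
    return x
```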
This paper discusses research concerning the numerical solution of optimization problems. The introduction provides a non-technical overview of the field, and gives a general picture of the form and attributes of optimization problems. The subsequent sections include our view of several crucial areas in which further research is worthwhile, as well as a brief indication of the current state of the art.
MINPACK-1 is a package of Fortran subprograms for the numerical solution of systems of nonlinear equations and nonlinear least-squares problems. This report describes how to implement the package from the tape on which it is transmitted.
Algorithm 554: BRENTM, A Fortran Subroutine for the Numerical Solution of Nonlinear Equations [C5]. Jorge J. Moré and Michel Y. Cosnard. ACM Transactions on Mathematical Software, Volume 6, Issue 2 (June 1980), pp. 240–251. https://doi.org/10.1145/355887.355898
Numerical Solution of Nonlinear Equations. Jorge J. Moré and Michel Y. Cosnard. ACM Transactions on Mathematical Software, Volume 5, Issue 1 (March 1979), pp. 64–85. https://doi.org/10.1145/355815.355820
We consider Broyden’s 1965 method for solving nonlinear equations. If the mapping is linear, then a simple modification of this method guarantees global and Q-superlinear convergence. For nonlinear mappings it is shown that the hybrid strategy for nonlinear equations due to Powell leads to R-superlinear convergence provided the search directions form a uniformly linearly independent sequence. We then explore this last concept and its connection with Broyden’s method. Finally, we point out how the above results extend to Powell’s symmetric version of Broyden’s method.
We show how to use known constant elements in a Jacobian matrix to reduce the work required to estimate the remaining elements by finite differences.
We consider the problem $\min \{ f(x) \mid x \geq 0 \}$, and propose algorithms of the form $x_{k+1} = [x_k - \alpha_k D_k \nabla f(x_k)]^+$, where $[\cdot]^+$ denotes projection on the positive orthant, $\alpha_k$ is a stepsize chosen by an Armijo-like rule, and $D_k$ is a positive definite symmetric matrix which is partly diagonal. We show that $D_k$ can be calculated simply on the basis of second derivatives of $f$ so that the resulting Newton-like algorithm has a typically superlinear rate of convergence. With other choices of $D_k$ convergence at a typically linear rate is obtained. The algorithms are almost as simple as their unconstrained counterparts. They are well suited for problems of large dimension such as those arising in optimal control while being competitive with existing methods for low-dimensional problems. The effectiveness of the Newton-like algorithm is demonstrated via computational examples involving as many as 10,000 variables. Extensions to general linearly constrained problems are also provided. These extensions utilize a notion of an active generalized rectangle patterned after the notion of an active manifold used in manifold suboptimization methods. By contrast with these methods, many constraints can be added or subtracted from the binding set at each iteration without the need to solve a quadratic programming problem.
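A minimal sketch of the basic iteration, with the scaling matrix $D_k$ taken to be the identity and an Armijo-like rule along the projection arc, is given below; the parameter values sigma and beta are illustrative assumptions.

```python
import numpy as np

def projected_gradient(f, grad, x0, sigma=1e-4, beta=0.5, max_iter=200):
    """Gradient projection on {x >= 0} with an Armijo-like step-size rule.

    The projection [.]^+ onto the nonnegative orthant is np.maximum(., 0).
    """
    x = np.maximum(x0, 0.0)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(x - np.maximum(x - g, 0.0)) < 1e-10:
            break                       # projected-gradient optimality measure
        alpha = 1.0
        while True:
            x_new = np.maximum(x - alpha * g, 0.0)    # [x - alpha*g]^+
            if f(x_new) <= f(x) - sigma * np.dot(g, x - x_new):
                break                   # sufficient decrease along the arc
            alpha *= beta
        x = x_new
    return x
```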
The functions that require zeroing are real functions of real variables and it will be assumed that they are continuous and differentiable with respect to these variables. In many practical examples they are extremely complicated and hence laborious to compute, and this fact has two important immediate consequences. The first is that it is impracticable to compute any derivative that may be required by the evaluation of the algebraic expression of this derivative. If derivatives are needed they must be obtained by differencing. The second is that during any iterative solution process the bulk of the computing time will be spent in evaluating the functions. Thus, the most efficient process will tend to be that which requires the smallest number of function evaluations. This paper discusses certain modifications to Newton's method designed to reduce the number of function evaluations required. Results of various numerical experiments are given and conditions under which the modified versions are superior to the original are tentatively suggested.
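The best-known of these modifications replaces the Jacobian by an approximation that is corrected with a rank-one update after each step, so no extra function evaluations are spent on derivatives. A minimal sketch of one such step, in the spirit of Broyden's update, follows; F, x, and B are assumed to be a callable mapping, an iterate, and a current Jacobian approximation.

```python
import numpy as np

def broyden_step(F, x, B):
    """One step of a Broyden-type method: quasi-Newton step plus secant update.

    The rank-one correction makes B consistent with the latest secant pair
    (s, y) while changing it as little as possible.
    """
    s = np.linalg.solve(B, -F(x))       # quasi-Newton step
    x_new = x + s
    y = F(x_new) - F(x)
    B_new = B + np.outer(y - B @ s, s) / np.dot(s, s)   # secant update
    return x_new, B_new
```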
We describe the results of a series of tests for a class of new methods of trust region type for solving the simple bound constrained minimization problem. The results are encouraging and lead us to believe that the methods will prove useful in solving large-scale problems.
This paper studies automatic procedures for estimating second derivatives of a real valued function of several variables. The estimates are obtained from differences in first derivative vectors, and it is supposed that the required matrix is sparse and that its sparsity structure is known. Our main purpose is to find ways of taking advantage of the sparsity structure and the symmetry of the second derivative matrix, in order to make small the number of first derivative vectors that have to be calculated. Two new algorithms are proposed, which seem to be very successful in practice and which do not require much computer arithmetic. One is a direct method and the other is a substitution method, these terms being explained in the paper. Some examples show, however, that the given methods may not minimize the number of first derivative vector calculations.
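The starting point for both algorithms is that a column of the second derivative matrix can be estimated from a difference of first derivative vectors. The dense version of this idea, before any sparsity or symmetry is exploited, is sketched below with an illustrative step size h; the paper's contribution is precisely to need far fewer gradient evaluations than this.

```python
import numpy as np

def hessian_from_gradients(grad, x, h=1e-6):
    """Estimate the Hessian from differences of first derivative vectors.

    One gradient evaluation per coordinate direction, then symmetrization.
    """
    g0 = grad(x)
    n = x.size
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(x + e) - g0) / h
    return 0.5 * (H + H.T)    # enforce symmetry
```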
This paper extends the known excellent global convergence properties of trust region algorithms for unconstrained optimization to the case where bounds on the variables are present. Weak conditions on the accuracy of the Hessian approximations are considered. It is also shown that, when the strict complementarity condition holds, the proposed algorithms reduce to an unconstrained calculation after finitely many iterations, allowing a fast asymptotic rate of convergence.
This paper considers some aspects of a gradient projection method proposed by Goldstein [1], Levitin and Polyak [3], and more recently, in a less general context, by McCormick [10]. We propose and analyze some convergent step-size rules to be used in conjunction with the method. These rules are similar in spirit to the efficient Armijo rule for the method of steepest descent and under mild assumptions they have the desirable property that they identify the set of active inequality constraints in a finite number of iterations. As a result the method may be converted towards the end of the process to a conjugate direction, quasi-Newton or Newton's method, and achieve the attendant superlinear convergence rate. As an example we propose some quadratically convergent combinations of the method with Newton's method. Such combined methods appear to be very efficient for large-scale problems with many simple constraints such as those often appearing in optimal control.
Consider the sequence obtained by applying the gradient projection method to the problem of minimizing a continuously differentiable functional over a closed convex subset of a real Hilbert space. In this paper we show that any cluster point of this sequence must be a constrained stationary point.
Projected gradient processes of the Goldstein–Levitin–Polyak type are considered for constrained minimization problems, $\min_\Omega F$, with $\Omega$ a convex set in a Hilbert space $X$ and $F : X \to \mathbb{R}^1$ a differentiable functional. Global and local convergence theorems are established for a large class of these processes, including those generated with implicit step length rules proposed by Bertsekas and Goldstein. In this analysis, traditional uniform strong positivity conditions on the Hessian $\nabla^2 F$ are replaced by weaker pseudoconvexity conditions and growth conditions on $F$. When $F$ has a unique minimizer $\xi$ in $\Omega$, convergence rates are shown to depend on how rapidly the function $\gamma(\sigma) = \inf \{ F(x) - F(\xi) \mid x \in \Omega,\ \| x - \xi \| \geq \sigma \}$ grows with increasing $\sigma > 0$. If $\gamma(\sigma) \geq B \sigma^\nu$ for some $B > 0$, the processes $\{ F_n \}$ in question converge to $\inf F$ like $O(n^{-\nu/(\nu-2)})$, linearly, superlinearly, or in finitely many steps according to whether $\nu > 2$, $\nu = 2$, $2 > \nu > 1$, or $\nu = 1$. The growth properties of $\gamma(\sigma)$ are in turn dependent upon the structure of $F$, $\Omega$, and the norm on $X$. Close connections also exist here with a hierarchy of extremal types constructed in a recent study of conditional gradient algorithms, and with long-standing notions of singularity for constrained optimal control problems and unconstrained minimization problems on $\mathbb{R}^n$.
On the Local and Superlinear Convergence of Quasi-Newton Methods. C. G. Broyden, J. E. Dennis, Jr., and Jorge J. Moré. IMA Journal of Applied Mathematics, Volume 12, Issue 3 (December 1973), pp. 223–245. https://doi.org/10.1093/imamat/12.3.223
In seeking to solve an unconstrained minimization problem, one often computes steps based on a quadratic approximation q to the objective function. A reasonable way to choose such steps is by minimizing q constrained to a neighborhood of the current iterate. This paper considers ellipsoidal neighborhoods and presents a new way to handle certain computational details when the Hessian of q is indefinite, paying particular attention to a special case which may then arise. The proposed step computing algorithm provides an attractive way to deal with negative curvature. Implementations of this algorithm have proved very satisfactory in the nonlinear least-squares solver NL2SOL.
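The computational core of such step computations is the relation between the multiplier of the ellipsoidal constraint and the step length: one seeks lam >= 0 with (B + lam I) p = -g and ||p|| equal to the neighborhood radius. The bisection sketch below illustrates that relation only; it ignores the indefinite "hard case" the paper is concerned with, and uses a dense eigenvalue call that a practical code would replace with Cholesky factorizations.

```python
import numpy as np

def trust_region_step(B, g, delta):
    """Find p with (B + lam*I) p = -g and ||p|| close to delta, by bisection.

    A sketch of the multiplier/step-length relation; not a safeguarded
    production algorithm, and the hard case is not handled.
    """
    eye = np.eye(len(g))

    def step(lam):
        return np.linalg.solve(B + lam * eye, -g)

    # smallest lam keeping B + lam*I positive definite
    lam0 = max(0.0, -np.linalg.eigvalsh(B)[0] + 1e-10)
    p = step(lam0)
    if np.linalg.norm(p) <= delta:
        return p                      # interior (Newton-like) step suffices
    lo, hi = lam0, max(lam0, 1.0)
    while np.linalg.norm(step(hi)) > delta:
        hi = 2.0 * hi + 1.0           # bracket the multiplier
    for _ in range(60):               # ||p(lam)|| decreases as lam grows
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(step(mid)) > delta:
            lo = mid
        else:
            hi = mid
    return step(hi)
```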
Algorithms based on trust regions have been shown to be robust methods for unconstrained optimization problems. All existing methods, either based on the dogleg strategy or Hebden–Moré iterations, require the solution of systems of linear equations. In large scale optimization this may be prohibitively expensive. It is shown in this paper that an approximate solution of the trust region problem may be found by the preconditioned conjugate gradient method. This may be regarded as a generalized dogleg technique where we asymptotically take the inexact quasi-Newton step. We also show that we have the same convergence properties as existing methods based on the dogleg strategy using an approximate Hessian.
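A minimal sketch of the truncated conjugate gradient idea (without preconditioning) is given below: run CG on the Newton system and stop early at the trust region boundary or at a direction of nonpositive curvature. Variable names are assumptions for the example.

```python
import numpy as np

def to_boundary(p, d, delta):
    """Largest tau >= 0 with ||p + tau*d|| = delta."""
    a, b, c = np.dot(d, d), 2.0 * np.dot(p, d), np.dot(p, p) - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p + tau * d

def truncated_cg(B, g, delta, tol=1e-8, max_iter=100):
    """Approximate the trust region step by conjugate gradients on B p = -g."""
    p = np.zeros_like(g)
    r = -g.copy()                     # residual of B p = -g
    d = r.copy()
    for _ in range(max_iter):
        Bd = B @ d
        dBd = np.dot(d, Bd)
        if dBd <= 0.0:                # nonpositive curvature: go to boundary
            return to_boundary(p, d, delta)
        alpha = np.dot(r, r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            return to_boundary(p, d, delta)
        r_next = r - alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = np.dot(r_next, r_next) / np.dot(r, r)
        d = r_next + beta * d
        p, r = p_next, r_next
    return p
```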
In [3], it is constructively demonstrated that if M is a square (not necessarily symmetric) matrix all of whose principal minors are positive, the quadratic program \[ \text{(A1)} \qquad \text{minimize} \quad z^T (Mz + q) \quad \text{subject to} \quad Mz + q \geq 0, \quad z \geq 0, \] has an optimal solution satisfying the equation \[ \text{(A2)} \qquad z^T (Mz + q) = 0. \] A different proof is offered here. The analysis is then extended to programs of the form \[ \text{(A3)} \qquad \text{minimize} \quad z^T W(z) \quad \text{subject to} \quad W(z) \geq 0, \quad z \geq 0, \] where W is a continuously differentiable mapping of real N-space into itself. The condition used to ensure the existence of an optimal solution to (A3) is positive boundedness of the Jacobian matrix of the mapping W. Definition: a differentiable mapping $W : R^N \to R^N$ has a positively bounded Jacobian matrix, $J_W(z)$, if there exists a real number $\delta$ such that $0 < \delta < 1$ and such that for every $z \in R^N$ each principal minor of $J_W(z)$ lies between $\delta$ and $\delta^{-1}$. Mappings of the form $W(z) = Mz + q$ have positively bounded Jacobian matrices if and only if M has positive principal minors, hence the programs (A1) are subsumed by (A3). Elementary examples show that it is not enough to assume in the general case, (A3), that $J_W(z)$ has positive principal minors for all z. A consequence of the main result is the Minimax Theorem: if $K(x, y)$ is a twice continuously differentiable real-valued function on $R^n \times R^m$, and if $[\nabla_x K(x, y), -\nabla_y K(x, y)]$ has a positively bounded Jacobian matrix, then \[ \max_{y \geq 0} \min_{x \geq 0} K(x, y) = \min_{x \geq 0} \max_{y \geq 0} K(x, y). \]
If $F : R^n \to R^n$ is a continuous mapping on a closed, convex cone K of $R^n$, it is shown that very weak restrictions on the growth of F (coercivity conditions) guarantee the existence of a solution to the nonlinear complementarity problem: for each z in $R^n$, find an $x^*$ in K such that $Fx^* - z$ belongs to the polar of K and $x^*$ is orthogonal to $Fx^* - z$. If K is $R^n$, then this problem reduces to finding conditions on F which guarantee that F maps $R^n$ onto $R^n$.
An approach to variable metric algorithms has been investigated in which the linear search sub-problem no longer becomes necessary. The property of quadratic termination has been replaced by one of monotonic convergence of the eigenvalues of the approximating matrix to the inverse Hessian. A convex class of updating formulae which possess this property has been established, and a strategy has been indicated for choosing a member of the class so as to keep the approximation away from both singularity and unboundedness. A FORTRAN program has been tested extensively with encouraging results.
This paper examines the numerical performances of two methods for large-scale optimization: a limited memory quasi-Newton method (L-BFGS), and a discrete truncated-Newton method (TN). Various ways of classifying test problems are discussed in order to better understand the types of problems that each algorithm solves well. The L-BFGS and TN methods are also compared with the Polak–Ribière conjugate gradient method.
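The defining computation of the L-BFGS method in this comparison is the two-loop recursion, which applies an implicit inverse Hessian approximation built from the most recent iterate and gradient differences. A minimal sketch, assuming those differences are stored oldest-first in s_list and y_list:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: apply the limited-memory BFGS inverse Hessian to g.

    Returns a search direction; with empty histories it reduces to -g.
    """
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest to oldest
        rho = 1.0 / np.dot(y, s)
        a = rho * np.dot(s, q)
        q -= a * y
        alphas.append((a, rho))
    if s_list:
        s, y = s_list[-1], y_list[-1]
        q *= np.dot(s, y) / np.dot(y, y)    # initial Hessian scaling
    for (a, rho), (s, y) in zip(reversed(alphas),
                                zip(s_list, y_list)):      # oldest to newest
        b = rho * np.dot(y, q)
        q += (a - b) * s
    return -q                                # quasi-Newton search direction
```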
Solving Sparse Symmetric Sets of Linear Equations by Preconditioned Conjugate Gradients. N. Munksgaard. ACM Transactions on Mathematical Software, Volume 6, Issue 2 (June 1980), pp. 206–219. https://doi.org/10.1145/355887.355893
A collection of subroutines and examples of their uses, as well as the underlying numerical methods, are described for generating orthogonal polynomials relative to arbitrary weight functions. The object of these routines is to produce the coefficients in the three-term recurrence relation satisfied by the orthogonal polynomials. Once these are known, additional data can be generated, such as zeros of orthogonal polynomials and Gauss-type quadrature rules, for which routines are also provided.
This paper discusses a generalization of the function transformation scheme used in Coleman, Shalloway, and Wu [Comput. Optim. Appl., 2 (1993), pp. 145–170; J. Global Optim., 4 (1994), pp. 171–185] and Shalloway [Global Optimization, C. Floudas and P. Pardalos, eds., Princeton University Press, 1992, pp. 433–477; Global Optim., 2 (1992), pp. 281–311] for global energy minimization applied to the molecular conformation problem. A mathematical theory for the method as a special continuation approach to global optimization is established. We show that the method can transform a nonlinear objective function into a class of gradually deformed, but "smoother" or "easier" functions. An optimization procedure can then be successively applied to the new functions to trace their solutions back to the original function. Two types of transformation are defined: isotropic and anisotropic. We show that both transformations can be applied to a large class of nonlinear partially separable functions, including energy functions for molecular conformation. Methods to compute the transformation for these functions are given.
We compare the Ostrowski efficiency of some methods for solving systems of nonlinear equations without explicitly using derivatives. The methods considered include the discrete Newton method, Shamanskii’s method, the two-point secant method, and Brown’s methods. We introduce a class of secant methods and a class of methods related to Brown’s methods, but using orthogonal rather than stabilized elementary transformations. The idea of these methods is to avoid finding a new approximation to the Jacobian matrix of the system at each step, and thus increase the efficiency. Local convergence theorems are proved, and the efficiencies of the methods are calculated. Numerical results are given, and some possible extensions are mentioned.
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
A modified Newton method for unconstrained minimization is presented and analyzed. The modification is based upon the model trust region approach. This report contains a thorough analysis of the locally constrained quadratic minimizations that arise as subproblems in the modified Newton iteration. Several promising alternatives are presented for solving these subproblems in ways that overcome certain theoretical difficulties exposed by this analysis. Very strong convergence results are presented concerning the minimization algorithm. In particular, the explicit use of second order information is justified by demonstrating that the iterates converge to a point which satisfies the second order necessary conditions for minimization. With the exception of very pathological cases this occurs whenever the algorithm is applied to problems with continuous second partial derivatives.
TNPACK—A truncated Newton minimization package for large-scale problems: I. Algorithm and usage. Tamar Schlick and Aaron Fogelson. ACM Transactions on Mathematical Software, Volume 18, Issue 1 (March 1992), pp. 46–70. https://doi.org/10.1145/128745.150973
Conn, Gould, and Toint [SIAM J. Numer. Anal., 25 (1988), pp. 433–460; 26 (1989), p. 764] have proposed a class of trust region algorithms for minimizing nonlinear functions whose variables are subjected to simple bound constraints. In their convergence analysis, they show that if the strict complementarity condition holds, the considered algorithms reduce to an unconstrained calculation after finitely many iterations, allowing fast asymptotic rates of convergence. This paper analyses the behaviour of these iterative processes in the case where the strict complementarity condition is violated. It is proved that inexact Newton methods lead to superlinear or quadratic rates of convergence, even if the set of active bounds at the solution is not entirely detected. Practical criteria for stopping the inner iterations of the algorithms are deduced, ensuring these rates of convergence.
Automatic differentiation (AD) is a technique that augments computer codes with statements for the computation of derivatives. The computational workhorse of AD-generated codes for first-order derivatives is the linear combination of vectors. For many large-scale problems, the vectors involved in this operation are inherently sparse. If the underlying function is a partially separable one (e.g., if its Hessian is sparse), many of the intermediate gradient vectors computed by AD will also be sparse, even though the final gradient is likely to be dense. For large Jacobian computations, every intermediate derivative vector is usually at least as sparse as the least sparse row of the final Jacobian. In this paper, we show that dynamic exploitation of the sparsity inherent in derivative computation can result in dramatic gains in runtime and memory savings. For a set of gradient problems exhibiting implicit sparsity, we report on the runtime and memory requirements of computing the gradients with the ADIFOR (Automatic Differentiation of FORtran) tool, both with and without employing the SparsLinC (Sparse Linear Combinations) library, and show that SparsLinC can reduce runtime and memory costs by orders of magnitude. We also compute sparse Jacobians using the SparsLinC-based approach, automatically detecting the sparsity structure of the Jacobian in the process, and show that these Jacobian results compare favorably with those of previous techniques that require a priori knowledge of the sparsity structure of the Jacobian.
On the Convergence of the Variable Metric Algorithm. M. J. D. Powell. IMA Journal of Applied Mathematics, Volume 7, Issue 1 (February 1971), pp. 21–36. https://doi.org/10.1093/imamat/7.1.21
Convergence Conditions for Ascent Methods. II: Some Corrections. Philip Wolfe. SIAM Review. https://doi.org/10.1137/1013035. Some corrections and amplifications are appended to Convergence conditions for ascent methods, SIAM Review, 11 (1969), pp. 226–235.
5-6 | 22 March 2017 Cross Ref Accelerated adaptive Perry conjugate gradient algorithms based on the self-scaling memoryless BFGS updateJournal of Computational and Applied Mathematics, Vol. 325 | 1 Dec 2017 Cross Ref Splitting methods for tensor equationsNumerical Linear Algebra with Applications, Vol. 24, No. 5 | 18 April 2017 Cross Ref Ranking and categorizing large-scale saline aquifer formations based on optimized CO 2 storage potentials and economic factorsInternational Journal of Greenhouse Gas Control, Vol. 65 | 1 Oct 2017 Cross Ref Smoothed $$\ell _1$$ ℓ 1 -regularization-based line search for sparse signal recoverySoft Computing, Vol. 21, No. 16 | 9 November 2016 Cross Ref Numeric modelling and risk assessment of pollutions in the Chinese Bohai SeaScience China Earth Sciences, Vol. 60, No. 8 | 17 July 2017 Cross Ref Categorization of Norwegian Continental Shelf Formations in Terms of Geological CO2 Storage PotentialsEnergy Procedia, Vol. 114 | 1 Jul 2017 Cross Ref A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq DataJournal of Computational Biology, Vol. 24, No. 7 | 1 Jul 2017 Cross Ref Eigenvalues versus singular values study in conjugate gradient algorithms for large-scale unconstrained optimizationOptimization Methods and Software, Vol. 32, No. 3 | 28 September 2016 Cross Ref A new spectral conjugate gradient method for large-scale unconstrained optimizationOptimization Methods and Software, Vol. 32, No. 3 | 26 September 2016 Cross Ref An optimal control framework for dynamic induction control of wind farms and their interaction with the atmospheric boundary layerPhilosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 375, No. 2091 | 6 March 2017 Cross Ref Computational method for optimal machine scheduling problem with maintenance and productionInternational Journal of Production Research, Vol. 55, No. 6 | 21 October 2016 Cross Ref Introduction to Nonlinear ProgrammingHybrid Systems, Optimal Control and Hybrid Vehicles | 3 February 2017 Cross Ref Simple Bound Constraints OptimizationContinuous Nonlinear Optimization for Engineering Applications in GAMS Technology | 9 November 2017 Cross Ref Sparse-Prior-Based Projection Distance Optimization Method for Joint CT-MRI ReconstructionIEEE Access, Vol. 5 | 1 Jan 2017 Cross Ref ESTIMATION OF NATURAL VENTILATION PARAMETERS BY A BAYESIAN APPROACHJournal of Environmental Engineering (Transactions of AIJ), Vol. 82, No. 734 | 1 Jan 2017 Cross Ref A New Search Direction for Broyden’s Family Method in Solving Unconstrained Optimization ProblemsRecent Advances on Soft Computing and Data Mining | 1 Jan 2017 Cross Ref An efficient modification of the Hestenes-Stiefel nonlinear conjugate gradient method with restart propertyJournal of Inequalities and Applications, Vol. 2016, No. 1 | 6 April 2016 Cross Ref Line search fixed point algorithms based on nonlinear conjugate gradient directions: application to constrained smooth convex optimizationFixed Point Theory and Applications, Vol. 2016, No. 1 | 8 July 2016 Cross Ref Some modified conjugate gradient methods for unconstrained optimizationJournal of Computational and Applied Mathematics, Vol. 305 | 1 Oct 2016 Cross Ref On the global convergence rate of the gradient descent method for functions with Hölder continuous gradientsOptimization Letters, Vol. 10, No. 
6 | 26 August 2015 Cross Ref Constrained optimal control problems of nonlinear systems based on improved Newton algorithms2016 3rd International Conference on Informative and Cybernetics for Computational Social Systems (ICCSS) | 1 Aug 2016 Cross Ref Quantitative reconstruction of thermal and dynamic characteristics of lava flow from surface thermal measurementsGeophysical Journal International, Vol. 205, No. 3 | 3 April 2016 Cross Ref Modified Newton-Raphson GRAPE methods for optimal control of spin systemsThe Journal of Chemical Physics, Vol. 144, No. 20 | 28 May 2016 Cross Ref New data assimilation system DNDAS for high-dimensional modelsChinese Physics B, Vol. 25, No. 5 | 6 May 2016 Cross Ref A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous InformationIEEE Transactions on Cybernetics, Vol. 46, No. 4 | 1 Apr 2016 Cross Ref Variable metric random pursuitMathematical Programming, Vol. 156, No. 1-2 | 24 May 2015 Cross Ref Amplitude variation with angle inversion using the exact Zoeppritz equations — Theory and methodologyGEOPHYSICS, Vol. 81, No. 2 | 1 Mar 2016 Cross Ref Projection onto a Polyhedron that Exploits SparsityWilliam W. Hager and Hongchao ZhangSIAM Journal on Optimization, Vol. 26, No. 3 | 1 September 2016AbstractPDF (439 KB)Computational Approaches in Large-Scale Unconstrained OptimizationBig Data Optimization: Recent Developments and Challenges | 27 May 2016 Cross Ref A modification of classical conjugate gradient method using strong Wolfe line search1 Jan 2016 Cross Ref Inversion study on pollutant discharges in the Bohai Sea with the adjoint methodJournal of Ocean University of China, Vol. 14, No. 6 | 7 November 2015 Cross Ref Phase equilibrium calculations with quasi-Newton methodsFluid Phase Equilibria, Vol. 406 | 1 Nov 2015 Cross Ref A globally and quadratically convergent algorithm with efficient implementation for unconstrained optimizationComputational and Applied Mathematics, Vol. 34, No. 3 | 12 August 2014 Cross Ref A sufficient descent Liu–Storey conjugate gradient method and its global convergenceOptimization, Vol. 64, No. 9 | 10 March 2014 Cross Ref Global Trajectory Optimization on Multilane Roads2015 IEEE 18th International Conference on Intelligent Transportation Systems | 1 Sep 2015 Cross Ref Convergence of Rprop and variantsNeurocomputing, Vol. 159 | 1 Jul 2015 Cross Ref Interpretation of disturbed data in thermal response tests using the infinite line source model and numerical parameter estimation methodApplied Energy, Vol. 148 | 1 Jun 2015 Cross Ref A three-term conjugate gradient algorithm for large-scale unconstrained optimization problemsApplied Numerical Mathematics, Vol. 92 | 1 Jun 2015 Cross Ref A new three-term conjugate gradient algorithm for unconstrained optimizationNumerical Algorithms, Vol. 68, No. 2 | 12 April 2014 Cross Ref On Matrix Nearness Problems: Distance to DelocalizationVladimir R. Kostić, Agnieszka Miȩdlar, and Jeroen J. StolwijkSIAM Journal on Matrix Analysis and Applications, Vol. 36, No. 2 | 16 April 2015AbstractPDF (530 KB)A new reliability analysis method based on the conjugate gradient directionStructural and Multidisciplinary Optimization, Vol. 51, No. 1 | 5 June 2014 Cross Ref A review on accelerating scientific computations using the Conjugate Gradient method2015 International Conference on Electronic Design, Computer Networks & Automated Verification (EDCAV) | 1 Jan 2015 Cross Ref ISAR Cross-Range Scaling by Using Sharpness MaximizationIEEE Geoscience and Remote Sensing Letters, Vol. 12, No. 
1 | 1 Jan 2015 Cross Ref An Efficient Hybrid Conjugate Gradient Method with the Strong Wolfe-Powell Line SearchMathematical Problems in Engineering, Vol. 2015 | 1 Jan 2015 Cross Ref A New Conjugate Gradient Algorithm with Sufficient Descent Property for Unconstrained OptimizationMathematical Problems in Engineering, Vol. 2015 | 1 Jan 2015 Cross Ref Maximum Likelihood Detection of Nonlinearly Distorted OFDM Signal2015 IEEE Global Communications Conference (GLOBECOM) | 1 Dec 2014 Cross Ref Total variation minimization-based multimodality medical image reconstruction11 Sep 2014 Cross Ref A subclass of generating set search with convergence to second-order stationary pointsOptimization Methods and Software, Vol. 29, No. 5 | 27 June 2013 Cross Ref Constrained optimal control of switched systems based on modified BFGS algorithm and filled function methodInternational Journal of Computer Mathematics, Vol. 91, No. 8 | 16 January 2014 Cross Ref Spectral method and its application to the conjugate gradient methodApplied Mathematics and Computation, Vol. 240 | 1 Aug 2014 Cross Ref AN ADAPTIVE CONJUGACY CONDITION AND RELATED NONLINEAR CONJUGATE GRADIENT METHODSInternational Journal of Computational Methods, Vol. 11, No. 04 | 19 August 2014 Cross Ref Some sufficient descent conjugate gradient methods and their global convergenceComputational and Applied Mathematics, Vol. 33, No. 2 | 10 August 2013 Cross Ref Two hybrid nonlinear conjugate gradient methods based on a modified secant equationOptimization, Vol. 63, No. 7 | 13 June 2012 Cross Ref A descent family of Dai–Liao conjugate gradient methodsOptimization Methods and Software, Vol. 29, No. 3 | 9 September 2013 Cross Ref Two modified scaled nonlinear conjugate gradient methodsJournal of Computational and Applied Mathematics, Vol. 261 | 1 May 2014 Cross Ref An accelerated subspace minimization three-term conjugate gradient algorithm for unconstrained optimizationNumerical Algorithms, Vol. 65, No. 4 | 19 May 2013 Cross Ref Oblique projections, Broyden restricted class and limited-memory quasi-Newton methodsOptimization, Vol. 63, No. 1 | 8 May 2013 Cross Ref The Hybrid BFGS-CG Method in Solving Unconstrained Optimization ProblemsAbstract and Applied Analysis, Vol. 2014 | 1 Jan 2014 Cross Ref Several Guaranteed Descent Conjugate Gradient Methods for Unconstrained OptimizationJournal of Applied Mathematics, Vol. 2014 | 1 Jan 2014 Cross Ref Conjugate Gradient Trained Neural Network for Intelligent Sensing of Manhole Gases to Avoid Human FatalityAdvances in Secure Computing, Internet Services, and Applications | 1 Jan 2014 Cross Ref Convergence and Stability of Line Search Methods for Unconstrained OptimizationActa Applicandae Mathematicae, Vol. 127, No. 1 | 10 January 2013 Cross Ref BibliographyComputational Methods for Data Evaluation and Assimilation | 25 July 2013 Cross Ref P-norm regularized SVM classifier by non-convex conjugate gradient algorithm2013 25th Chinese Control and Decision Conference (CCDC) | 1 May 2013 Cross Ref On the sufficient descent property of the Shanno’s conjugate gradient methodOptimization Letters, Vol. 7, No. 4 | 8 March 2012 Cross Ref A simple three-term conjugate gradient algorithm for unconstrained optimizationJournal of Computational and Applied Mathematics, Vol. 241 | 1 Mar 2013 Cross Ref On three-term conjugate gradient algorithms for unconstrained optimizationApplied Mathematics and Computation, Vol. 219, No. 
11 | 1 Feb 2013 Cross Ref A Nonlinear Conjugate Gradient Algorithm with an Optimal Property and an Improved Wolfe Line SearchYu-Hong Dai and Cai-Xia KouSIAM Journal on Optimization, Vol. 23, No. 1 | 21 February 2013AbstractPDF (397 KB)The Limited Memory Conjugate Gradient MethodWilliam W. Hager and Hongchao ZhangSIAM Journal on Optimization, Vol. 23, No. 4 | 5 November 2013AbstractPDF (490 KB)Trading Off Subtask Dispersion and Response Time in Split-Merge SystemsAnalytical and Stochastic Modeling Techniques and Applications | 1 Jan 2013 Cross Ref Estimation of Open Boundary Conditions for an Internal Tidal Model with Adjoint Method: A Comparative Study on Optimization MethodsMathematical Problems in Engineering, Vol. 2013 | 1 Jan 2013 Cross Ref Full-waveform Velocity Inversion Based on the Acoustic Wave EquationAmerican Journal of Computational Mathematics, Vol. 03, No. 03 | 1 Jan 2013 Cross Ref An accelerated conjugate gradient algorithm with guaranteed descent and conjugacy conditions for unconstrained optimizationOptimization Methods and Software, Vol. 27, No. 4-5 | 1 Oct 2012 Cross Ref A Quadratic Hybridization of Polak–Ribière–Polyak and Fletcher–Reeves Conjugate Gradient MethodsJournal of Optimization Theory and Applications, Vol. 154, No. 3 | 13 March 2012 Cross Ref A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by AndreiComputational Optimization and Applications, Vol. 52, No. 2 | 21 May 2011 Cross Ref A Modified DY Conjugate Gradient Algorithm with Sufficient Descent2012 Fifth International Joint Conference on Computational Sciences and Optimization | 1 Jun 2012 Cross Ref A new general form of conjugate gradient methods with guaranteed descent and strong global convergence propertiesNumerical Algorithms, Vol. 60, No. 1 | 12 November 2011 Cross Ref A sufficient descent LS conjugate gradient method for unconstrained optimization problemsApplied Mathematics and Computation, Vol. 218, No. 5 | 1 Nov 2011 Cross Ref ACGHSIM: A Simulation Tool for Accelerated Conjugate Hessian Gradient Optimization Algorithm2011 Third International Conference on Computational Intelligence, Modelling & Simulation | 1 Sep 2011 Cross Ref STUDYING THE BASIN OF CONVERGENCE OF METHODS FOR COMPUTING PERIODIC ORBITSInternational Journal of Bifurcation and Chaos, Vol. 21, No. 08 | 20 November 2011 Cross Ref Multi-objective reinforcement learning algorithm and its improved convergency method2011 6th IEEE Conference on Industrial Electronics and Applications | 1 Ju
article Numerical Solution of Nonlinear Equations. Authors: Jorge J. Moré (Applied Mathematics Division, Argonne National Laboratory, 9700 South Cass Ave., Argonne, IL) and Michel Y. Cosnard (Mathématiques Appliquées-Informatique, Université Scientifique et Médicale de Grenoble, Boîte Postale 53, 38041 Grenoble Cedex, France). ACM Transactions on Mathematical Software, Volume 5, Issue 1, March 1979, pp. 64–85. https://doi.org/10.1145/355815.355820. Published: 1 March 1979. Total citations: 87; total downloads: 2,098.
A procedure is derived for updating a symmetric matrix subject to a linear equation while retaining any sparsity present in the original matrix. The main feature of this procedure is the reduction of the problem to the solution of an n-dimensional sparse system of linear equations. The matrix of this system is shown to be symmetric and positive definite. The method depends on the Frobenius matrix norm. Comments are made on the difficulties of extending the technique to more general norms, the main points being illustrated by a numerical example.
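The least-change construction the abstract describes can be written out explicitly. As a sketch (the symbols B, E, s, y, and the sparsity pattern S are notation assumed here for illustration, not taken from the abstract): given a symmetric matrix B with sparsity pattern S and a pair (s, y) that the updated matrix must satisfy, the correction E solves

    % least-change sparse symmetric update (assumed notation)
    \begin{aligned}
    \min_{E}\ & \|E\|_F \\
    \text{subject to}\ & (B + E)\,s = y, \qquad
    E = E^{\mathsf{T}}, \qquad
    E_{ij} = 0 \ \ \text{for } (i,j) \notin S .
    \end{aligned}

Eliminating E through the first-order optimality conditions leaves an n-dimensional sparse linear system for the Lagrange multipliers; its coefficient matrix is the symmetric positive definite matrix referred to in the abstract.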
Object-oriented programming is becoming a popular way of developing new software. The promise of this new programming paradigm is that software developed through these concepts will be more reliable and … Object-oriented programming is becoming a popular way of developing new software. The promise of this new programming paradigm is that software developed through these concepts will be more reliable and easier to re-use, thereby decreasing the time and cost of the software development cycle. This report describes the development of a C++ class library for nonlinear optimization. Using object-oriented techniques, this new library was designed so that the interface is easy to use while being general enough so that new optimization algorithms can be added easily to the existing framework.
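To make the design concrete, the following is a minimal sketch of the kind of interface such a class library might expose. All names here (Objective, Optimizer, GradientDescent, Quadratic) are hypothetical illustrations of the object-oriented pattern the report describes, not the actual API of the library:

    // Minimal sketch of an object-oriented optimization interface
    // (hypothetical names; not the actual API of the library above).
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // A problem supplies its objective value and gradient through a common
    // interface, so algorithms never depend on the concrete function.
    class Objective {
    public:
        virtual ~Objective() = default;
        virtual double value(const std::vector<double>& x) const = 0;
        virtual std::vector<double> gradient(const std::vector<double>& x) const = 0;
    };

    // An algorithm is likewise an object; new methods subclass Optimizer
    // and plug into existing driver code unchanged.
    class Optimizer {
    public:
        virtual ~Optimizer() = default;
        virtual std::vector<double> minimize(const Objective& f,
                                             std::vector<double> x) const = 0;
    };

    // Fixed-step gradient descent: the simplest concrete algorithm,
    // included only to show how a subclass fills in the framework.
    class GradientDescent : public Optimizer {
    public:
        GradientDescent(double step, int maxIter) : step_(step), maxIter_(maxIter) {}
        std::vector<double> minimize(const Objective& f,
                                     std::vector<double> x) const override {
            for (int k = 0; k < maxIter_; ++k) {
                std::vector<double> g = f.gradient(x);
                double norm = 0.0;
                for (double gi : g) norm += gi * gi;
                if (std::sqrt(norm) < 1e-8) break;   // stationary-point test
                for (std::size_t i = 0; i < x.size(); ++i) x[i] -= step_ * g[i];
            }
            return x;
        }
    private:
        double step_;
        int maxIter_;
    };

    // Example problem: f(x) = sum_i (x_i - 1)^2, minimized at x = (1, ..., 1).
    class Quadratic : public Objective {
    public:
        double value(const std::vector<double>& x) const override {
            double s = 0.0;
            for (double xi : x) s += (xi - 1.0) * (xi - 1.0);
            return s;
        }
        std::vector<double> gradient(const std::vector<double>& x) const override {
            std::vector<double> g(x.size());
            for (std::size_t i = 0; i < x.size(); ++i) g[i] = 2.0 * (x[i] - 1.0);
            return g;
        }
    };

    int main() {
        Quadratic f;
        GradientDescent solver(0.25, 100);
        std::vector<double> x = solver.minimize(f, {0.0, 0.0, 0.0});
        std::printf("f(x*) = %g\n", f.value(x));
        return 0;
    }

The point of the pattern is that the driver code in main depends only on the two abstract base classes, so a new algorithm, say a quasi-Newton subclass, slots into the framework without touching existing problems or drivers.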