Quasi-Newton Methods, Motivation and Theory

Type: Article
Publication Date: 1977-01-01
Citations: 1500
DOI: https://doi.org/10.1137/1019005

Abstract

This paper is an attempt to motivate and justify quasi-Newton methods as useful modifications of Newton’s method for general and gradient nonlinear systems of equations. References are given to ample numerical justification; here we give an overview of many of the important theoretical results and each is accompanied by sufficient discussion to make the results and hence the methods plausible.

Locations

  • SIAM Review
  • eCommons (Cornell University)
  • HAL (Le Centre pour la Communication Scientifique Directe)

Related Papers

Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms tha...
A unified derivation is presented of the quasi-Newton methods for solving systems of nonlinear equations. The general algorithm contains, as special cases, all of the previously proposed quasi-Newton methods.
It was around 1660 that Newton discovered the method for solving nonlinear equations that bears his name. Shortly thereafter – around 1665 – he also developed the secant method for solving nonlinear equations. Since then these methods have become part of the folklore in numerical analysis (see Exercises 12.1 and 12.2). In addition to solving nonlinear equations, these methods can also be applied to the problem of minimizing a nonlinear function. In this chapter we provide an overview of the classical Newton's method and many of its modern relatives, called quasi-Newton methods, for unconstrained minimization. The major advantage of Newton's method is its quadratic convergence (Exercise 12.3), but finding the next descent direction requires the solution of a linear system, which is often a bottleneck. Quasi-Newton methods are designed to preserve the good convergence properties of Newton's method while providing considerable relief from this computational bottleneck. Quasi-Newton methods are extensions of the secant method. Davidon was the first to revive modern interest in quasi-Newton methods in 1959, but his work remained unpublished until 1991. However, Fletcher and Powell published Davidon's ideas in 1963 and helped to revive this line of approach to designing efficient minimization algorithms.
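
To make the contrast concrete, here is a minimal sketch of Newton's method next to the secant method for a single nonlinear equation f(x) = 0. The test function and starting points are arbitrary illustrative choices, not taken from any of the works above; the point is only that the secant iteration replaces the derivative with a finite-difference slope built from the two most recent iterates.

    # Minimal sketch: Newton vs. secant iteration for f(x) = 0.
    # Test function and starting points are arbitrary illustrative choices.

    def newton(f, fprime, x, tol=1e-12, max_iter=50):
        for _ in range(max_iter):
            step = f(x) / fprime(x)        # needs the derivative at every iterate
            x -= step
            if abs(step) < tol:
                break
        return x

    def secant(f, x0, x1, tol=1e-12, max_iter=50):
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            slope = (f1 - f0) / (x1 - x0)  # finite-difference stand-in for f'
            x2 = x1 - f1 / slope           # same step form as Newton, derivative-free
            x0, f0, x1, f1 = x1, f1, x2, f(x2)
            if abs(x1 - x0) < tol:
                break
        return x1

    f = lambda x: x**3 - 2.0               # simple test: root at 2 ** (1/3)
    print(newton(f, lambda x: 3 * x**2, 1.0), secant(f, 1.0, 1.5))
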
This work presents a new approach to the modeling and solution of load-flow problems in electric power systems. The proposed model is formed simultaneously by the set of nonlinear equations that represent the load constraints of the problem and by complementarity constraints associated with the operating constraints of the network, which provide the implicit control…
In this paper, we focus on quasi-Newton methods to solve constrained generalized equations. As is well known, this problem was first studied by Robinson and Josephy in the 1970s. Since then, it has been extensively studied by many other researchers, especially Dontchev and Rockafellar. Here, we propose two Broyden-type quasi-Newton approaches to dealing with constrained generalized equations, one that requires the exact resolution of the subproblems, and another that allows inexactness, which is closer to numerical reality. In both cases, projections onto the feasible set are also inexact. The local convergence of general quasi-Newton approaches is established under a bounded deterioration of the update matrix and Lipschitz continuity hypotheses. In particular, we prove that a general scheme converges linearly to the solution under suitable assumptions. Furthermore, when a Broyden-type update rule is used, the convergence is superlinear. Some numerical examples illustrate the applicability of the proposed methods.
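
For readers unfamiliar with the update rule named here, the following is a minimal sketch of Broyden's classical rank-one ("good") update for an unconstrained square system F(x) = 0 — background for, not the constrained generalized-equation method of, the paper above. The test problem is an arbitrary illustrative choice.

    import numpy as np

    # Minimal sketch of Broyden's rank-one update for F(x) = 0.
    # B approximates the Jacobian; the update enforces the secant
    # condition B_new @ s = y with a minimal change to B.

    def broyden(F, x, B, tol=1e-10, max_iter=100):
        Fx = F(x)
        for _ in range(max_iter):
            s = np.linalg.solve(B, -Fx)            # quasi-Newton step
            x_new = x + s
            F_new = F(x_new)
            y = F_new - Fx
            B += np.outer(y - B @ s, s) / (s @ s)  # rank-one secant update
            x, Fx = x_new, F_new
            if np.linalg.norm(Fx) < tol:
                break
        return x

    # Example: intersect the unit circle with the line x = y.
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
    print(broyden(F, np.array([1.0, 0.5]), np.eye(2)))
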
In this paper, we present the compact representation for matrices belonging to the Broyden class of quasi-Newton updates, where each update may be either rank one or rank two. This work extends previous results solely for the restricted Broyden class of rank-two updates. In this article, it is not assumed that the same Broyden update is used in each iteration; rather, different members of the Broyden class may be used in each iteration. Numerical experiments suggest that a practical implementation of the compact representation is able to accurately represent matrices belonging to the Broyden class of updates. Furthermore, we demonstrate how to compute the compact representation for the inverse of these matrices and a practical algorithm for solving linear systems with members of the Broyden class of updates. We demonstrate through numerical experiments that the proposed linear solver is able to efficiently solve linear systems with members of the Broyden class of matrices with high accuracy. As an immediate consequence of this work, it is now possible to efficiently compute the eigenvalues of any limited-memory member of the Broyden class of matrices, allowing for the computation of condition numbers and the ability to perform sensitivity analysis.
Sequential residual methods try to solve nonlinear systems of equations \(F(x)=0\) by iteratively updating the current approximate solution along a residual-related direction. Therefore, memory requirements are minimal and, consequently, these methods are attractive for solving large-scale nonlinear systems. However, the convergence of these algorithms may be slow in critical cases; therefore, acceleration procedures are welcome. In this paper, we suggest employing a variation of the sequential secant method in order to accelerate sequential residual methods. The performance of the resulting algorithm is illustrated by applying it to the solution of very large problems coming from the discretization of partial differential equations. Keywords: nonlinear systems of equations; sequential residual methods; acceleration; large-scale problems. MSC codes: 65H10, 65K05, 90C53.
We study how to use the BFGS quasi-Newton matrices to precondition minimization methods for problems where the storage is critical. We give an update formula which generates matrices using information from the last m iterations, where m is any number supplied by the user. The quasi-Newton matrix is updated at every iteration by dropping the oldest information and replacing it by the newest information. It is shown that the matrices generated have some desirable properties. The resulting algorithms are tested numerically and compared with several well-known methods.
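
The standard way to realize such a limited-memory scheme in code is the two-loop recursion, which applies the implicitly stored inverse-Hessian approximation to a gradient without ever forming a matrix. The sketch below is the generic textbook recursion, offered only as an illustration of the idea; it is not the paper's original formulation.

    import numpy as np

    # Minimal sketch of the L-BFGS two-loop recursion: given the last m
    # curvature pairs (s_i, y_i), compute H @ g for the implicitly stored
    # inverse-Hessian approximation H. Generic textbook form.

    def two_loop(g, s_list, y_list):
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(s_list), reversed(y_list)):
            alpha = (s @ q) / (y @ s)
            q -= alpha * y
            alphas.append(alpha)
        if s_list:                         # scale by a standard initial Hessian guess
            s, y = s_list[-1], y_list[-1]
            q *= (s @ y) / (y @ y)
        for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):
            beta = (y @ q) / (y @ s)
            q += (alpha - beta) * s
        return q                           # the search direction is then -q
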
Over the last two decades, it has been observed that using the gradient vector as a search direction in large-scale optimization may lead to efficient algorithms. The effectiveness relies on choosing the step lengths according to novel ideas that are related to the spectrum of the underlying local Hessian rather than related to the standard decrease in the objective function. A review of these so-called spectral projected gradient methods for convex constrained optimization is presented. To illustrate the performance of these low-cost schemes, an optimization problem on the set of positive definite matrices is described.
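
The step lengths alluded to here are usually the Barzilai–Borwein (spectral) choices, which fit a scalar multiple of the identity to the most recent curvature pair; a minimal generic sketch, not tied to any particular paper in this list:

    import numpy as np

    # Minimal sketch of the Barzilai-Borwein (spectral) step length: the
    # step solves a one-dimensional secant condition, so it reflects the
    # spectrum of the local Hessian rather than the objective decrease.

    def bb_step(x_prev, x, g_prev, g):
        s = x - x_prev                 # change in iterates
        y = g - g_prev                 # change in gradients
        return (s @ s) / (s @ y)       # BB1; the BB2 variant is (s @ y) / (y @ y)
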
A Bregman Forward-Backward Linesearch Algorithm for Nonconvex Composite Optimization: Superlinear Convergence to Nonisolated Local Minima
The limited-memory BFGS method (L-BFGS) has become the workhorse optimization strategy for many large-scale nonlinear optimization problems. A major difficulty with L-BFGS is that, although the memory size $m$ can have a significant effect on performance, it is difficult to know what memory size will work best. Importantly, a larger $m$ does not necessarily improve performance, but may, in fact, degrade it. There is no guidance in the literature on how to choose $m$. In this paper, we briefly review L-BFGS and then suggest two computationally efficient ways to measure the effectiveness of various memory sizes, thus allowing us to adaptively choose a different memory size at each iteration. Our overall numerical results illustrate that our approach improves the performance of the L-BFGS method. These results also indicate some further directions for research.
This paper focuses on the problem of convex constraint nonlinear equations involving monotone operators in Euclidean space. A Fletcher and Reeves type derivative-free conjugate gradient method is proposed. The proposed method is designed to ensure the descent property of the search direction at each iteration. Furthermore, the convergence of the proposed method is proved under the assumption that the underlying operator is monotone and Lipschitz continuous. The numerical results show that the method is efficient for the given test problems.
We present a quasi-Newton interior-point method appropriate for optimization problems with pointwise inequality constraints in Hilbert function spaces. Among others, our methodology applies to optimization problems constrained by partial differential equations (PDEs) that are posed in a reduced-space formulation and have bounds or inequality constraints on the optimized parameter function. We first introduce the formalization of an infinite-dimensional quasi-Newton interior-point algorithm using secant BFGS updates and then proceed to derive a discretized interior-point method capable of working with a wide range of finite element discretization schemes. We also discuss and address mathematical and software interface issues that are pervasive when existing off-the-shelf PDE solvers are to be used with off-the-shelf nonlinear programming solvers. Finally, we elaborate on the numerical and parallel computing strengths and limitations of the proposed methodology on several classes of PDE-constrained problems.
This paper describes implementations of eight algorithms of Newton and quasi-Newton type for solving large sparse systems of nonlinear equations. For linear algebra calculations, a symbolic manipulation is used, as well as a static data structure introduced recently by George and Ng, which allows a partial pivoting strategy for solving linear systems. A numerical comparison of the implemented methods is presented.
The non-linear elasticity equations of heart mechanics are solved while emulating the effects of a propagating activation wave. The dynamics of a 1 cm³ slab of active cardiac tissue was simulated as the electrical wave traversed the muscular heart wall transmurally. The regular Newton (Newton–Raphson) method was compared to two modified Newton approaches, and also to a third approach that delayed the update of only some selected Jacobian elements. In addition, the impact of changing the time step (0.01, 0.1 and 1 ms) and the relative non-linear convergence tolerance (10⁻⁴, 10⁻³ and 10⁻²) was investigated. Updating the Jacobian only when slow convergence occurred was by far the most efficient approach, giving time savings of 83–96%. For each of the four methods, CPU times were reduced by 48–90% when the time step was increased by a factor of 10. Increasing the convergence tolerance by the same factor gave time savings of 3–71%. Different combinations of activation wave speed, stress rate and bulk modulus revealed that the fastest method became relatively even faster as the stress rate and bulk modulus were decreased, while the activation speed had negligible influence in this respect.
An algorithm for solving linear equations is stable on the class of nonsingular symmetric matrices or on the class of symmetric positive definite matrices if the computed solution solves a system that is near the original problem. Here it is shown that any stable algorithm is also strongly stable on the same matrix class if the computed solution solves a nearby problem that is also symmetric or symmetric positive definite.
The problem of adjusting the second-order term in secant methods for nonlinear least squares problems with zero residual is addressed. The author uses the framework of structured secant methods to derive and investigate a new way to resize the approximation of the second-order term using some exact information. The resulting method is a self-adjusting structured secant method for nonlinear least squares problems, yielding a q-quadratic convergence rate for zero-residual and a q-superlinear convergence rate for nonzero-residual problems.
This paper develops the fixed "almost" least-change secant methods for solving nonlinear problems. The presented results generalize and extend some results given by Dennis and Walker [SIAM J. Numer. Anal., 18 (1981), pp. 949–987]. Among other things, multilevel update algorithms are described.
Many problems in science and engineering can be formulated as nonlinear least-squares (NLS) problems. Thus, the need for efficient algorithms to solve these problems cannot be overemphasized. In that sense, we introduce a generalized structured-based diagonal Hessian algorithm for solving NLS problems. The formulation associated with this algorithm is essentially a generalization of a similar result in Yahaya et al. (Journal of Computational and Applied Mathematics, pp. 113582, 2021). However, in this work, the structured diagonal Hessian update is derived under a weighted Frobenius norm; this allows other choices of the weighting matrix analogous to the Davidon-Fletcher-Powell (DFP) method. Moreover, to theoretically fill the gap in Yahaya et al. (Journal of Computational and Applied Mathematics, pp. 113582, 2021), we have shown that the proposed algorithm is R-linearly convergent under some standard conditions devoid of any safeguarding strategy. Furthermore, we experimentally tested the proposed scheme on some standard benchmark problems in the literature. Finally, we applied this algorithm to solve a robotic motion control problem with three degrees of freedom (3DOF).
The periodic steady-state solution of nonlinear dynamic systems can be computed by the so-called quick steady-state methods, e.g., Newton iteration, optimization or extrapolation. These methods compute a vector $z(t_0)$ of the periodic steady-state solution by requiring the solution to repeat itself periodically, $z(t_0 + T) = z(t_0)$, where $T$ is the period. It is essential for the performance of the quick steady-state methods that the solution of the differential equations at $t_0 + T$ is a smooth function of the initial value at $t_0$. This paper gives sufficient conditions for the numerical approximation of the solution at $t_0 + T$ to be differentiable with Lipschitz continuous derivative with respect to the initial vector at $t_0$. The numerical approximation is obtained by the backward differentiation formulas, and two cases are considered. First the nonlinear algebraic corrector equations are solved exactly, and this case deals both with continuously differentiable problems and piecewise-linear problems. Secondly the algebraic equations are solved to an accuracy consistent with the discretization error of the backward differentiation formulas. Finally, the theorems are illustrated by examples.
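
A minimal sketch of the quick steady-state idea described above: find an initial state z such that integrating the system over one period T reproduces z. Here integrate_period stands for a hypothetical user-supplied routine (e.g. a BDF integrator advancing the state by one period), and the Jacobian of the period map is approximated by forward differences; both are illustrative assumptions, not the paper's construction.

    import numpy as np

    # Minimal shooting sketch for the periodicity condition z(t0 + T) = z(t0).
    # integrate_period(z) is a hypothetical routine returning the state after
    # one period T; Newton's correction uses a finite-difference Jacobian.

    def shooting(integrate_period, z, tol=1e-8, max_iter=20, eps=1e-6):
        n = len(z)
        for _ in range(max_iter):
            phi = integrate_period(z)
            r = phi - z                        # periodicity residual
            if np.linalg.norm(r) < tol:
                break
            J = np.empty((n, n))
            for j in range(n):                 # d(phi)/dz by forward differences
                e = np.zeros(n)
                e[j] = eps
                J[:, j] = (integrate_period(z + e) - phi) / eps
            z = z + np.linalg.solve(J - np.eye(n), -r)
        return z
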
Article published in IAENG International Journal of Applied Mathematics, vol. 47(3): 352-360
When solving ill-conditioned nonlinear programs by descent algorithms, the descent requirement may induce the step lengths to become very small, thus resulting in very poor performance. Recently, suggestions have been made to circumvent this problem, among which is a class of approaches in which the objective value may be allowed to increase temporarily. Grippo et al. [GLL91] introduce nonmonotone line searches in the class of deflected gradient methods in unconstrained differentiable optimization; this technique allows for longer steps (typically of unit length) to be taken, and is successfully applied to some ill-conditioned problems. This paper extends their nonmonotone approach and convergence results to the large class of cost approximation algorithms of Patriksson [Pat93b], and to optimization problems with both convex constraints and nondifferentiable objective functions.
The solution of the equations governing solid mechanics is often obtained via Newton's method. This approach can be problematic if the determination, storage, or solution cost associated with the Jacobian is high. These challenges are magnified for multiphysics applications with many coupled variables. Jacobian-free Newton-Krylov (JFNK) methods avoid many of the difficulties associated with the Jacobian by using a finite difference approximation. BISON is a parallel, object-oriented, nonlinear solid mechanics and multiphysics application that leverages JFNK methods. We overview JFNK, outline the capabilities of BISON, and demonstrate the effectiveness of JFNK for solid mechanics and solid mechanics coupled to other PDEs using a series of demonstration problems.
Algorithm 611: Subroutines for Unconstrained Minimization Using a Model/Trust-Region Approach. David M. Gay (Bell Laboratories, Murray Hill, NJ). ACM Transactions on Mathematical Software, vol. 9(4), Dec. 1983, pp. 503–524. https://doi.org/10.1145/356056.356066
We consider the problem of minimizing a smooth function of n variables subject to m smooth equality constraints. We begin by describing various approaches to Newton's method for this problem, with emphasis on the recent work of Goodman. This leads to the proposal of a Broyden-type method which updates an $n \times (n - m)$ matrix approximating a "one-sided projected Hessian" of a Lagrangian function. This method is shown to converge Q-superlinearly. We also give a new short proof of the Boggs-Tolle-Wang necessary and sufficient condition for Q-superlinear convergence of a class of quasi-Newton methods for solving this problem. Finally, we describe an algorithm which updates an approximation to a "two-sided projected Hessian," a symmetric matrix of order $n - m$ which is generally positive definite near a solution. We present several new variants of this algorithm and show that under certain conditions they all have a local two-step Q-superlinear convergence property, even though only one set of gradients is evaluated per iteration. Numerical results are presented, indicating that the methods may be very useful in practice.
This paper reviews some of the most successful methods for unconstrained, constrained and nondifferentiable optimization calculations. Particular attention is given to the contribution that theoretical analysis has made to the development of algorithms. It seems that practical considerations provide the main new ideas, and that subsequent theoretical studies give improvements to algorithms, coherence to the subject, and better understanding.
It is shown that certain rank-one and rank-two corrections to symmetric positive definite matrices may be expressed in the form of a product. This product form gives control over the positive definiteness, determinant value and conditioning of the corrected matrix. An application to updating formulae of quasi-Newton methods for unconstrained minimization is discussed.
A new algorithm for solving systems of nonlinear equations where the Jacobian is known to be sparse is shown to converge locally if a sufficiently good initial estimate of the solution is available and if the Jacobian satisfies a Lipschitz condition. The results of numerical experiments are quoted in which systems of up to 600 equations have been solved by the method.
Analyses of the convergence properties of general quasi-Newton methods are presented, particular attention being paid to how the approximate solutions and the iteration matrices approach their final values. It is further shown that when Broyden's algorithm is applied to linear systems, the error norms are majorised by a superlinearly convergent sequence of an unusual kind.
…solution. The functions that require zeroing are real functions of real variables and it will be assumed that they are continuous and differentiable with respect to these variables. In many practical examples they are extremely complicated and hence laborious to compute, and this fact has two important immediate consequences. The first is that it is impracticable to compute any derivative that may be required by the evaluation of the algebraic expression of this derivative. If derivatives are needed they must be obtained by differencing. The second is that during any iterative solution process the bulk of the computing time will be spent in evaluating the functions. Thus, the most efficient process will tend to be that which requires the smallest number of function evaluations. This paper discusses certain modifications to Newton's method designed to reduce the number of function evaluations required. Results of various numerical experiments are given and conditions under which the modified versions are superior to the original are tentatively suggested.
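
The differencing mentioned above has a concrete cost: approximating a full Jacobian by forward differences takes n extra function evaluations per iteration, which is precisely the expense the modifications discussed in the paper try to reduce. A minimal generic sketch:

    import numpy as np

    # Minimal sketch of a forward-difference Jacobian for F(x) = 0. Each
    # call costs n extra evaluations of F -- the expense that quasi-Newton
    # modifications avoid by reusing information from earlier iterates.

    def fd_jacobian(F, x, eps=1e-7):
        Fx = F(x)
        J = np.empty((len(Fx), len(x)))
        for j in range(len(x)):
            e = np.zeros(len(x))
            e[j] = eps
            J[:, j] = (F(x + e) - Fx) / eps    # one extra F-evaluation per column
        return J
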
We consider Broyden's 1965 method for solving nonlinear equations. If the mapping is linear, then a simple modification of this method guarantees global and Q-superlinear convergence. For nonlinear mappings it is shown that the hybrid strategy for nonlinear equations due to Powell leads to R-superlinear convergence provided the search directions form a uniformly linearly independent sequence. We then explore this last concept and its connection with Broyden's method. Finally, we point out how the above results extend to Powell's symmetric version of Broyden's method.
Convergence Conditions for Ascent Methods. Philip Wolfe. SIAM Review, 11 (1969), pp. 226–235. https://doi.org/10.1137/1011036. Liberal conditions on the steps of a "descent" method for finding extrema of a function are given; most known results are special cases.
Quasi-Newton methods accelerate the steepest-descent technique for function minimization by using computational history to generate a sequence of approximations to the inverse of the Hessian matrix. This paper presents a class of approximating matrices as a function of a scalar parameter. The problem of optimal conditioning of these matrices under an appropriate norm as a function of the scalar parameter is investigated. A set of computational results verifies the superiority of the new methods arising from conditioning considerations to known methods.
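For orientation, the scalar-parameter class referred to here has the familiar Broyden-family form; the notation below is ours (step s_k, gradient change y_k, current inverse-Hessian approximation H_k), and the paper's exact parametrization may differ:

    H_{k+1}(\phi) = H_k - \frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k} + \frac{s_k s_k^T}{s_k^T y_k} + \phi\, v_k v_k^T,
    \qquad v_k = (y_k^T H_k y_k)^{1/2} \left( \frac{s_k}{s_k^T y_k} - \frac{H_k y_k}{y_k^T H_k y_k} \right).

Since v_k^T y_k = 0, every member satisfies the secant condition H_{k+1} y_k = s_k; the scalar \phi is the free parameter that a conditioning analysis of this kind can optimize.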
Convergence Conditions for Ascent Methods. II: Some Corrections, Philip Wolfe, SIAM Review, https://doi.org/10.1137/1013035. Abstract: Some corrections and amplifications are appended to Convergence conditions for ascent methods, this Review, 11 (1969), pp. 226–235.
Abstract: This report gives the most comprehensive and detailed treatment to date of some of the most powerful mathematical programming techniques currently known--sequential unconstrained methods for constrained minimization problems in Euclidean n-space--giving many new results not published elsewhere. It provides a fresh presentation of nonlinear programming theory, a detailed review of other unconstrained methods, and a development of the latest algorithms for unconstrained minimization. (Author)
For solving large systems of nonlinear equations by quasi-Newton methods it may often be preferable to store an approximation to the Jacobian rather than an approximation to the inverse Jacobian. The main reason is that when the Jacobian is sparse and the locations of the zeroes are known, the updating procedure can be made more efficient for the approximate Jacobian than for the approximate inverse Jacobian.
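As a concrete illustration of why this pays off, here is a minimal Python sketch of a sparsity-respecting rank-one update in the spirit described; the function name and details are illustrative, not the paper's:

    import numpy as np

    def sparse_broyden_update(B, s, y, pattern):
        # Correct each row of the Jacobian approximation B only in its
        # structurally nonzero positions (pattern is a boolean mask), so
        # the secant condition B_new @ s = y holds row by row while the
        # known zeros of the Jacobian are preserved.
        B_new = B.copy()
        r = y - B @ s                            # secant-condition residual
        for i in range(B.shape[0]):
            s_i = np.where(pattern[i], s, 0.0)   # step restricted to row i's pattern
            denom = s_i @ s_i
            if denom > 0.0:
                B_new[i] += (r[i] / denom) * s_i
        return B_new

Because row i changes only where the Jacobian can be nonzero, the update costs work proportional to the number of nonzeros, which is the efficiency gain alluded to above.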
A powerful iterative descent method for finding a local minimum of a function of several variables is described. A number of theorems are proved to show that it always converges and that it converges rapidly. Numerical tests on a variety of functions confirm these theorems. The method has been used to solve a system of one hundred non-linear simultaneous equations.
Introduction. The solution of a set of n nonlinear simultaneous equations, which may be written f(x) = 0, can in general only be found by an iterative process in which successively better, in some sense, approximations to the solution are computed. Of the methods available, most rely on evaluating at each stage of the calculation a set of residuals and from these obtaining a correction to each element of the approximate solution. The most common way of doing this is to take each correction to be a suitable linear combination of the residuals. There is, of course, no reason in principle why more elaborate schemes should not be used, but they are difficult both to analyse theoretically and to implement in practice. The minimisation of a function of n variables, for which it is possible to obtain analytic expressions for the n first partial derivatives, is a particular example of this type of problem. Any technique used to solve nonlinear equations may be applied to the expressions for the partial derivatives but, because it is known in this case that the residuals form the gradient of some function, it is possible to introduce refinements into the method of solution to take account of this extra information. Since, in addition, the value of the function itself is known, further refinements are possible.
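To make the generic scheme concrete, the following Python sketch iterates with corrections that are linear combinations of the residuals, revising the coefficient matrix by a rank-one change at each stage; it is an illustrative rendering under our own naming, not the paper's code:

    import numpy as np

    def broyden_solve(F, x0, B0, tol=1e-10, max_iter=100):
        # Iterate x <- x - B^{-1} F(x); after each step, update B by a
        # rank-one correction so that it maps the last step onto the
        # observed change in the residuals.
        x = np.asarray(x0, dtype=float)
        B = np.asarray(B0, dtype=float).copy()
        Fx = F(x)
        for _ in range(max_iter):
            if np.linalg.norm(Fx) < tol:
                break
            s = np.linalg.solve(B, -Fx)          # correction from residuals
            x_new = x + s
            Fx_new = F(x_new)
            y = Fx_new - Fx                      # change in residuals
            B += np.outer(y - B @ s, s) / (s @ s)
            x, Fx = x_new, Fx_new
        return x

For minimisation, F would be the gradient, and the refinements mentioned above (exploiting function values and the symmetry of the Jacobian) can be layered on top.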
In recent years, several algorithms have appeared for modifying the factors of a matrix following a rank-one change. These methods have always been given in the context of specific applications and this has probably inhibited their use over a wider field. In this report, several methods are described for modifying Cholesky factors. Some of these have been published previously while others appear for the first time. In addition, a new algorithm is presented for modifying the complete orthogonal factorization of a general matrix, from which the conventional QR factors are obtained as a special case. A uniform notation has been used and emphasis has been placed on illustrating the similarity between different methods.
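For definiteness, one classical recurrence for updating a Cholesky factor after a positive rank-one change looks as follows; this is a textbook variant written in Python, not necessarily any of the report's specific algorithms:

    import numpy as np

    def chol_update(L, v):
        # Given lower-triangular L with A = L @ L.T, return the
        # lower-triangular factor of A + v @ v.T via a sequence of plane
        # rotations, in O(n^2) work instead of a full refactorization.
        L, v = L.copy(), v.copy()
        n = v.size
        for k in range(n):
            r = np.hypot(L[k, k], v[k])
            c, s = r / L[k, k], v[k] / L[k, k]
            L[k, k] = r
            if k + 1 < n:
                L[k+1:, k] = (L[k+1:, k] + s * v[k+1:]) / c
                v[k+1:] = c * v[k+1:] - s * L[k+1:, k]
        return L

The downdating case (subtracting v v^T) and the orthogonal-factorization analogues need more care about loss of positive definiteness, which is part of what work of this kind addresses.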
We compare the Ostrowski efficiency of some methods for solving systems of nonlinear equations without explicitly using derivatives. The methods considered include the discrete Newton method, Shamanskii's method, the two-point secant method, and Brown's methods. We introduce a class of secant methods and a class of methods related to Brown's methods, but using orthogonal rather than stabilized elementary transformations. The idea of these methods is to avoid finding a new approximation to the Jacobian matrix of the system at each step, and thus increase the efficiency. Local convergence theorems are proved, and the efficiencies of the methods are calculated. Numerical results are given, and some possible extensions are mentioned.
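For context, one common form of the efficiency measure being compared (conventions vary, so take this as our reading) balances the order of convergence \rho against the work w per iteration, e.g. the number of scalar function evaluations:

    E = \rho^{1/w},

so a scheme that keeps the same order \rho while reusing an old Jacobian approximation lowers w and therefore raises E.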
A new rank-two variable-metric method is derived using Greenstadt's variational approach [Math. Comp., this issue]. Like the Davidon-Fletcher-Powell (DFP) variable-metric method, the new method preserves the positive-definiteness of the approximating matrix. Together with Greenstadt's method, the new method gives rise to a one-parameter family of variable-metric methods that includes the DFP and rank-one methods as special cases. It is equivalent to Broyden's one-parameter family [Math. Comp., v. 21, 1967, pp. 368–381]. Choices for the inverse of the weighting matrix in the variational approach are given that lead to the derivation of the DFP and rank-one methods directly.
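Written out in the now-usual notation (ours, not the paper's), the DFP inverse-Hessian update at the heart of this family is

    H_{k+1} = H_k + \frac{s_k s_k^T}{s_k^T y_k} - \frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k},

which preserves positive definiteness whenever the curvature condition s_k^T y_k > 0 holds; adding a multiple of the rank-one term v_k v_k^T, as in the one-parameter family displayed earlier, sweeps out the remaining members, with the rank-one method as a particular choice of the parameter.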
Two basic approaches to the generation of conjugate directions are considered for the problem of unconstrained minimisation of quadratic functions. The first approach results in a projected gradient algorithm which gives 'n step' convergence for a quadratic. The second approach is based on the generalised solution of a set of underdetermined linear equations, various forms of which generate various new algorithms also giving n step convergence. One of them is the Fletcher and Powell modification of Davidon's method. Results of an extensive numerical comparison of these methods with the Newton–Raphson method, the Fletcher–Reeves method, and the Fletcher–Powell–Davidon method are included, the test functions being non-quadratic.
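Recall the standard fact behind the 'n step' claims: directions d_0, ..., d_{n-1} that are conjugate with respect to a symmetric positive definite matrix A,

    d_i^T A d_j = 0 \quad (i \ne j),

minimize the quadratic q(x) = \tfrac{1}{2} x^T A x - b^T x in at most n exact line searches, one along each direction.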
This paper presents a new minimization algorithm and discusses theoretically some of its properties when applied to quadratic functions. Results of comparative testing for a set of non-quadratic functions are described and reasons for the observed experimental behaviour are suggested.
This paper gives proofs of the results stated by Powell (1972) on the quadratic termination properties of a class of minimization algorithms. These results are relevant to many gradient methods for calculating the least value of a function of several variables, when there are no constraints on the variables, and when the variables are subject to linear constraints.
On Steepest Descent, A. A. Goldstein, Journal of the Society for Industrial and Applied Mathematics Series A Control, Vol. 3, No. 1 (1965), pp. 147–151, https://doi.org/10.1137/0303013.
This paper considers in general the problem of finding the minimum of a given functional f(u) over a set B by approximately minimizing a sequence of functionals f_n(u_n) over a "discretized" set B_n; theorems are given proving the convergence of the approximating points u_n in B_n to the desired point u in B. Applications are given to the Rayleigh-Ritz method, regularization, Chebyshev solution of differential equations, and the calculus of variations.
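In symbols, the scheme is (our schematic rendering of the setup): pick u_n \in B_n with

    f_n(u_n) \le \inf_{v \in B_n} f_n(v) + \varepsilon_n, \qquad \varepsilon_n \to 0,

and the theorems give conditions, tying f_n to f and B_n to B, under which u_n converges to the minimizer u of f over B.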
In unconstrained minimization of a function f, the method of Davidon-Fletcher-Powell (a variable-metric method) enables the inverse of the Hessian H of f to be approximated stepwise, using only values of the gradient of f. It is shown here that, by solving a certain variational problem, formulas for the successive corrections to H can be derived which closely resemble Davidon's. A symmetric correction matrix is sought which minimizes a weighted Euclidean norm, and also satisfies the quasi-Newton condition. Numerical tests are described, comparing the performance (on four standard test functions) of two variationally-derived formulas with Davidon's. A proof by Y. Bard, modelled on Fletcher and Powell's, showing that the new formulas give the exact H after N steps, is included in an appendix. 1. The DFP Method. The class of gradient methods for finding the unconstrained minimum of a function f(x) is that in which the direction s_k of the next iterative step from x_k to x_{k+1} is computed from a formula of the form s_k = -\lambda_k H_k g_k, with H_k the current inverse-Hessian approximation and g_k the gradient at x_k.
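In modern notation, the variational problem described can be written (our rendering, with W the weighting matrix and s_k, y_k the step and gradient change):

    \min_{\Delta H = \Delta H^T} \; \| W \Delta H W \|_F \quad \text{subject to} \quad (H_k + \Delta H)\, y_k = s_k,

i.e., the smallest symmetric correction, in a weighted Euclidean (Frobenius) norm, that makes the new matrix satisfy the quasi-Newton condition; different choices of W yield different correction formulas of the Davidon type.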
There is a family of gradient algorithms (Broyden, 1970) that includes many useful methods for calculating the least value of a function F(x), and some of these algorithms have been extended to solve linearly constrained problems (Fletcher, 1971). Some new and fundamental properties of these algorithms are given, in the case that F(x) is a positive definite quadratic function. In particular these properties are relevant to the case when only some of the iterations of an algorithm make a complete linear search. They suggest that Goldfarb's (1969) algorithm for linearly constrained problems has excellent quadratic termination properties, and it is proved that these properties are better than has been stated in previously published papers. Also a new technique is identified for unconstrained minimization without linear searches.
An approach to variable metric algorithms has been investigated in which the linear search sub-problem is no longer necessary. The property of quadratic termination has been replaced by one of monotonic convergence of the eigenvalues of the approximating matrix to the inverse Hessian. A convex class of updating formulae which possess this property has been established, and a strategy has been indicated for choosing a member of the class so as to keep the approximation away from both singularity and unboundedness. A FORTRAN program has been tested extensively with encouraging results.
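In the notation that has since become standard (the paper's own parametrization may differ), the convex class is

    H_{k+1}(\theta) = (1 - \theta)\, H_{k+1}^{DFP} + \theta\, H_{k+1}^{BFGS}, \qquad \theta \in [0, 1],

and the selection strategy indicated in the abstract can be read as choosing \theta at each step to keep the approximating matrix well away from singularity on one side and unboundedness on the other.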
On the Local and Superlinear Convergence of Quasi-Newton Methods, C. G. Broyden, J. E. Dennis, Jr., and Jorge J. Moré, IMA Journal of Applied Mathematics, Volume 12, Issue 3, December 1973, pp. 223–245, https://doi.org/10.1093/imamat/12.3.223.
Preface to the Classics Edition Preface Acknowledgments Glossary of Symbols Introduction Part I. Background Material. 1. Sample Problems 2. Linear Algebra 3. Analysis Part II. Nonconstructive Existence Theorems. 4. Gradient Mappings and Minimization 5. Contractions and the Continuation Property 6. The Degree of a Mapping Part III. Iterative Methods. 7. General Iterative Methods 8. Minimization Methods Part IV. Local Convergence. 9. Rates of Convergence-General 10. One-Step Stationary Methods 11. Multistep Methods and Additional One-Step Methods Part V. Semilocal and Global Convergence. 12. Contractions and Nonlinear Majorants 13. Convergence under Partial Ordering 14. Convergence of Minimization Methods An Annotated List of Basic Reference Books Bibliography Author Index Subject Index.
Abstract: The method considered belongs to the class of minimization methods that deliver the minimum of a quadratic function of n arguments in at most n steps. For non-quadratic convex objective functions it is shown that the method has order of convergence at least \sqrt[n]{2}.
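The exponent \sqrt[n]{2} is the per-step reading of n-step quadratic convergence: if

    \| x_{k+n} - x^* \| \le C \, \| x_k - x^* \|^2,

then the error exponent doubles over each cycle of n steps, giving an R-order of at least 2^{1/n} = \sqrt[n]{2} per individual iteration (a standard interpretation; the precise hypotheses are in the original).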
Let F be a mapping from real n-dimensional Euclidean space into itself. Most practical algorithms for finding a zero of F are of the form x_{k+1} = x_k - B_k^{-1} F x_k, where {B_k} is a sequence of nonsingular matrices. The main result of this paper is a characterization theorem for the superlinear convergence to a zero of F of sequences of the above form. This result is then used to give a unified treatment of the results on the superlinear convergence of the Davidon-Fletcher-Powell method obtained by Powell for the case in which exact line searches are used, and by Broyden, Dennis, and Moré for the case without line searches. As a by-product, several results on the asymptotic behavior of the sequence {B_k} are obtained. An interesting aspect of these results is that superlinear convergence is obtained without any consistency conditions; i.e., without requiring that the sequence {B_k} converge to the Jacobian matrix of F at the zero. In fact, a modification of an example due to Powell shows that most of the known quasi-Newton methods are not, in general, consistent. Finally, it is pointed out that the above-mentioned characterization theorem applies to other single and double rank quasi-Newton methods, and that the results of this paper can be used to obtain their superlinear convergence.
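In its now-standard form, the characterization states that for iterations x_{k+1} = x_k - B_k^{-1} F x_k converging to a zero x^* at which F'(x^*) is nonsingular, convergence is superlinear if and only if

    \lim_{k \to \infty} \frac{\| (B_k - F'(x^*)) \, s_k \|}{\| s_k \|} = 0, \qquad s_k = x_{k+1} - x_k,

that is, B_k need only agree with the true Jacobian along the directions actually stepped, which is precisely why no consistency condition on {B_k} itself is needed.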
On the Convergence of the Variable Metric Algorithm, M. J. D. Powell, IMA Journal of Applied Mathematics, Volume 7, Issue 1, February 1971, pp. 21–36, https://doi.org/10.1093/imamat/7.1.21.
A revised algorithm is given for unconstrained optimization using quasi-Newton methods. The method is based on recurring the factorization of an approximation to the Hessian matrix. Knowledge of this factorization allows greater flexibility when choosing the direction of search while minimizing the adverse effects of rounding error. The control of rounding error is particularly important when analytical derivatives are unavailable, and a modification of the algorithm to accept finite-difference approximations to the derivatives is given.
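When only finite-difference derivatives are available, the gradient itself is approximate; here is a minimal Python sketch of the forward-difference approximation such a modification rests on (the step-size rule is a common heuristic, not the paper's prescription):

    import numpy as np

    def fd_gradient(f, x, h=float(np.sqrt(np.finfo(float).eps))):
        # Forward differences: g_i ~ (f(x + step*e_i) - f(x)) / step,
        # with the step scaled to the magnitude of each component.
        x = np.asarray(x, dtype=float)
        g = np.empty_like(x)
        fx = f(x)
        for i in range(x.size):
            step = h * max(1.0, abs(x[i]))
            e = np.zeros_like(x)
            e[i] = step
            g[i] = (f(x + e) - fx) / step
        return g

The rounding error inherent in such differences is exactly what makes working with a factorization of the Hessian approximation attractive: the search direction can be computed stably even when the data feeding the update are noisy.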