Type: Article
Publication Date: 1978-07-01
Citations: 9
DOI: https://doi.org/10.2307/2006489
This paper presents a numerical comparison between algorithms for unconstrained optimization that take account of sparsity in the second derivative matrix of the objective function. Some of the methods included in the comparison use difference approximation schemes to evaluate the second derivative matrix, and others use an approximation to it which is updated regularly using the changes in the gradient. These results show what method to use in what circumstances and also suggest interesting future developments.

1. Introduction. Numerical procedures for finding the minimum of a general function of several variables have been studied extensively in the last few years, and many powerful algorithms have been proposed which require the calculation of the vector of first derivatives (see Davidon [2], Fletcher and Powell [4], Huang [6], Powell [8], for example). One common feature of these procedures is that they use a matrix which can be regarded as an approximation to the second derivative matrix of the objective function. This matrix is updated regularly, so that the updated matrix satisfies a linear equation usually known as the "quasi-Newton equation" or "DFP condition". Because all the classical updating formulae revise all the elements of the matrix, the size of the problem that can be treated is often limited by the amount of computer storage that is available. However, a way to take any present sparsity into account has been suggested by Powell [11]: it does not change the elements of the approximating matrix corresponding to known zero entries in the Hessian of the objective function. Toint [14] shows that the calculation can be arranged so that its main part is only the solution of a sparse, symmetric and positive definite system of linear equations.

The main purpose of this paper is to present numerical results that compare this procedure with some other methods that also use first derivative information, in order to show the actual computational value of the proposed method. The other methods included in the comparison are Powell's method [8], since the new formula may be viewed as an extension of it to the sparse case, and methods using difference approximations to the Hessian. When sparsity conditions