Iterative Solution of Nonlinear Equations in Several Variables

Type: Article
Publication Date: 1971-04-01
Citations: 4471
DOI: https://doi.org/10.2307/2004942

Locations

  • Mathematics of Computation
Preface to the Classics Edition. Preface. Acknowledgments. Glossary of Symbols. Introduction.
Part I. Background Material: 1. Sample Problems. 2. Linear Algebra. 3. Analysis.
Part II. Nonconstructive Existence Theorems: 4. Gradient Mappings and Minimization. 5. Contractions and the Continuation Property. 6. The Degree of a Mapping.
Part III. Iterative Methods: 7. General Iterative Methods. 8. Minimization Methods.
Part IV. Local Convergence: 9. Rates of Convergence-General. 10. One-Step Stationary Methods. 11. Multistep Methods and Additional One-Step Methods.
Part V. Semilocal and Global Convergence: 12. Contractions and Nonlinear Majorants. 13. Convergence under Partial Ordering. 14. Convergence of Minimization Methods.
An Annotated List of Basic Reference Books. Bibliography. Author Index. Subject Index.
The problem of iteratively solving linear equations of the form Ax = b, for a solution x, given b and an operator A, arises in several contexts in the circuits and systems area. The author presents a theorem for the iterative solution of such linear equations.
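The abstract's theorem itself is not reproduced in this preview. For context, the simplest classical fixed-point scheme for Ax = b is Jacobi iteration; the sketch below is illustrative only, and the example system and iteration count are invented for the demonstration.

```python
# Jacobi iteration for Ax = b: a minimal illustrative sketch of one
# classical iterative scheme for linear systems (not the abstract's
# theorem). Convergence is guaranteed when A is strictly diagonally
# dominant, as in the toy example below.
def jacobi(A, b, x0, iters=50):
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x_new = []
        for i in range(n):
            # Update each component using the previous iterate:
            # x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new
    return x

# Example: 4x + y = 6, x + 3y = 7 has the solution x = 1, y = 2.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
sol = jacobi(A, b, [0.0, 0.0])
```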
This paper proposes an improvement to Newton's numerical algorithm. Derivation of the standard method (the Newton-Raphson method) involves the first derivative of the function. It is shown that the new method requires six iterations on the test problems, and the computed results support the technique. The results obtained indicate that the proposed method is more accurate, easier to use, and more efficient than the other numerical methods considered.
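The paper's own variant is not given in this preview; the baseline it modifies is the standard Newton-Raphson iteration, which can be sketched as follows (the test function and tolerances are chosen for illustration).

```python
# Standard Newton-Raphson iteration: the baseline method the abstract
# refers to, shown only for context. Solves f(x) = 0 given f and its
# first derivative df, starting from an initial guess x0.
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # Newton update: x_{k+1} = x_k - f(x_k) / f'(x_k)
        x = x - fx / df(x)
    return x

# Example: the root of f(x) = x^2 - 2 near x0 = 1 is sqrt(2).
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```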
We present another simple way of deriving several iterative methods for solving nonlinear equations numerically. The presented approach is based on an exponentially fitted osculating straight line. These methods are modifications of Newton's method. Also, we obtain well-known methods as special cases, for example, Halley's method, the super-Halley method, Ostrowski's square-root method, Chebyshev's method, and so forth. Further, new classes of third-order multipoint iterative methods free from the second-order derivative are derived by semidiscrete modifications of cubically convergent iterative methods. Furthermore, a simple linear combination of two third-order multipoint iterative methods is used for designing new optimal methods of order four.
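One of the classical third-order schemes the abstract recovers as a special case is Halley's method; a minimal sketch is given below for context (the unified derivation via exponentially fitted osculating lines is not reproduced, and the example function is invented).

```python
# Halley's method: a classical third-order scheme for f(x) = 0,
# one of the special cases mentioned in the abstract. It requires
# the first and second derivatives of f.
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            break
        # Halley update: x_{k+1} = x_k - 2 f f' / (2 f'^2 - f f'')
        x = x - 2.0 * fx * dfx / (2.0 * dfx * dfx - fx * d2f(x))
    return x

# Example: the real root of f(x) = x^3 - 27 is 3.
root = halley(lambda x: x ** 3 - 27.0,
              lambda x: 3.0 * x * x,
              lambda x: 6.0 * x,
              x0=2.0)
```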
We propose two three-step iteration methods for solving systems of nonlinear equations and show that the methods have sixth-order convergence. Compared with the three-step iterative method with fourth-order convergence proposed by Jae Heon Yun [J. H. Yun, A note on three-step iterative method for nonlinear equation, Appl. Math. Comput., 2008, 202: 401-405], our methods offer a significant improvement. Some numerical experiments are given to illustrate the performance of our three-step iterative methods.
In this paper, a published algorithm is investigated that proposes a three-step iterative method for solving nonlinear equations. That method had been presented as efficient, with third order of convergence, and as an improvement over previous methods. This paper proves that the order of convergence of that scheme is in fact two, and that its efficiency index is less than that of the corresponding Newton's method. In addition, the three-step iterative method of the scheme is implemented, and the previously published numerical results are found to be incorrect. Furthermore, this paper presents a new three-step iterative method with third order of convergence for solving nonlinear equations. The same numerical examples previously presented in the literature are used in this study to correct those results and to illustrate the efficiency and performance of the new method.
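Claims about convergence order like those in this abstract are commonly checked numerically via the computational order of convergence (COC); the sketch below demonstrates the idea on plain Newton's method (known order 2), with all example values invented for illustration.

```python
import math

# Computational order of convergence (COC): given successive errors
# e_k = |x_k - root|, COC ~ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}).
# This is a standard empirical check of a method's claimed order.
def coc(errors):
    e0, e1, e2 = errors[-3], errors[-2], errors[-1]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Example: Newton's method on f(x) = x^2 - 2, which has order 2.
root = math.sqrt(2.0)
x = 1.5
errs = []
for _ in range(3):
    x = x - (x * x - 2.0) / (2.0 * x)  # Newton step
    errs.append(abs(x - root))
order = coc(errs)  # should come out close to 2
```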
The author found that certain solutions of some nonlinear partial differential equations can be obtained easily by an iteration method. As examples, the Korteweg-de Vries equation (also the 'modified' version), the 1D and 2D Burgers equation and the Kadomtsev-Petviashvili equation have been studied. For the Liouville equation the author found the general solution.
In this paper, we propose a new three-step iterative method of order six for solving nonlinear equations. The method uses a predictor-corrector technique and is constructed from Newton's iterative method and a weighted combination of the midpoint and Simpson quadrature formulas. Several numerical examples are given to illustrate the efficiency and performance of the iterative methods; the methods are also compared with well-known existing iterative methods.
The explosive growth in popularity of social networking leads to problematic usage. An increasing number of social network mental disorders (SNMDs), such as Cyber-Relationship Addiction, Information Overload, and Net Compulsion, have been recently noted. Symptoms of these mental disorders are usually observed passively today, resulting in delayed clinical intervention. In this paper, we argue that mining online social behavior provides an opportunity to actively identify SNMDs at an early stage. It is challenging to detect SNMDs because the mental status cannot be directly observed from online social activity logs. Our approach, new and innovative to the practice of SNMD detection, does not rely on self-revealing of those mental factors via questionnaires in Psychology. Instead, we propose a machine learning framework, namely, Social Network Mental Disorder Detection (SNMDD), that exploits features extracted from social network data to accurately identify potential cases of SNMDs. We also exploit multi-source learning in SNMDD and propose a new SNMD-based Tensor Model (STM) to improve the accuracy. To increase the scalability of STM, we further improve the efficiency with performance guarantee. Our framework is evaluated via a user study with 3,126 online social network users. We conduct a feature analysis, and also apply SNMDD on large-scale datasets and analyze the characteristics of the three SNMD types. The results manifest that SNMDD is promising for identifying online social network users with potential SNMDs.
This paper describes a new A- and L-stable integration method for simulating the time-domain transient response of nonlinear circuits. The proposed method, which is based on the Obreshkov formula, can be made of arbitrarily high order while maintaining the A-stability property. The new method allows for the adoption of higher-order integration methods for the transient analysis of electronic circuits while enabling them to take larger step sizes without violating stability, leading to faster simulations. The method can be run in an L-stable mode to handle circuits with extremely stiff equations. Necessary theoretical foundations, implementation details, error-control mechanisms, and computational results are presented.
In this paper, we design two parametric classes of iterative methods without memory to solve nonlinear systems, with convergence orders 4 and 7, respectively. Guided by their error equations, and in order to increase the convergence order without performing new functional evaluations, memory is introduced into these families in different forms. This allows us to increase the convergence order from 4 to 6 in the first family and from 7 to 11 in the second one. We perform numerical experiments with large systems to confirm the theoretical results and to compare the proposed methods with other known schemes.
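The simplest one-dimensional example of a method "with memory" is the secant method, which reuses the previous iterate in place of a derivative; the sketch below illustrates that general idea only (the paper's parametric families for systems are not reproduced, and the example function is invented).

```python
import math

# The secant method: the simplest iterative scheme with memory.
# It replaces f'(x) in Newton's method by a finite difference built
# from the two most recent iterates, so no derivative is needed.
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        # Finite-difference slope from the stored ("memorized") iterate:
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Example: the fixed point of cos, i.e. the root of f(x) = cos(x) - x.
root = secant(lambda x: math.cos(x) - x, 0.5, 1.0)
```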
This work analyzes the alternating minimization (AM) method for solving a double sparsity constrained minimization problem, where the decision variable vector is split into two blocks. The objective function is a separable smooth function of the two blocks. We analyze the convergence of the method for a non-convex objective function and prove a rate of convergence for the norms of the partial gradient mappings. Then, we establish a non-asymptotic sub-linear rate of convergence under the assumptions of convexity and Lipschitz continuity of the gradient of the objective function. To solve the sub-problems of the AM method, we adopt the so-called iterative thresholding method and study its analytical properties. Finally, some directions for future work are discussed.
There is a plethora of k-step solvers for equations involving operators on Banach spaces. Their convergence is usually established by adopting hypotheses on high-order derivatives that do not even appear in these iterative solvers. In addition, no computable error bounds or information on the uniqueness of the solution based on Lipschitz-type functions are given. Moreover, the choice of the initial guess is like shooting in the dark. Finally, the criteria of convergence differ from solver to solver, so no comparison can be made between their convergence domains. The novelty of our work is that we address these problems by introducing a generalized k-step solver containing all previous k-step solvers. Moreover, we address all the previously stated shortcomings of the earlier papers while utilizing weaker conditions that require only the continuity of the involved operator. Applications are also presented in which we test the convergence criteria.
Quasi-birth-death processes are commonly used Markov chain models in queueing theory, computer performance, teletraffic modeling and other areas. We provide a new, simple algorithm for the matrix-geometric rate matrix. We demonstrate that it has quadratic convergence. We show theoretically and through numerical examples that it converges very fast and provides extremely accurate results even for almost unstable models.
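The paper's quadratically convergent algorithm is not given in this preview. For context, the classical (only linearly convergent) fixed-point iteration for the rate matrix R of a discrete-time QBD, which solves R = A0 + R·A1 + R²·A2, can be sketched as follows; the 1x1 block values below are invented for the toy example.

```python
# Classical fixed-point iteration for the QBD rate matrix R,
# solving R = A0 + R*A1 + R^2*A2 with blocks A0 (level up),
# A1 (same level), A2 (level down). This is the slow, linearly
# convergent baseline, shown only for context; the abstract's
# algorithm is a different, quadratically convergent scheme.
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def qbd_rate_matrix(A0, A1, A2, iters=500):
    n = len(A0)
    R = [[0.0] * n for _ in range(n)]  # start from the zero matrix
    for _ in range(iters):
        # R <- A0 + R*A1 + R^2*A2 converges monotonically to the
        # minimal nonnegative solution for a stable QBD.
        R = mat_add(A0, mat_add(mat_mul(R, A1),
                                mat_mul(mat_mul(R, R), A2)))
    return R

# Toy stable QBD with scalar blocks written as 1x1 matrices:
# r = 0.2 + 0.3r + 0.5r^2 has minimal solution r = 0.4.
R = qbd_rate_matrix([[0.2]], [[0.3]], [[0.5]])
```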
For the Lagrangian-DNN relaxation of quadratic optimization problems (QOPs), we propose a Newton-bracketing method to improve the performance of the bisection-projection method implemented in BBCPOP [ACM Trans. Math. Softw., 45(3):34 (2019)]. The relaxation problem is converted into the problem of finding the largest zero y* of a continuously differentiable (except at y*) convex function g: R → R such that g(y) = 0 if y ≤ y* and g(y) > 0 otherwise. In theory, the method generates lower and upper bounds of y*, both converging to y*. Their convergence is quadratic if the right derivative of g at y* is positive. Accurate computation of g'(y) is necessary for the robustness of the method, but it is difficult to achieve in practice. As an alternative, we present a secant-bracketing method. We demonstrate that the method improves the quality of the lower bounds obtained by BBCPOP and SDPNAL+ for binary QOP instances from BIQMAC. Moreover, new lower bounds for the unknown optimal values of large-scale QAP instances from QAPLIB are reported.
We prove the uniqueness of the quadrature formula with minimal error in the space $\tilde W_q^r[a,b]$, $1 < q < \infty$, of $(b - a)$-periodic differentiable functions among all quadratures with $n$ free nodes $\{ {x_k}\} _1^n$, $a = {x_1} < \cdots < {x_n} < b$, of fixed multiplicities $\{ {v_k}\} _1^n$, respectively. As a corollary, we get that the equidistant nodes are optimal in $\tilde W_q^r[a,b]$ for $1 \leqslant q \leqslant \infty$ if ${v_1} = \cdots = {v_n}$.
Pseudoconvexity and strict pseudoconvexity concepts are extended to the nondifferentiable, non-locally Lipschitz setting in ℝ^n by means of Clarke's generalized gradients and Rockafellar's asymptotic gradients. New generalized convexity concepts that emerge in a parallel manner, called protoconvexity, weak protoconvexity and weak pseudoconvexity, are identified. Also, a number of inter-relationships among these concepts, and between these and other commonly used ones, are established.
The semi-local convergence analysis of a third-order scheme for solving nonlinear equations in Banach space has not previously been given under Lipschitz continuity or other conditions. Our goal is to extend the applicability of the Cordero-Torregrosa scheme to semi-local convergence under conditions on the first Fréchet derivative of the operator involved. Majorizing sequences are used to prove our results. Numerical experiments testing the convergence criteria are given in this study.
A class of methods for finding zeros of polynomials is derived which depends upon an arbitrary parameter $\rho$. The Jenkins-Traub algorithm is a special case, corresponding to the choice $\rho = \infty$. Global convergence is proved for large and small values of $\rho$ and a duality between pairs of members is exhibited. Finally, we show that many members of the class (including the Jenkins-Traub method) converge with R-order at least 2.618..., which improves upon the result obtained by Jenkins and Traub [3].
Solutions to two-dimensional electromagnetic imaging problems have been investigated extensively in recent years. Three-dimensional imaging problems have not been as well researched due to the inherent complexity of the formulation. Moreover, most of the work on imaging has concentrated on targets embedded in free space. In this paper, a technique for three-dimensional imaging based on an iterative Born method is presented. Targets with higher dielectric contrast for which the linear Born approximation is not valid can be imaged by using multiple iterations with this method. The extension of the technique to image a target buried in an inhomogeneous background is achieved by evaluating numerically the dyadic Green's function. Numerical simulations are presented to illustrate the applicability of the imaging algorithm. The results for a simulation are compared against those obtained with the inverse source method, and the difference in the performance of the two techniques is discussed. The convergence of the Born iterative method is also investigated in this paper.
We illustrate that most existence theorems using degree theory are in principle relatively constructive. The first one presented here is the Brouwer Fixed Point Theorem. Our method is "constructive with probability one" and can be implemented by computer. Other existence theorems are also proved by the same method. The approach is based on a transversality theorem.
One-step collocation methods are known to be a subclass of implicit Runge-Kutta methods. Further, one-leg methods are special multistep one-point collocation methods. In this paper we extend both of these collocation ideas to multistep collocation methods with $k$ previous meshpoints and $m$ collocation points. By construction, the order is at least $m + k - 1$. However, by choosing the collocation points in the right way, order $2m + k - 1$ is obtained as the maximum. There are $\binom{m + k - 1}{k - 1}$ sets of such "multistep Gaussian" collocation points.