 In recent papers we have considered the numerical solution of the Hammerstein equation \[y(t) = f(t) + \int_a^b {k(t,s)g(s,y(s))ds,} \quad t \in [a,b],\] by a method that first applies the standard collocation procedure to an equivalent equation for $z(t): = g(t,y(t))$, and then obtains an approximation to y by use of the equation \[ y(t) = f(t) + \int_a^b {k(t,s)z(s)ds,} \quad t \in [a,b].\] We study here a discretized version of the above method. This arises when numerical quadrature is used to approximate the definite integrals occurring in the said method. We consider the use of interpolatory quadrature rules, and we seek the discrete collocation approximation to z in certain piecewise-polynomial function spaces. Our principal result gives the precision required of a quadrature rule to guarantee the best possible (super)convergence rate for the discrete approximation to y. In the case where the kernel k is sufficiently smooth, this rate is of the same order as the rate for an approximation to y obtained via the exact method.
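To make the two-stage procedure concrete, the following sketch solves a toy Hammerstein problem in the manner described: the nonlinear system for $z$ at quadrature nodes is solved first, and $y$ is then recovered by quadrature. The kernel, nonlinearity and quadrature rule here are illustrative choices, not the paper's test problems.

```python
# Illustrative sketch (not the paper's exact scheme): solve the Hammerstein
# equation y(t) = f(t) + \int_0^1 k(t,s) g(s, y(s)) ds by first solving the
# discretized equation for z(t) = g(t, y(t)) at quadrature nodes, then
# recovering y from z by quadrature.  Test problem (chosen so that y(t) = t
# is the exact solution): k(t,s) = t*s, g(s,y) = y^2, f(t) = 3t/4.
import math

# 2-point Gauss-Legendre rule on [0, 1] (exact for polynomials of degree <= 3)
nodes = [0.5 - 1.0 / (2.0 * math.sqrt(3.0)), 0.5 + 1.0 / (2.0 * math.sqrt(3.0))]
weights = [0.5, 0.5]

def f(t):
    return 0.75 * t

def k(t, s):
    return t * s

def g(s, y):
    return y * y

# Stage 1: fixed-point iteration for z_i = g(t_i, f(t_i) + sum_j w_j k(t_i,t_j) z_j)
z = [0.0 for _ in nodes]
for _ in range(200):
    z = [g(t, f(t) + sum(w * k(t, s) * zs for w, s, zs in zip(weights, nodes, z)))
         for t in nodes]

# Stage 2: recover y at any t from the computed z values
def y(t):
    return f(t) + sum(w * k(t, s) * zs for w, s, zs in zip(weights, nodes, z))

print(y(0.5))  # exact solution y(t) = t gives y(0.5) = 0.5
```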
A plethora of higher order iterative methods, involving derivatives in the algorithms, are available in the literature for finding multiple roots. In contrast, higher order methods without derivatives in the iteration are difficult to construct, and hence such methods are almost non-existent. This motivated us to explore a derivative-free iterative scheme with optimal fourth order convergence. The applicability of the new scheme is shown by testing it on different functions, which illustrates its excellent convergence. Moreover, a comparison of performance shows that the new technique is a good competitor to existing optimal fourth order Newton-like techniques.
Many optimal order multiple root techniques involving derivatives have been proposed in the literature. On the contrary, optimal order multiple root techniques without derivatives are almost nonexistent. With this as a motivating factor, here we develop a family of optimal fourth-order derivative-free iterative schemes for computing multiple roots. The procedure is based on two steps, of which the first is a Traub–Steffensen iteration and the second a Traub–Steffensen-like iteration. Theoretical results proved for particular cases of the family are symmetric to each other, a feature that leads us to prove the general result establishing fourth-order convergence. Efficacy is demonstrated on different test problems, which verifies the efficient convergent nature of the new methods. Moreover, the comparison of performance shows the presented derivative-free techniques to be good competitors to existing optimal fourth-order methods that use derivatives.
A number of optimal order multiple root techniques that require derivative evaluations in the formulas have been proposed in the literature. However, derivative-free optimal techniques for multiple roots are seldom obtained. Taking this as motivation, here we present a class of optimal fourth order methods for computing multiple roots without using derivatives in the iteration. The iterative formula consists of two steps, of which the first is the well-known Traub–Steffensen scheme and the second a Traub–Steffensen-like scheme. Effectiveness is validated on different problems, which shows the robust convergent behavior of the proposed methods. It is shown that the new derivative-free methods are good competitors to their existing counterparts that need derivative information.
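The first (Traub–Steffensen) step common to these schemes can be sketched as follows; the optimal fourth-order variants add a second, weight-function-based step whose exact form is not reproduced here.

```python
# Minimal sketch of the Traub-Steffensen step for a root of known
# multiplicity m; this is only the first step of the fourth-order schemes.

def traub_steffensen_multiple(fn, x0, m, beta=0.01, tol=1e-12, max_iter=50):
    """Derivative-free iteration x <- x - m*f(x)/f[x, x + beta*f(x)]."""
    x = x0
    for _ in range(max_iter):
        fx = fn(x)
        if abs(fx) < tol:
            break
        u = x + beta * fx                     # auxiliary point
        dd = (fn(u) - fx) / (u - x)           # first-order divided difference
        x = x - m * fx / dd                   # multiplicity-scaled step
    return x

# Example: f(x) = (x - 1)^2 has a double root (m = 2) at x = 1
root = traub_steffensen_multiple(lambda x: (x - 1.0) ** 2, 1.5, 2)
print(root)  # close to 1.0
```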
In recent papers we have considered the numerical solution of the Hammerstein equation by a method which first applies the standard collocation procedure to an equivalent equation for $z(t) := g(t, y(t))$, and then obtains an approximation to $y$ by use of the equation \[ y(t) = f(t) + \int_a^b {k(t,s)z(s)ds,} \quad t \in [a,b].\] In this paper we approximate $z$ by a polynomial $z_n$ of degree $\le n - 1$, with coefficients determined by collocation at the zeros of the $n$th degree Chebyshev polynomial of the first kind. We then define the approximation to $y$ to be $y_n(t) = f(t) + \int_a^b k(t,s)z_n(s)\,ds$, and establish that, under suitable conditions, $y_n$ converges to $y$ uniformly in $t$.
A number of higher order iterative methods with derivative evaluations have been developed in the literature for computing multiple zeros. However, higher order methods without derivatives for multiple zeros are difficult to obtain, and hence such methods are rare in the literature. Motivated by this fact, we present a family of eighth order derivative-free methods for computing multiple zeros. Per iteration the methods require only four function evaluations; therefore, they are optimal in the sense of the Kung–Traub conjecture. Stability of the proposed class is demonstrated by means of a graphical tool, namely, basins of attraction. The boundaries of the basins are fractal-like shapes, and the basins are symmetric. Applicability of the methods is demonstrated on different nonlinear functions, which illustrates their efficient convergence behavior. Comparison of the numerical results shows that the new derivative-free methods are good competitors to existing optimal eighth-order techniques which require derivative evaluations.
Many optimal order multiple root techniques, which use derivatives in the algorithm, have been proposed in the literature. On the contrary, derivative-free optimal order techniques for multiple roots are almost nonexistent. With this as an inspirational factor, here we present a family of optimal fourth order derivative-free techniques for computing multiple roots of nonlinear equations. The convergence analysis is first carried out for particular values of the multiplicity and afterwards concluded in general form. The derivative-free method of Behl et al. is seen as a special case of the family. Moreover, applicability and comparison are demonstrated on different nonlinear problems, which certifies the efficient convergent nature of the new methods. Finally, we conclude that our new methods consume the least CPU time compared with the existing ones, which corroborates the theoretical outcomes of this study.
In this paper a simple turning point ($y = y^c$, $\lambda = \lambda^c$) of the parameter-dependent Hammerstein equation \[y(t) = f(t) + \lambda \int_a^b {k(t,s)g(s,y(s))\,ds,} \quad t \in [a,b],\] is approximated numerically in the following way. A simple turning point ($z = z^c$, $\lambda = \lambda^c$) of an equivalent equation for $z(t) := \lambda g(t,y(t))$ is computed first. This is done by solving a discretized version of a certain system of equations which has ($z^c$, $\lambda^c$) as part of an isolated solution. The particular discretization used here is standard piecewise polynomial collocation. Finally, an approximation to $y^c$ is obtained by use of the (exact) equation \[y(t) = f(t) + \int_a^b {k(t,s)z(s)\,ds,} \quad t \in [a,b].\] The main result of the paper is that, under suitable conditions, the approximations to $y^c$ and $\lambda^c$ are both superconvergent, that is, they both converge to their respective exact values at a faster rate than the collocation approximation (of $z^c$) does to $z^c$.
We propose a derivative-free one-point method with memory of order 1.84 for solving nonlinear equations. The formula requires only one function evaluation per step and, therefore, the efficiency index is also 1.84. The methodology is carried out by approximating the derivative in Newton’s iteration using a rational linear function. Unlike existing methods of a similar nature, the scheme of the new method is easy to remember and can also be implemented for systems of nonlinear equations. The applicability of the method is demonstrated on some practical as well as academic problems of a scalar and multi-dimensional nature. In addition, to check the efficacy of the new technique, a comparison of its performance with existing techniques of the same order is also provided.
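The authors' rational-function formula is not given in the abstract; as a point of comparison, a classical derivative-free one-point scheme with memory of the same order (approximately 1.84), which replaces $f'(x_n)$ in Newton's iteration by the derivative of the quadratic interpolant through the three latest iterates, can be sketched as:

```python
# Sketch of a classical order-~1.84 method with memory (not necessarily the
# authors' rational formula).  For clarity, stored function values are
# recomputed below; a careful implementation reuses them so that only one
# new function evaluation is needed per step.

def with_memory_184(fn, x0, x1, x2, tol=1e-13, max_iter=50):
    a, b, x = x0, x1, x2
    for _ in range(max_iter):
        fx = fn(x)
        if abs(fx) < tol:
            break
        # divided differences on the three most recent points
        f_xb = (fx - fn(b)) / (x - b)
        f_xa = (fx - fn(a)) / (x - a)
        f_ba = (fn(b) - fn(a)) / (b - a)
        deriv = f_xb + f_xa - f_ba        # ~ f'(x) to second order
        a, b, x = b, x, x - fx / deriv    # shift memory, take Newton-like step
    return x

# Example: cube root of 2 as the root of x^3 - 2
root = with_memory_184(lambda x: x ** 3 - 2.0, 1.0, 1.1, 1.2)
print(root)  # close to 2**(1/3) = 1.259921...
```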
Many optimal order multiple root techniques, which use derivatives in the algorithm, have been proposed in the literature. Many researchers have tried to construct an optimal family of derivative-free methods for multiple roots, but without success. With this as a motivating factor, here we present a new optimal class of derivative-free methods for obtaining multiple roots of nonlinear functions. The procedure involves a Traub–Steffensen iteration in the first step and a Traub–Steffensen-like iteration in the second step. Efficacy is checked on a good number of relevant numerical problems, which verifies the efficient convergent nature of the new methods. Moreover, we find that the new derivative-free methods are just as competent as other existing robust methods that use derivatives.
 We developed a new family of optimal eighth-order derivative-free iterative methods for finding simple roots of nonlinear equations based on King’s scheme and Lagrange interpolation. By incorporating four self-accelerating parameters and a weight function in a single variable, we extend the proposed family to an efficient iterative scheme with memory. Without performing additional functional evaluations, the order of convergence is boosted from 8 to 15.51560, and the efficiency index is raised from 1.6817 to 1.9847. To compare the performance of the proposed and existing schemes, some real-world problems are selected, such as the eigenvalue problem, continuous stirred-tank reactor problem, and energy distribution for Planck’s radiation. The stability and regions of convergence of the proposed iterative schemes are investigated through graphical tools, such as 2D symmetric basins of attractions for the case of memory-based schemes and 3D stereographic projections in the case of schemes without memory. The stability analysis demonstrates that our newly developed schemes have wider symmetric regions of convergence than the existing schemes in their respective domains.
 In this paper, we describe iterative derivative-free algorithms for multiple roots of a nonlinear equation. Many researchers have evaluated the multiple roots of a nonlinear equation using the first- or second-order derivative of functions. However, calculating the function’s derivative at each iteration is laborious. So, taking this as motivation, we develop second-order algorithms without using the derivatives. The convergence analysis is first carried out for particular values of multiple roots before coming to a general conclusion. According to the Kung–Traub hypothesis, the new algorithms will have optimal convergence since only two functions need to be evaluated at every step. The order of convergence is investigated using Taylor’s series expansion. Moreover, the applicability and comparisons with existing methods are demonstrated on three real-life problems (e.g., Kepler’s, Van der Waals, and continuous-stirred tank reactor problems) and three standard academic problems that contain the root clustering and complex root problems. Finally, we see from the computational outcomes that our approaches use the least amount of processing time compared with the ones already in use. This effectively displays the theoretical conclusions of this study.
In this paper, a class of efficient iterative methods with increasing order of convergence for solving systems of nonlinear equations is developed and analyzed. The methodology uses the well-known third-order Potra–Pták iteration in the first step and Newton-like iterations in the subsequent steps. The novelty of the methods is the increase in convergence order by three per step at the cost of only one additional function evaluation. In addition, the algorithm uses a single inverse operator in each iteration, which makes it computationally more efficient and attractive. Local convergence is studied in the more general setting of a Banach space under suitable assumptions. Theoretical results on convergence and computational efficiency are verified through numerical experimentation. Comparison of numerical results indicates that the developed algorithms outperform other similar algorithms available in the literature, particularly when applied to large systems of equations. The basins of attraction of some existing methods, along with the proposed method, are given to exhibit their performance.
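A sketch in the spirit of this construction (a Potra–Pták step followed by frozen-Jacobian refinements reusing a single inverse operator); the paper's exact composition and test systems are not reproduced. The 2x2 test system here is an illustrative choice:

```python
# Test system: F(x, y) = (x^2 + y^2 - 1, x - y), solution (sqrt(2)/2, sqrt(2)/2)

def F(v):
    x, y = v
    return [x * x + y * y - 1.0, x - y]

def J(v):
    x, y = v
    return [[2.0 * x, 2.0 * y], [1.0, -1.0]]

def solve2(A, r):
    """Solve the 2x2 system A u = r by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(r[0] * A[1][1] - r[1] * A[0][1]) / det,
            (A[0][0] * r[1] - A[1][0] * r[0]) / det]

def potra_ptak_multistep(v, extra_steps=1, iters=10):
    for _ in range(iters):
        A = J(v)                           # single Jacobian (inverse operator) per iteration
        Fx = F(v)
        y1 = [vi - di for vi, di in zip(v, solve2(A, Fx))]    # Newton predictor
        s = [a + b for a, b in zip(Fx, F(y1))]
        v = [vi - di for vi, di in zip(v, solve2(A, s))]      # Potra-Ptak (3rd order) step
        for _ in range(extra_steps):       # frozen-Jacobian refinement steps
            v = [vi - di for vi, di in zip(v, solve2(A, F(v)))]
    return v

sol = potra_ptak_multistep([0.8, 0.6])
print(sol)  # close to (0.70710678..., 0.70710678...)
```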
We introduce a new faster two-step King–Werner-type iterative method for solving nonlinear equations. The methodology is based on rational Hermite interpolation. The local as well as semi-local convergence analyses are presented under weak center-Lipschitz and Lipschitz conditions. The convergence order is increased from $1+\sqrt{2}$ to 3 without any additional function evaluations. Another advantage is the convenient fact that this method does not use derivatives. Numerical examples further validate the theoretical results.
We generalize a family of optimal eighth order weighted-Newton methods to Banach spaces and study their local convergence. In a previous study, the Taylor expansion of higher order derivatives was employed, which may not exist or may be very expensive to compute. However, the hypotheses of the present study are based on the first Fréchet derivative only, thereby expanding the applicability of the methods. The new analysis also provides the radius of convergence, error bounds and estimates on the uniqueness of the solution. Such estimates are not provided in the approaches that use Taylor expansions of derivatives of higher order. Moreover, the order of convergence for the methods is verified by using the computational order of convergence or the approximate computational order of convergence, without using higher order derivatives. Numerical examples are provided to verify the theoretical results and to show the good convergence behavior.
We study the local convergence analysis of a fifth order method and its multi-step version in Banach spaces. The hypotheses used are based on the first Fréchet derivative only. The new approach provides a computable radius of convergence, error bounds on the distances involved, and estimates on the uniqueness of the solution. Such estimates are not provided in the approaches using Taylor expansions of higher order derivatives, which may not exist or may be very expensive or impossible to compute. Numerical examples are provided to validate the theoretical results. Convergence domains of the methods are also examined through the complex geometry revealed by drawing basins of attraction. The boundaries of the basins show fractal-like shapes, and the basins are symmetric.
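The basins-of-attraction diagnostic mentioned here can be illustrated with plain Newton iteration on $p(z) = z^3 - 1$; the papers apply the same tool to their own methods:

```python
# Classify points of a complex grid by the cube root of unity that Newton's
# method converges to; coloring these labels produces the basin pictures.

def newton_basin(z, max_iter=60, tol=1e-8):
    """Return the index (0, 1, 2) of the cube root of unity that z converges
    to under Newton iteration, or -1 if there is no convergence."""
    roots = [1.0 + 0.0j, -0.5 + 0.8660254037844386j, -0.5 - 0.8660254037844386j]
    for _ in range(max_iter):
        dz = 3.0 * z * z
        if dz == 0:
            return -1
        z = z - (z ** 3 - 1.0) / dz
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i
    return -1

# Classify a 41x41 grid on [-2, 2]^2 (offset slightly to avoid z = 0)
n = 41
labels = [newton_basin(complex(-2 + 4 * i / (n - 1) + 1e-9,
                               -2 + 4 * j / (n - 1) + 1e-9))
          for i in range(n) for j in range(n)]
converged = sum(1 for t in labels if t >= 0)
print(converged, len(labels))  # nearly every grid point converges to some root
```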
In this paper, a derivative-free one-point iterative technique with memory is proposed for finding multiple roots of practical problems, such as the van der Waals and continuous stirred tank reactor problems, whose multiplicity is unknown in the literature. The new technique has an order of convergence of 1.84 and requires two function evaluations. It can be used as a seed to produce higher-order methods with similar properties, and it increases the efficiency of a similar procedure without memory due to Schröder. After studying its order of convergence, its stability is checked by applying it to the considered problems and comparing it with techniques of the same nature for finding multiple roots. The geometrical behavior of the numerical results of the techniques is also studied.
In the study of systems’ dynamics the presence of symmetry dramatically reduces the complexity, while in chemistry, symmetry plays a central role in the analysis of the structure, bonding, and spectroscopy of molecules. In a more general context, the principle of equivalence, a principle of local symmetry, dictates the dynamics of gravity, of space-time itself. In certain instances, especially in the presence of symmetry, we end up having to deal with an equation with multiple roots. A variety of optimal methods have been proposed in the literature for multiple roots with known multiplicity, all of which need derivative evaluations in the formulations. However, optimal methods without derivatives are few in the literature. Motivated by this feature, here we present a novel optimal family of fourth-order methods for multiple roots with known multiplicity, which do not use any derivative. The scheme of the new iterative family consists of two steps, namely Traub–Steffensen and Traub–Steffensen-like iterations with a weight factor. According to the Kung–Traub hypothesis, the new algorithms satisfy the optimality criterion. Taylor’s series expansion is used to examine the order of convergence. We also demonstrate the application of the new algorithms to real-life problems, namely, the Van der Waals problem, the Manning problem, the Planck law radiation problem, and Kepler’s problem. Furthermore, performance comparisons have shown that the given derivative-free algorithms are competitive with existing optimal fourth-order algorithms that require derivative information.
 Nonlinear equations are frequently encountered in many areas of applied science and engineering, and they require efficient numerical methods to solve. To ensure quick and precise root approximation, this study presents derivative-free iterative methods for finding multiple zeros with an ideal fourth-order convergence rate. Furthermore, the study explores applications of the methods in both real-life and academic contexts. In particular, we examine the convergence of the methods by applying them to the problems, namely Van der Waals equation of state, Planck’s law of radiation, the Manning equation for isentropic supersonic flow and some academic problems. Numerical results reveal that the proposed derivative-free methods are more efficient and consistent than existing methods.
We discuss the local convergence of a derivative-free eighth order method in a Banach space setting. The present study provides the radius of convergence and bounds on errors under a hypothesis based on the first Fréchet derivative only. The approaches using Taylor expansions, which contain higher order derivatives, do not provide such estimates since the derivatives may be nonexistent or costly to compute. By using only the first derivative, the method can be applied to a wider class of functions, and hence its range of application is expanded. Numerical experiments show that the present results are applicable to cases wherein previous results cannot be applied.
There are a good number of higher-order iterative methods for computing multiple zeros of nonlinear equations in the available literature. Most of them require first- or higher-order derivatives of the involved function. No doubt, high-order derivative-free methods for multiple zeros are more difficult to obtain than methods for simple zeros or methods that use first order derivatives. This study presents an optimal family of fourth order derivative-free techniques for multiple zeros that requires just three evaluations of the function $\phi$ per iteration. The approximations of the derivatives are based on symmetric divided differences. We also demonstrate the application of the new algorithms to the Van der Waals, Planck law radiation, Manning isentropic supersonic flow and complex root problems. Numerical results reveal that the proposed derivative-free techniques are more efficient than other existing methods in terms of CPU time, residual error, computational order of convergence, number of iterations and the difference between two consecutive iterations.
 High-order iterative techniques without derivatives for multiple roots have wide-ranging applications in optimization tasks, where the objective function lacks explicit derivatives or is computationally expensive to evaluate, as well as in engineering design, finance, data science, and computational physics. The versatility and robustness of derivative-free fourth-order methods make them a valuable tool for tackling complex real-world optimization challenges. An optimal extension of the Traub–Steffensen technique for finding multiple roots is presented in this work. In contrast to past studies, the new expanded technique effectively handles functions with multiple zeros. In addition, a theorem is presented to analyze the convergence order of the proposed technique. We also examine the convergence behavior on four real-life problems, namely Planck’s law of radiation, the Van der Waals equation, the Manning equation for isentropic supersonic flow, and the blood rheology model, as well as two well-known academic problems. The efficiency of the approach and its convergence behavior are studied, providing valuable insights for practical and academic applications.
 In nonlinear problems where a function’s derivatives are difficult or expensive to compute, derivative-free iterative methods are good options for finding the numerical solution. An important part of the development of such methods is the study of their convergence properties. In this paper, we review the concepts of local and semi-local convergence for a derivative-free method for nonlinear equations. In the earlier study of the considered method, the convergence analysis was carried out assuming the existence of higher order derivatives, even though no derivative is used in the method. Such assumptions certainly restrict its applicability. The present study further provides an estimate of the convergence radius and bounds on the error for the given method. Thus, the applicability of the method is extended to a wider class of problems. We also review some recent developments in this area. The results presented in this paper can be useful for practitioners and researchers in developing and analyzing derivative-free numerical algorithms.
 We present a new family of optimal eighth-order numerical methods for finding the multiple zeros of nonlinear functions. The methodology used for constructing the iterative scheme is based on the approach called the ‘weight factor approach’. This approach ingeniously combines weight functions to enhance convergence properties and stability. An extensive convergence analysis is conducted to prove that the proposed scheme achieves optimal eighth-order convergence, providing a significant improvement in efficiency over lower-order methods. Furthermore, the applicability of these novel methods to some real-world problems is demonstrated, showcasing their superior performance in terms of speed and accuracy. This is illustrated through a series of three examples involving basins of attraction with reflection symmetry, confirming the dominance of the new methods over existing counterparts. The examples highlight not only the robustness and precision of the proposed methods but also their practical utility in solving the complex nonlinear equations encountered in various scientific and engineering domains. Consequently, these eighth-order methods hold great promise for advancing computational techniques in fields that require the resolution of multiple roots with high precision.
 There are only a few optimal eighth order methods in the literature for computing multiple zeros of a nonlinear function. Therefore, in this work our main focus is on developing a new family of optimal eighth order iterative methods for multiple zeros. The applicability of the proposed methods is demonstrated on some real-life and academic problems that illustrate their efficient convergence behavior. It is shown that the newly developed schemes are able to compete with other methods in terms of numerical error, convergence and computational time. Stability is also demonstrated by means of a pictorial tool, namely, basins of attraction, whose borders have fractal-like shapes and across which the basins are symmetric.
 Numerous techniques are available in the literature for finding the multiple roots of nonlinear equations. These techniques are categorized by their order, informational efficiency and efficiency index. Another important criterion for comparing the techniques is to study their complex dynamics using a graphical tool, namely, basins of attraction. In this paper, we consider several techniques of order three and characterize their basins of attraction by applying them to different polynomials.
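Basins of attraction are computed directly: for each starting point on a grid, run the iteration and record which root it settles on. The sketch below does this for the classical Newton iteration on p(z) = z^3 - 1 (an illustrative choice of method and polynomial, not one of the third-order techniques studied in the paper); plotting `labels` with any image tool reveals the familiar fractal basin boundaries.

```python
import numpy as np

def newton_basins(n=200, iters=40):
    """Label each point of a grid over [-2,2]^2 by the cube root of unity
    that Newton's iteration z <- z - (z^3 - 1)/(3 z^2) converges to."""
    xs = np.linspace(-2.0, 2.0, n)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y
    Z[Z == 0] = 1e-12                      # avoid dividing by zero at the origin
    with np.errstate(divide="ignore", invalid="ignore"):
        for _ in range(iters):
            Z = Z - (Z**3 - 1.0) / (3.0 * Z**2)
        roots = np.exp(2j * np.pi * np.arange(3) / 3.0)  # the three cube roots of 1
        labels = np.argmin(np.abs(Z[..., None] - roots), axis=-1)
    return labels

labels = newton_basins()
print(sorted(set(labels.ravel().tolist())))  # all three basins occur: [0, 1, 2]
```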
 Derivative-free iterative methods are useful for approximating numerical solutions when the given function lacks explicit derivative information or when the derivatives are too expensive to compute. Exploring the convergence properties of such methods is crucial in their development. Determining the convergence behavior and practical applicability of such approaches requires both local and semi-local convergence analysis. In this study, we explore the convergence properties of a sixth-order derivative-free method. Previous local convergence studies assumed the existence of high order derivatives even though the method itself does not utilize any derivatives. These assumptions imposed limitations on its applicability. In this paper, we extend the local analysis by providing estimates for the error bounds of the method. Consequently, its applicability expands across a broader range of problems. Moreover, the more important and challenging semi-local convergence, not investigated in earlier studies, is also developed. Additionally, we survey recent advancements in this field. The outcomes presented in this paper can prove valuable to practitioners and researchers engaged in the development and analysis of derivative-free numerical algorithms. Numerical tests further illustrate and validate the theoretical results.
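In numerical experiments the theoretical order is commonly checked by the approximated computational order of convergence (ACOC), which needs only successive iterates and no knowledge of the exact root. A minimal sketch, using Newton's method on a simple example as a stand-in for the sixth-order method discussed above:

```python
import math

def acoc(xs):
    """Approximated computational order of convergence:
    rho ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}| with e_n = x_{n+1} - x_n."""
    e = [xs[i + 1] - xs[i] for i in range(len(xs) - 1)]
    return math.log(abs(e[-1] / e[-2])) / math.log(abs(e[-2] / e[-3]))

# Newton's method on f(x) = x^2 - 2 (simple root sqrt(2), second order).
f, df = (lambda x: x * x - 2.0), (lambda x: 2.0 * x)
xs = [1.0]
for _ in range(5):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
print(round(acoc(xs), 1))  # 2.0, matching the theoretical order
```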
 Some optimal and non-optimal iterative approaches for computing multiple zeros of nonlinear functions have recently been published in the literature for the case when the multiplicity ξ of the root is known. Here, we present a new family of iterative algorithms for multiple zeros that is distinct from the existing approaches. Some special cases of the new family are presented, and it is found that the existing Liu-Zhou methods are special cases of the new family. To check the consistency and stability of the new methods, we consider the continuous stirred tank reactor problem, an isentropic supersonic flow problem, an eigenvalue problem, a complex root problem, and a standard test problem in the numerical section, and we find that the new methods compare favorably with other existing fourth-order methods. The errors reported there confirm the robust character of the new methods.
 A multi-step derivative-free iterative technique is developed by extending the well-known Traub-Steffensen iteration for solving systems of nonlinear equations. Keeping the computational aspects in mind, the general idea in constructing the scheme is to utilize a single inverse operator per iteration. In fact, techniques of this type are hardly found in the literature. Under standard assumptions, the proposed technique is found to possess fifth order of convergence. In order to assess the computational complexity, the efficiency index is computed and compared with the efficiency of existing methods of a similar nature. The complexity analysis suggests that the developed method is computationally more efficient than its existing counterparts. Furthermore, the performance of the method is examined numerically by locating the solutions of a variety of systems of nonlinear equations. Numerical results regarding accuracy, convergence behavior and elapsed CPU time confirm the efficient behavior of the proposed technique.
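The "single inverse operator per iteration" idea can be illustrated as follows: form one derivative-free (finite-difference) approximation to the Jacobian per outer iteration, invert it once, and reuse that inverse across all substeps. This is a hedged frozen-Jacobian sketch of the general principle, not the authors' fifth-order scheme; the test system and step counts are illustrative choices.

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """Forward-difference Jacobian approximation (derivative-free)."""
    n = x.size
    J = np.empty((n, n))
    Fx = F(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - Fx) / h
    return J

def multistep_step(F, x, substeps=3):
    """One outer iteration that forms a single inverse operator and reuses
    it over several Newton-like substeps (an illustrative frozen-Jacobian
    sketch of the 'single inverse per iteration' idea)."""
    Jinv = np.linalg.inv(fd_jacobian(F, x))   # the only inverse this iteration
    y = x
    for _ in range(substeps):
        y = y - Jinv @ F(y)
    return y

# Example system: x^2 + y^2 = 4, x*y = 1 (a solution near (1.932, 0.518)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
x = np.array([2.0, 0.5])
for _ in range(5):
    x = multistep_step(F, x)
print(np.allclose(F(x), 0.0, atol=1e-10))  # True
```

Reusing one factorization/inverse across substeps is exactly what keeps the per-iteration linear-algebra cost low for large systems.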
 Nonlinear equations are frequently encountered in many areas of applied science and engineering, and they require efficient numerical methods to solve. To ensure quick and precise root approximation, this study presents derivative-free iterative methods for finding multiple zeros with an optimal fourth-order convergence rate. Furthermore, the study explores applications of the methods in both real-life and academic contexts. In particular, we examine the convergence of the methods by applying them to several problems, namely the Van der Waals equation of state, Planck’s law of radiation, the Manning equation for isentropic supersonic flow, and some academic problems. Numerical results reveal that the proposed derivative-free methods are more efficient and consistent than existing methods.
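One of the named test problems is easy to reproduce: in Planck's law of radiation, locating the wavelength of maximum spectral density reduces to solving e^{-x} + x/5 - 1 = 0, whose nonzero root x* ≈ 4.965114 yields Wien's displacement constant. The sketch below solves it with a basic Traub–Steffensen step as a derivative-free baseline (a second-order method, not the paper's fourth-order scheme; the parameter `beta` is an illustrative choice):

```python
import math

# Planck radiation problem: e^{-x} + x/5 - 1 = 0, nonzero root x* ~ 4.965114.
f = lambda x: math.exp(-x) + x / 5.0 - 1.0

def steffensen_step(f, x, beta=0.01):
    """Traub-Steffensen step x - f(x) / f[x, x + beta*f(x)]:
    derivative-free, second order; a baseline, not the fourth-order family."""
    fx = f(x)
    w = x + beta * fx
    return x - fx * (w - x) / (f(w) - fx)

x = 5.0
for _ in range(6):
    if abs(f(x)) < 1e-12:   # stop before the divided difference degenerates
        break
    x = steffensen_step(f, x)
print(abs(f(x)) < 1e-12)  # True: root near 4.965114
```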
 In the study of a system’s dynamics, the presence of symmetry dramatically reduces the complexity, while in chemistry, symmetry plays a central role in the analysis of the structure, bonding, and spectroscopy of molecules. In a more general context, the principle of equivalence, a principle of local symmetry, dictates the dynamics of gravity, of space-time itself. In certain instances, especially in the presence of symmetry, we end up having to deal with an equation with multiple roots. A variety of optimal methods have been proposed in the literature for multiple roots with known multiplicity, all of which need derivative evaluations in their formulations. However, optimal methods without derivatives are few in the literature. Motivated by this, here we present a novel optimal family of fourth-order methods for multiple roots with known multiplicity, which do not use any derivative. The scheme of the new iterative family consists of two steps, namely a Traub-Steffensen iteration and a Traub-Steffensen-like iteration with a weight factor. According to the Kung-Traub hypothesis, the new algorithms satisfy the optimality criterion. Taylor series expansion is used to examine the order of convergence. We also demonstrate the application of the new algorithms to real-life problems, namely the Van der Waals problem, the Manning problem, the Planck law radiation problem, and Kepler’s problem. Furthermore, performance comparisons show that the given derivative-free algorithms are competitive with existing optimal fourth-order algorithms that require derivative information.
 We develop a new family of optimal eighth-order derivative-free iterative methods for finding simple roots of nonlinear equations based on King’s scheme and Lagrange interpolation. By incorporating four self-accelerating parameters and a weight function in a single variable, we extend the proposed family to an efficient iterative scheme with memory. Without performing additional function evaluations, the order of convergence is boosted from 8 to 15.51560, and the efficiency index is raised from 1.6817 to 1.9847. To compare the performance of the proposed and existing schemes, some real-world problems are selected, such as an eigenvalue problem, the continuous stirred-tank reactor problem, and the energy distribution for Planck’s radiation. The stability and regions of convergence of the proposed iterative schemes are investigated through graphical tools, such as 2D symmetric basins of attraction for the memory-based schemes and 3D stereographic projections for the schemes without memory. The stability analysis demonstrates that our newly developed schemes have wider symmetric regions of convergence than the existing schemes in their respective domains.
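The role of a self-accelerating parameter can be shown on a much simpler scheme: a Steffensen-type iteration whose free parameter is updated from previously stored iterates. The toy sketch below is a classical with-memory acceleration, not the paper's eighth-order family; the parameter values are illustrative. The key mechanic is the same: the order improves without any extra function evaluations, because the parameter is recycled from data already computed.

```python
def solve_with_memory(f, x0, b0=0.01, iters=8, tol=1e-13):
    """Steffensen-type iteration x <- x - f(x)/f[x, x + b*f(x)] with the
    self-accelerating parameter b updated from previous iterates via a
    divided difference (b ~ -1/f'(x*)). A classical with-memory sketch."""
    x_prev, x = None, x0
    b = b0
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            break
        if x_prev is not None:
            # memory step: reuse stored data, no new function evaluations
            b = -(x - x_prev) / (f(x) - f(x_prev))
        w = x + b * fx
        x_prev, x = x, x - fx * (w - x) / (f(w) - fx)
    return x

root = solve_with_memory(lambda x: x**3 - x - 2.0, 1.5)
print(abs(root**3 - root - 2.0) < 1e-12)  # True: root near 1.5214
```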
 In this paper, a derivative-free one-point iterative technique with memory is proposed for finding multiple roots of practical problems, such as the van der Waals and continuous stirred tank reactor problems, whose multiplicity is unknown. The new technique has an order of convergence of 1.84 and requires two function evaluations. It can be used as a seed to produce higher-order methods with similar properties, and it increases the efficiency of a similar procedure without memory due to Schröder. After studying its order of convergence, its stability is checked by applying it to the considered problems and comparing it with a technique of the same nature for finding multiple roots. The geometrical behavior of the numerical results of the techniques is also studied.
 In this paper, we describe derivative-free iterative algorithms for multiple roots of a nonlinear equation. Many researchers have computed the multiple roots of a nonlinear equation using the first- or second-order derivative of the function. However, calculating the function’s derivative at each iteration is laborious. Taking this as motivation, we develop second-order algorithms that do not use derivatives. The convergence analysis is first carried out for particular values of the multiplicity before reaching a general conclusion. According to the Kung–Traub hypothesis, the new algorithms have optimal convergence, since only two function evaluations are needed at every step. The order of convergence is investigated using Taylor series expansion. Moreover, the applicability of the algorithms and comparisons with existing methods are demonstrated on three real-life problems (Kepler’s, Van der Waals, and continuous stirred tank reactor problems) and three standard academic problems that include root clustering and complex roots. Finally, the computational outcomes show that our approaches use the least processing time compared with the ones already in use, which effectively confirms the theoretical conclusions of this study.
 We study the local convergence analysis of a fifth order method and its multi-step version in Banach spaces. The hypotheses used are based on the first Fréchet derivative only. The new approach provides a computable radius of convergence, error bounds on the distances involved, and estimates on the uniqueness of the solution. Such estimates are not provided by approaches using Taylor expansions with higher order derivatives, which may not exist or may be very expensive or impossible to compute. Numerical examples are provided to validate the theoretical results. Convergence domains of the methods are also examined through the complex geometry revealed by drawing basins of attraction. The boundaries of the basins show fractal-like shapes, across which the basins are symmetric.
 In this paper, a class of efficient iterative methods with increasing order of convergence for solving systems of nonlinear equations is developed and analyzed. The methodology uses the well-known third-order Potra–Pták iteration in the first step and Newton-like iterations in the subsequent steps. The novelty of the methods is the increase in convergence order by three per step at the cost of only one additional function evaluation. In addition, the algorithm uses a single inverse operator in each iteration, which makes it computationally more efficient and attractive. Local convergence is studied in the more general setting of a Banach space under suitable assumptions. Theoretical results on convergence and computational efficiency are verified through numerical experimentation. Comparison of numerical results indicates that the developed algorithms outperform other similar algorithms available in the literature, particularly when applied to large systems of equations. The basins of attraction of some of the existing methods, along with the proposed method, are given to exhibit their performance.
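The Potra–Pták building block named above is short enough to state concretely in the scalar case: with one derivative evaluation d = f'(x) (the scalar analogue of a single inverse operator), take y = x - f(x)/d and then x_new = x - (f(x) + f(y))/d, which is third order. A minimal sketch (the test equation is an illustrative choice):

```python
import math

def potra_ptak_step(f, df, x):
    """One third-order Potra-Ptak step: the derivative d = f'(x) is evaluated
    once and reused, mirroring the 'single inverse operator' idea for systems."""
    d = df(x)
    y = x - f(x) / d
    return x - (f(x) + f(y)) / d

# Solve cos(x) = x (the Dottie number, ~0.739085).
f, df = (lambda x: math.cos(x) - x), (lambda x: -math.sin(x) - 1.0)
x = 1.0
for _ in range(4):
    x = potra_ptak_step(f, df, x)
print(abs(math.cos(x) - x) < 1e-14)  # True
```

Further substeps of the form z - f(z)/d, reusing the same d, are what raise the order by three per step in the class described above.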
 Many optimal order multiple root techniques, which use derivatives in the algorithm, have been proposed in the literature. Many researchers have tried to construct an optimal family of derivative-free methods for multiple roots, but without success. With this as motivation, here we present a new optimal class of derivative-free methods for obtaining multiple roots of nonlinear functions. This procedure involves a Traub–Steffensen iteration in the first step and a Traub–Steffensen-like iteration in the second step. Efficacy is checked on a good number of relevant numerical problems, which verify the efficient convergence behavior of the new methods. Moreover, we find that the new derivative-free methods are just as competent as other existing robust methods that use derivatives.
 Many optimal order multiple root techniques, which use derivatives in the algorithm, have been proposed in the literature. Contrarily, derivative-free optimal order techniques for multiple roots are almost nonexistent. With this as motivation, here we present a family of optimal fourth order derivative-free techniques for computing multiple roots of nonlinear equations. The convergence analysis is first carried out for particular values of the multiplicity and is then established in general form. The derivative-free method of Behl et al. is seen to be a special case of the family. Moreover, the applicability and comparisons are demonstrated on different nonlinear problems, which certify the efficient convergent nature of the new methods. Finally, we conclude that our new methods consume the lowest CPU time compared to the existing ones, which corroborates the theoretical outcomes of this study.
There are a few optimal eighth order methods in literature for computing multiple zeros of a nonlinear function. Therefore, in this work our main focus is on developing a new 
 There are a few optimal eighth order methods in literature for computing multiple zeros of a nonlinear function. Therefore, in this work our main focus is on developing a new family of optimal eighth order iterative methods for multiple zeros. The applicability of proposed methods is demonstrated on some real life and academic problems that illustrate the efficient convergence behavior. It is shown that the newly developed schemes are able to compete with other methods in terms of numerical error, convergence and computational time. Stability is also demonstrated by means of a pictorial tool, namely, basins of attraction that have the fractal-like shapes along the borders through which basins are symmetric.
 A number of optimal order multiple root techniques that require derivative evaluations in the formulas have been proposed in the literature. However, derivative-free optimal techniques for multiple roots are seldom obtained. Taking this as motivation, here we present a class of optimal fourth order methods for computing multiple roots without using derivatives in the iteration. The iterative formula consists of two steps, of which the first is the well-known Traub–Steffensen scheme and the second is a Traub–Steffensen-like scheme. Effectiveness is validated on different problems, which shows the robust convergent behavior of the proposed methods. It has been shown that the new derivative-free methods are good competitors to their existing counterparts that need derivative information.
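The first (Traub–Steffensen) step mentioned above can be sketched as follows. This is a minimal illustration of the derivative-free building block only, not the full fourth-order two-step scheme of the paper; the parameter beta and the test function are our own choices.

```python
def traub_steffensen_step(f, x, beta=0.01):
    """One Traub-Steffensen iteration: a Newton step in which f'(x) is
    replaced by the divided difference f[w, x] with w = x + beta*f(x),
    so no derivative evaluation is needed."""
    fx = f(x)
    w = x + beta * fx
    if fx == 0.0 or w == x:          # already converged
        return x
    dd = (f(w) - fx) / (w - x)       # first-order divided difference
    return x - fx / dd

# Illustrative run on f(x) = x**3 - 2 (simple root at 2**(1/3))
f = lambda x: x**3 - 2
x = 1.3
for _ in range(8):
    x = traub_steffensen_step(f, x)
```

Like Newton's method, the iteration converges quadratically to a simple root, but at the cost of two f-evaluations and no derivative per step.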
 A plethora of higher order iterative methods, involving derivatives in algorithms, are available in the literature for finding multiple roots. Contrary to this fact, the higher order methods without derivatives in the iteration are difficult to construct, and hence, such methods are almost non-existent. This motivated us to explore a derivative-free iterative scheme with optimal fourth order convergence. The applicability of the new scheme is shown by testing on different functions, which illustrates the excellent convergence. Moreover, the comparison of the performance shows that the new technique is a good competitor to existing optimal fourth order Newton-like techniques.
 We discuss the local convergence of a derivative-free eighth order method in a Banach space setting. The present study provides the radius of convergence and bounds on errors under a hypothesis based on the first Fréchet derivative only. Approaches that use Taylor expansions containing higher order derivatives do not provide such estimates, since the derivatives may be nonexistent or costly to compute. By using only the first derivative, the method can be applied to a wider class of functions, and hence its range of applications is expanded. Numerical experiments show that the present results are applicable to cases where previous results cannot be applied.
 Many optimal order multiple root techniques involving derivatives have been proposed in the literature. On the contrary, optimal order multiple root techniques without derivatives are almost nonexistent. With this as a motivational factor, here we develop a family of optimal fourth-order derivative-free iterative schemes for computing multiple roots. The procedure is based on two steps, of which the first is the Traub–Steffensen iteration and the second is a Traub–Steffensen-like iteration. The theoretical results proved for particular cases of the family are symmetric to each other, a feature that leads us to prove the general result establishing the fourth-order convergence. Efficacy is demonstrated on different test problems, which verifies the efficient convergent nature of the new methods. Moreover, the comparison of performance shows the presented derivative-free techniques to be good competitors to the existing optimal fourth-order methods that use derivatives.
 We propose a derivative-free one-point method with memory of order 1.84 for solving nonlinear equations. The formula requires only one function evaluation per iteration and, therefore, the efficiency index is also 1.84. The methodology is carried out by approximating the derivative in Newton’s iteration using a rational linear function. Unlike the existing methods of a similar nature, the scheme of the new method is easy to remember and can also be implemented for systems of nonlinear equations. The applicability of the method is demonstrated on some practical as well as academic problems of a scalar and multi-dimensional nature. In addition, to check the efficacy of the new technique, a comparison of its performance with the existing techniques of the same order is also provided.
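The rational-function derivative estimate is specific to the paper, but a classical scheme of the same order 1.84 (the positive root of $t^3-t^2-t-1$) estimates f′(xₙ) from the interpolating quadratic through the last three iterates. The sketch below is that classical analogue, not the authors' exact formula; all names and starting values are our own.

```python
import math

def memory_method(f, x0, x1, x2, tol=1e-12, maxit=25):
    """One-point iteration with memory: each step costs a single new
    f-evaluation; f'(x_n) is estimated from the quadratic interpolant
    through the last three iterates (convergence order ~1.84)."""
    xs = [x0, x1, x2]
    fs = [f(x0), f(x1), f(x2)]
    for _ in range(maxit):
        a, b, c = xs[-3], xs[-2], xs[-1]
        fa, fb, fc = fs[-3], fs[-2], fs[-1]
        # derivative of the interpolating quadratic, evaluated at c
        deriv = ((fc - fb) / (c - b) + (fc - fa) / (c - a)
                 - (fb - fa) / (b - a))
        x_new = c - fc / deriv            # Newton-like step
        xs.append(x_new)
        fs.append(f(x_new))               # the one new evaluation
        if abs(x_new - c) < tol:
            break
    return xs[-1]

# Illustrative run on cos(x) = x
root = memory_method(lambda x: math.cos(x) - x, 0.5, 0.6, 0.7)
```

Since only one new evaluation is used per step, the efficiency index equals the convergence order, as the abstract notes.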
 A number of higher order iterative methods with derivative evaluations have been developed in the literature for computing multiple zeros. However, higher order methods without derivatives for multiple zeros are difficult to obtain, and hence such methods are rare in the literature. Motivated by this fact, we present a family of eighth order derivative-free methods for computing multiple zeros. Per iteration the methods require only four function evaluations; therefore, they are optimal in the sense of the Kung–Traub conjecture. Stability of the proposed class is demonstrated by means of a graphical tool, namely, basins of attraction. The boundaries of the basins have fractal-like shapes, across which the basins are symmetric. Applicability of the methods is demonstrated on different nonlinear functions, which illustrates the efficient convergence behavior. Comparison of the numerical results shows that the new derivative-free methods are good competitors to the existing optimal eighth-order techniques which require derivative evaluations.
 We generalize a family of optimal eighth order weighted-Newton methods to Banach spaces and study their local convergence. In a previous study, Taylor expansions involving higher order derivatives were employed, which may not exist or may be very expensive to compute. However, the hypotheses of the present study are based on the first Fréchet derivative only, thereby expanding the applicability of the methods. The new analysis also provides the radius of convergence, error bounds and estimates on the uniqueness of the solution. Such estimates are not provided in approaches that use Taylor expansions of derivatives of higher order. Moreover, the order of convergence of the methods is verified by using the computational order of convergence or the approximate computational order of convergence, without using higher order derivatives. Numerical examples are provided to verify the theoretical results and to show the good convergence behavior.
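The approximate computational order of convergence mentioned above can be obtained from the iterates alone, with no derivatives. A minimal sketch follows; the Newton test problem is our own illustration.

```python
import math

def acoc(xs):
    """Approximate computational order of convergence (ACOC), computed
    from the last four iterates using only successive differences."""
    e1 = abs(xs[-1] - xs[-2])
    e2 = abs(xs[-2] - xs[-3])
    e3 = abs(xs[-3] - xs[-4])
    return math.log(e1 / e2) / math.log(e2 / e3)

# Newton iterates for f(x) = x**2 - 2; quadratic convergence gives ACOC ~ 2
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
```

For the iterates above, `acoc(xs)` is close to 2, consistent with Newton's quadratic convergence.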
 We introduce a new faster two-step King–Werner-type iterative method for solving nonlinear equations. The methodology is based on rational Hermite interpolation. The local as well as semi-local convergence analyses are presented under weak center-Lipschitz and Lipschitz conditions. The convergence order is increased from $1+\sqrt{2}$ to 3 without any additional function evaluations. Another advantage is the convenient fact that this method does not use derivatives. Numerical examples further validate the theoretical results.
 In recent papers we have considered the numerical solution of the Hammerstein equation \[y(t) = f(t) + \int_a^b {k(t,s)g(s,y(s))ds,} \quad t \in [a,b],\] by a method which first applies the standard collocation procedure to an equivalent equation for $z(t):= g(t,y(t))$, and then obtains an approximation to y by use of the equation \[y(t) = f(t) + \int_a^b {k(t,s)z(s)ds,} \quad t \in [a,b].\] In this paper we approximate z by a polynomial $z_n$ of degree ≤ n − 1, with coefficients determined by collocation at the zeros of the nth degree Chebyshev polynomial of the first kind. We then define the approximation to y to be \[y_n(t) = f(t) + \int_a^b {k(t,s)z_n(s)ds,} \quad t \in [a,b],\] and establish that, under suitable conditions, $y_n \to y$ uniformly in t.
 In recent papers we have considered the numerical solution of the Hammerstein equation \[y(t) = f(t) + \int_a^b {k(t,s)g(s,y(s))ds,} \quad t \in [a,b],\] by a method that first applies the standard collocation procedure to an equivalent equation for $z(t): = g(t,y(t))$, and then obtains an approximation to y by use of the equation \[ y(t) = f(t) + \int_a^b {k(t,s)z(s)ds,} \quad t \in [a,b].\] We study here a discretized version of the above method. This arises when numerical quadrature is used to approximate the definite integrals occurring in the said method. We consider the use of interpolatory quadrature rules, and we seek the discrete collocation approximation to z in certain piecewise-polynomial function spaces. Our principal result gives the precision required of a quadrature rule to guarantee the best possible (super)convergence rate for the discrete approximation to y. In the case where the kernel k is sufficiently smooth, this rate is of the same order as the rate for an approximation to y obtained via the exact method.
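A minimal numerical sketch of the two-stage idea (solve for z first, then recover y): the discretization below uses Gauss–Legendre quadrature and plain fixed-point iteration rather than the paper's piecewise-polynomial collocation, and the data f, k, g form a hypothetical test problem of our own.

```python
import numpy as np

def solve_hammerstein(f, k, g, a, b, n=20, sweeps=200):
    """Discretized two-stage method (illustrative):
    (1) solve z(s_i) = g(s_i, f(s_i) + sum_j w_j k(s_i,s_j) z(s_j))
        at Gauss-Legendre nodes by fixed-point iteration;
    (2) recover y(t) = f(t) + sum_j w_j k(t,s_j) z(s_j)."""
    x, w = np.polynomial.legendre.leggauss(n)
    s = 0.5 * (b - a) * x + 0.5 * (b + a)   # nodes mapped to [a, b]
    w = 0.5 * (b - a) * w
    K = k(s[:, None], s[None, :])           # kernel matrix K[i, j]
    z = g(s, f(s))                          # initial guess z = g(., f)
    for _ in range(sweeps):
        z = g(s, f(s) + K @ (w * z))        # fixed-point sweep
    def y(t):
        t = np.asarray(t, dtype=float)
        return f(t) + k(t[..., None], s) @ (w * z)
    return y

# Hypothetical test problem: y(t) = sin(t) + int_0^1 t*s*y(s)^2 ds
f = lambda t: np.sin(t)
k = lambda t, s: t * s
g = lambda s, y: y ** 2
y = solve_hammerstein(f, k, g, 0.0, 1.0)
```

Note that step (2) evaluates y at arbitrary t from the computed nodal values of z, which is exactly where the superconvergence discussed above is observed for smooth kernels. Plain fixed-point iteration converges only for contractive problems; the papers use Newton-type solvers for the collocation equations instead.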
 In this paper a simple turning point ($y = y^c$, $\lambda = \lambda^c$) of the parameter-dependent Hammerstein equation \[y(t) = f(t) + \lambda \int_a^b {k(t,s)g(s,y(s))\,ds,} \quad t \in [a,b],\] is approximated numerically in the following way. A simple turning point ($z = z^c$, $\lambda = \lambda^c$) of an equivalent equation for $z(t):=\lambda g(t,y(t))$ is computed first. This is done by solving a discretized version of a certain system of equations which has ($z^c$, $\lambda^c$) as part of an isolated solution. The particular discretization used here is standard piecewise polynomial collocation. Finally, an approximation to $y^c$ is obtained by use of the (exact) equation \[y(t) = f(t) + \int_a^b {k(t,s)z(s)\,ds,} \quad t \in [a,b].\] The main result of the paper is that, under suitable conditions, the approximations to $y^c$ and $\lambda^c$ are both superconvergent, that is, they both converge to their respective exact values at a faster rate than the collocation approximation of $z^c$ converges to $z^c$.
 The problem is to calculate a simple zero of a nonlinear function ƒ by iteration. There is exhibited a family of iterations of order $2^{n-1}$ which use n evaluations of ƒ and no derivative evaluations, as well as a second family of iterations of order $2^{n-1}$ based on n − 1 evaluations of ƒ and one of ƒ′. In particular, with four evaluations an iteration of eighth order is constructed. The best previous result for four evaluations was fifth order. It is proved that the optimal order of one general class of multipoint iterations is $2^{n-1}$ and that an upper bound on the order of a multipoint iteration based on n evaluations of ƒ (no derivatives) is $2^n$. It is conjectured that a multipoint iteration without memory based on n evaluations has optimal order $2^{n-1}$.
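For n = 2 the derivative-free pattern above is realized by Steffensen's classical method: two evaluations of ƒ per step and order $2^{2-1} = 2$. A sketch:

```python
def steffensen(f, x, tol=1e-12, maxit=50):
    """Steffensen's method: two f-evaluations per step and no
    derivatives, with quadratic convergence to a simple zero."""
    for _ in range(maxit):
        fx = f(x)
        if fx == 0.0:
            return x
        denom = f(x + fx) - fx       # equals (w - x)*f[x, w] with w = x + f(x)
        if denom == 0.0:
            return x
        x_new = x - fx * fx / denom  # Newton step with divided difference
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative run on f(x) = x**2 - 2
r = steffensen(lambda x: x * x - 2, 1.5)
```

The higher members of the family (e.g. the four-evaluation eighth-order iteration) are built by inverse interpolation and are considerably more involved.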
 General Preliminaries: 1.1 Introduction 1.2 Basic concepts and notations General Theorems on Iteration Functions: 2.1 The solution of a fixed-point problem 2.2 Linear and superlinear convergence 2.3 The iteration calculus The Mathematics of Difference Relations: 3.1 Convergence of difference inequalities 3.2 A theorem on the solutions of certain inhomogeneous difference equations 3.3 On the roots of certain indicial equations 3.4 The asymptotic behavior of the solutions of certain difference equations Interpolatory Iteration Functions: 4.1 Interpolation and the solution of equations 4.2 The order of interpolatory iteration functions 4.3 Examples One-Point Iteration Functions: 5.1 The basic sequence $E_s$ 5.2 Rational approximations to $E_s$ 5.3 A basic sequence of iteration functions generated by direct interpolation 5.4 The fundamental theorem of one-point iteration functions 5.5 The coefficients of the error series of $E_s$ One-Point Iteration Functions With Memory: 6.1 Interpolatory iteration functions 6.2 Derivative-estimated one-point iteration functions with memory 6.3 Discussion of one-point iteration functions with memory Multiple Roots: 7.1 Introduction 7.2 The order of $E_s$ 7.3 The basic sequence $\scr{E}_s$ 7.4 The coefficients of the error series of $\scr{E}_s$ 7.5 Iteration functions generated by direct interpolation 7.6 One-point iteration functions with memory 7.7 Some general results 7.8 An iteration function of incommensurate order Multipoint Iteration Functions: 8.1 The advantages of multipoint iteration functions 8.2 A new interpolation problem 8.3 Recursively formed iteration functions 8.4 Multipoint iteration functions generated by derivative estimation 8.5 Multipoint iteration functions generated by composition 8.6 Multipoint iteration functions with memory Multipoint Iteration Functions: Continuation: 9.1 Introduction 9.2 Multipoint iteration functions of type 1 9.3 Multipoint iteration functions of type 2 9.4 Discussion of criteria for the 
selection of an iteration function Iteration Functions Which Require No Evaluation of Derivatives: 10.1 Introduction 10.2 Interpolatory iteration functions 10.3 Some additional iteration functions Systems of Equations: 11.1 Introduction 11.2 The generation of vector-valued iteration functions by inverse interpolation 11.3 Error estimates for some vector-valued iteration functions 11.4 Vector-valued iteration functions which require no derivative evaluations A Compilation of Iteration Functions: 12.1 Introduction 12.2 One-point iteration functions 12.3 One-point iteration functions with memory 12.4 Multiple roots 12.5 Multipoint iteration functions 12.6 Multipoint iteration functions with memory 12.7 Systems of equations Appendices: A. Interpolation B. On the $j$th derivative of the inverse function C. Significant figures and computational efficiency D. Acceleration of convergence E. Numerical examples F. Areas for future research Bibliography Index.
 An optimal method is developed for approximating the multiple zeros of a nonlinear function when the multiplicity is known. Convergence analysis of the proposed technique is carried out to establish the fourth-order convergence. We further investigate the dynamics of such multiple zero finders by using basins of attraction and their corresponding fractals in the complex plane. A fourth-order method is also presented for the case when the multiplicity m is not known. Numerical comparisons are made to support the underlying theory of this paper.
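When the multiplicity m is known, the classical second-order baseline that such fourth-order schemes improve upon is the modified Newton iteration x ← x − m·f(x)/f′(x); a sketch (the test function is our own choice):

```python
import math

def modified_newton(f, df, m, x, iters=20):
    """Modified Newton method for a zero of known multiplicity m:
    the factor m restores quadratic convergence at a multiple root,
    where plain Newton is only linearly convergent."""
    for _ in range(iters):
        dfx = df(x)
        if dfx == 0.0:
            break                    # derivative vanished; stop
        x = x - m * f(x) / dfx
    return x

# f(x) = (exp(x) - 1)**2 has a zero of multiplicity 2 at x = 0
root = modified_newton(lambda x: (math.exp(x) - 1) ** 2,
                       lambda x: 2 * (math.exp(x) - 1) * math.exp(x),
                       2, 1.0)
```

With m = 1 (plain Newton) the same run would converge only linearly, halving the error per step instead of squaring it.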
In this paper, we have derived a family of multipoint iterative functions for finding multiple roots of equations. In addition, a comparison of computational results is made with other well-known methods.
A method of order three for finding multiple zeros of nonlinear functions is developed. The method requires two evaluations of the function and one evaluation of the derivative per step.
 We construct an optimal eighth-order scheme which will work for multiple zeros with multiplicity [Formula: see text], for the first time. Earlier, the maximum convergence order of multi-point iterative schemes was six for multiple zeros in the available literature. So, the main contribution of this study is to present a new higher-order as well as optimal scheme for multiple zeros for the first time. In addition, we present an extensive convergence analysis with a main theorem which theoretically confirms the eighth-order convergence of the proposed scheme. Moreover, we consider several real-life problems which contain simple as well as multiple zeros in order to compare with the existing robust iterative schemes. Finally, we conclude on the basis of the obtained numerical results that our iterative methods perform far better than the existing methods in terms of residual error, computational order of convergence and the difference between two consecutive iterations.
 Many optimal order multiple root techniques involving derivatives have been proposed in the literature. On the contrary, optimal order multiple root techniques without derivatives are almost nonexistent. With this as a motivational factor, here we develop a family of optimal fourth-order derivative-free iterative schemes for computing multiple roots. The procedure is based on two steps, of which the first is the Traub–Steffensen iteration and the second is a Traub–Steffensen-like iteration. The theoretical results proved for particular cases of the family are symmetric to each other, a feature that leads us to prove the general result establishing the fourth-order convergence. Efficacy is demonstrated on different test problems, which verifies the efficient convergent nature of the new methods. Moreover, the comparison of performance shows the presented derivative-free techniques to be good competitors to the existing optimal fourth-order methods that use derivatives.
 A plethora of higher order iterative methods, involving derivatives in algorithms, are available in the literature for finding multiple roots. Contrary to this fact, the higher order methods without derivatives in the iteration are difficult to construct, and hence, such methods are almost non-existent. This motivated us to explore a derivative-free iterative scheme with optimal fourth order convergence. The applicability of the new scheme is shown by testing on different functions, which illustrates the excellent convergence. Moreover, the comparison of the performance shows that the new technique is a good competitor to existing optimal fourth order Newton-like techniques.
 We suggest a derivative-free optimal method of second order, a new version of a modification of Newton’s method, for computing the multiple zeros of nonlinear single-variable functions. Iterative methods without derivatives for multiple zeros are not easy to obtain, and hence such methods are rare in the literature. Inspired by this fact, we develop a family of optimal second order derivative-free methods for multiple zeros that require only two function evaluations per iteration. The stability of the methods is validated through complex geometry by drawing basins of attraction. Moreover, the applicability of the methods is demonstrated on different functions. The study of numerical results shows that the new derivative-free methods are good alternatives to the existing optimal second-order techniques that require derivative calculations.
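One plausible member of such a family (a sketch under our own assumptions, not necessarily the authors' formula) scales a Traub–Steffensen-type step by the known multiplicity m:

```python
def df_multiple_root(f, m, x, beta=1.0, iters=30):
    """Derivative-free second-order sketch for a zero of known
    multiplicity m: a Traub-Steffensen step scaled by m, using two
    f-evaluations per iteration (beta is a free parameter)."""
    for _ in range(iters):
        fx = f(x)
        w = x + beta * fx
        if w == x:
            break                        # step below machine precision
        dd = (f(w) - fx) / (w - x)       # divided difference ~ f'(x)
        if dd == 0.0:
            break
        x = x - m * fx / dd
    return x

# f(x) = (x - 1)**2 * (x + 2) has a double zero at x = 1
root = df_multiple_root(lambda x: (x - 1) ** 2 * (x + 2), 2, 1.5)
```

The auxiliary point w = x + beta·f(x) collapses onto x as f(x) → 0, so the divided difference tracks f′(x) closely enough near the multiple root for the m-scaled step to retain second order.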
 Part I Basic tools of numerical analysis: systems of linear algebraic equations, eigenproblems, solution of nonlinear equations, polynomial approximation and interpolation, numerical differentiation and difference formulas, numerical integration. Part II Ordinary differential equations: solution of one-dimensional initial-value problems, solution of one-dimensional boundary-value problems. Part III Partial differential equations: elliptic partial differential equations - the Laplace equation, finite difference methods for propagation problems, parabolic partial differential equations - the convection equation, coordinate transformations and grid generation, parabolic partial differential equations - the convection-diffusion equation, hyperbolic partial differential equations - the wave equation. Appendix: the Taylor series.
 A number of higher order iterative methods with derivative evaluations have been developed in the literature for computing multiple zeros. However, higher order methods without derivatives for multiple zeros are difficult to obtain, and hence such methods are rare in the literature. Motivated by this fact, we present a family of eighth order derivative-free methods for computing multiple zeros. Per iteration the methods require only four function evaluations; therefore, they are optimal in the sense of the Kung–Traub conjecture. Stability of the proposed class is demonstrated by means of a graphical tool, namely, basins of attraction. The boundaries of the basins have fractal-like shapes, across which the basins are symmetric. Applicability of the methods is demonstrated on different nonlinear functions, which illustrates the efficient convergence behavior. Comparison of the numerical results shows that the new derivative-free methods are good competitors to the existing optimal eighth-order techniques which require derivative evaluations.
 In this paper, we introduce a new family of efficient and optimal iterative methods for finding multiple roots of nonlinear equations with known multiplicity (m ≥ 1). We use a weight-function approach involving one and two parameters to develop the new family. A comprehensive convergence analysis is carried out to demonstrate the optimal eighth-order convergence of the suggested scheme. Finally, numerical and dynamical tests are presented, which validate the theoretical results formulated in this paper and illustrate that the suggested family is efficient among multiple-root-finding methods.
 Here, we suggest a high-order optimal variant/modification of Schröder's method for obtaining the multiple zeros of nonlinear univariate functions. Based on the quadratically convergent Schröder method, we derive a new family of fourth-order multi-point methods having optimal convergence order. Additionally, we discuss the theoretical convergence order and the properties of the new scheme. The main finding of the present work is that one can develop several new methods, as well as recover some classical existing ones, by adjusting one of the parameters. Numerical results are given to illustrate the performance of our multi-point methods. We observe that our schemes are competitive with other existing methods.
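For reference, the quadratically convergent base iteration named above is classical Schröder's method, which is Newton's method applied to f/f' and therefore converges to multiple roots without knowing the multiplicity. The sketch below shows that base method only, not the fourth-order family of the abstract; the test polynomial is an illustrative choice.

```python
def schroder(f, df, d2f, x0, tol=1e-12, max_iter=100):
    """Classical Schroder iteration: x - f*f' / (f'^2 - f*f'').

    Equivalent to Newton's method on u(x) = f(x)/f'(x), so it is
    quadratically convergent to a multiple root even when the
    multiplicity is unknown.
    """
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = dfx * dfx - fx * d2fx
        if denom == 0.0:
            break
        dx = fx * dfx / denom
        x -= dx
        if abs(dx) < tol:
            break
    return x

# f(x) = (x-2)^2 (x+1): double root at x = 2.  The factored form
# avoids cancellation near the root; f' = 3x(x-2), f'' = 6x - 6.
root = schroder(lambda x: (x - 2.0)**2 * (x + 1.0),
                lambda x: 3.0 * x * (x - 2.0),
                lambda x: 6.0 * x - 6.0,
                x0=3.0)
```

Plain Newton would converge only linearly on this double root; Schröder's correction restores quadratic convergence, which is the starting point for the higher-order variants the abstract describes.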
 In this manuscript, we present a new general family of optimal iterative methods for finding multiple roots of nonlinear equations with known multiplicity using weight functions. An extensive convergence analysis is presented to verify the optimal eighth-order convergence of the new family. Some special cases of the family are also presented, which require only three function evaluations and one derivative evaluation at each iteration to reach optimal eighth-order convergence. A variety of numerical test functions, along with some real-world problems such as a beam design model and Van der Waals' equation of state, are presented to show that the newly developed family competes efficiently with other existing methods. A dynamical analysis of the proposed methods is also presented to validate the theoretical results by using graphical tools, termed basins of attraction.
 This paper considers the numerical solution of Hammerstein equations of the form \[y(t) = f(t) + \int_a^b {k(t,s)g(s,y(s))ds,} \quad t \in [a,b],\] by a collocation method applied not to this equation, but rather to an equivalent equation for $z(t): = g(t,y(t))$. The desired approximation to y is then obtained by use of the (exact) equation \[y(t) = f(t) + \int_a^b {k(t,s)z(s)ds,} \quad t \in [a,b].\] In an earlier paper, questions of existence and optimal convergence of the respective approximations to z and y were examined. In this sequel, collocation approximations to z are sought in certain piecewise-polynomial function spaces, and analogues of known superconvergence results for the iterated collocation solution of (linear) second-kind Fredholm integral equations are stated and proved for the approximation to y.
 We compare the Ostrowski efficiency of some methods for solving systems of nonlinear equations without explicitly using derivatives. The methods considered include the discrete Newton method, Shamanskii’s method, the two-point secant method, and Brown’s methods. We introduce a class of secant methods and a class of methods related to Brown’s methods, but using orthogonal rather than stabilized elementary transformations. The idea of these methods is to avoid finding a new approximation to the Jacobian matrix of the system at each step, and thus increase the efficiency. Local convergence theorems are proved, and the efficiencies of the methods are calculated. Numerical results are given, and some possible extensions are mentioned.
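The discrete Newton method mentioned above replaces the analytic Jacobian by forward differences, at the cost of n extra function evaluations per step. A minimal self-contained sketch for a 2x2 system is given below; the test system and the use of Cramer's rule for the linear solve are illustrative choices, not details from the paper.

```python
def discrete_newton(F, x, h=1e-7, tol=1e-10, max_iter=50):
    """Newton's method for a 2x2 system F(x) = 0 with a
    forward-difference Jacobian (no analytic derivatives).

    The 2x2 linear solve J dx = F uses Cramer's rule to keep the
    sketch dependency-free; larger systems would use an LU solver.
    """
    for _ in range(max_iter):
        f1, f2 = F(x)
        # forward-difference Jacobian, one column per coordinate
        g1, g2 = F([x[0] + h, x[1]])
        a, c = (g1 - f1) / h, (g2 - f2) / h
        g1, g2 = F([x[0], x[1] + h])
        b, d = (g1 - f1) / h, (g2 - f2) / h
        det = a * d - b * c
        dx0 = (f1 * d - b * f2) / det     # Cramer's rule, J = [[a,b],[c,d]]
        dx1 = (a * f2 - c * f1) / det
        x = [x[0] - dx0, x[1] - dx1]
        if abs(f1) + abs(f2) < tol:
            break
    return x

# illustrative system: x^2 + y^2 = 4, x*y = 1
sol = discrete_newton(lambda v: (v[0]**2 + v[1]**2 - 4.0,
                                 v[0]*v[1] - 1.0),
                      [2.0, 0.5])
```

The secant-type and Brown-type methods compared in the paper aim precisely at avoiding the n extra evaluations this forward-difference Jacobian costs at every step, which is where their efficiency gain comes from.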
 We consider Hammerstein equations of the form \[ y(t) = f(t) + \int _a^b {k(t,s)g(s,y(s)) ds,\quad t \in [a,b],} \] and present a new method for solving them numerically. The method is a collocation method applied not to the equation in its original form, but rather to an equivalent equation for $z(t): = g(t,y(t))$. The desired approximation to y is then obtained by use of the (exact) equation \[ y(t) = f(t) + \int _a^b {k(t,s)z(s) ds,\quad t \in [a,b].} \] Advantages of this method, compared with the direct collocation approximation for y, are discussed. The main result in the paper is that, under suitable conditions, the resulting approximation to y converges to the exact solution at a rate at least equal to that of the best approximation to z from the space in which the collocation solution is sought.
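The two-stage structure described in these abstracts (first solve for z := g(·, y(·)), then recover y from the linear integral) can be sketched numerically. The example below is a minimal discrete version on a manufactured problem of my own choosing; the kernel, nonlinearity, quadrature rule (trapezoidal), and use of Picard iteration for the z-equation are all illustrative assumptions, not the collocation scheme of the papers. The problem is built so that y(t) = t is the exact solution on [0, 1].

```python
# Manufactured Hammerstein problem on [0, 1] (illustrative choices):
#   y(t) = f(t) + \int_0^1 k(t,s) g(s, y(s)) ds
# with k(t,s) = t*s/2 and g(s,y) = y^2.  Since \int_0^1 s*s^2 ds = 1/4,
# choosing f(t) = t - t/8 makes y(t) = t the exact solution,
# and then z(t) := g(t, y(t)) = t^2.
def k(t, s): return t * s / 2.0
def g(s, y): return y * y
def f(t): return t - t / 8.0

n = 101                                   # trapezoidal nodes on [0, 1]
h = 1.0 / (n - 1)
nodes = [i * h for i in range(n)]
w = [h] * n
w[0] = w[-1] = h / 2.0                    # trapezoidal weights

# Stage 1: solve the nodal equation for z,
#   z_i = g(t_i, f(t_i) + sum_j w_j k(t_i, s_j) z_j),
# by Picard iteration (a contraction for this small kernel).
z = [0.0] * n
for _ in range(60):
    z = [g(t, f(t) + sum(wj * k(t, sj) * zj
                         for wj, sj, zj in zip(w, nodes, z)))
         for t in nodes]

# Stage 2: recover y at ANY point t from the second (linear) equation,
#   y(t) = f(t) + \int_0^1 k(t,s) z(s) ds, discretized by the same rule.
def y(t):
    return f(t) + sum(wj * k(t, sj) * zj
                      for wj, sj, zj in zip(w, nodes, z))
```

Note how stage 2 evaluates y anywhere, not just at the nodes; it is this final smoothing integration that the papers exploit to obtain superconvergence of the approximation to y over the approximation to z.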