Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.
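The deconvoluting-kernel idea underlying the package can be sketched compactly. The following Python sketch is illustrative only: the function name, the choice of kernel with characteristic function (1 − t²)³, and plain numerical integration in place of the package's FFT speed-up are all assumptions, not decon's actual code. It divides the empirical characteristic function of the contaminated data by the error characteristic function before smoothing:

```python
import numpy as np

def deconv_density(w, sigma, h, x_grid, n_t=512):
    """Deconvoluting kernel density estimate for contaminated data
    w_i = x_i + u_i with normal errors u_i ~ N(0, sigma^2).
    Uses a kernel whose characteristic function is (1 - t^2)^3
    on [-1, 1], a common choice in deconvolution problems."""
    t = np.linspace(-1.0, 1.0, n_t)            # scaled frequency h*s
    phi_K = (1.0 - t**2) ** 3                  # kernel characteristic fn
    s = t / h                                  # raw frequency grid
    ds = s[1] - s[0]
    # empirical characteristic function of the observed data
    ecf = np.exp(1j * np.outer(s, w)).mean(axis=1)
    phi_U = np.exp(-0.5 * (sigma * s) ** 2)    # error characteristic fn
    integrand = ecf * phi_K / phi_U            # divide out the error
    # inverse Fourier transform evaluated on the x grid
    return np.array([
        (np.exp(-1j * s * x) * integrand).sum().real * ds
        for x in x_grid
    ]) / (2 * np.pi)
```

The smoothing parameter h plays the same role as the bandwidth in ordinary kernel density estimation; the division by phi_U is what undoes the contamination, and it is also why bandwidth selection is more delicate here.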
Empirical Standards are natural-language models of a scientific community's expectations for a specific kind of study (e.g. a questionnaire survey). The ACM SIGSOFT Paper and Peer Review Quality Initiative generated empirical standards for research methods commonly used in software engineering. These living documents, which should be continuously revised to reflect evolving consensus around research best practices, will improve research quality and make peer review more effective, reliable, transparent and fair.
Data from all reported cases of 2009 pandemic influenza A (H1N1) were obtained from the China Information System for Disease Control and Prevention. The spatiotemporal distribution patterns of cases were characterized through spatial analysis. The impact of travel-related risk factors on invasion of the disease was analyzed using survival analysis, and climatic factors related to local transmission were identified using multilevel Poisson regression, both at the county level. The results showed that the epidemic spanned a large geographic area, with the most affected areas being in western China. Significant differences in incidence were found among age groups, with incidences peaking in school-age children. Overall, the epidemic spread from southeast to northwest. Proximity to airports and being intersected by national highways or freeways but not railways were variables associated with the presence of the disease in a county. Lower temperature and lower relative humidity were the climatic factors facilitating local transmission after correction for the effects of school summer vacation and public holidays, as well as population density and the density of medical facilities. These findings indicate that interventions focused on domestic travel, population density, and climatic factors could play a role in mitigating the public health impact of future influenza pandemics.
In this paper, we present a family of three-parameter derivative-free iterative methods with and without memory for solving nonlinear equations. The convergence order of the new method without memory is four, requiring three functional evaluations. Based on the new fourth-order method without memory, we present a family of derivative-free methods with memory. Using three self-accelerating parameters, calculated by Newton interpolatory polynomials, the convergence order of the new methods with memory is increased from 4 to 7.0174 and 7.5311 without any additional calculations. Compared with the existing methods with memory, the new methods can obtain higher convergence order by using relatively simple self-accelerating parameters. Numerical comparisons are made with some known methods by using the basins of attraction and through numerical computations to demonstrate the efficiency and the performance of the presented methods.
In this paper, we present some three-point Newton-type iterative methods without memory for solving nonlinear equations by using the undetermined coefficients method. The order of convergence of the new methods without memory is eight, requiring the evaluations of three functions and one first-order derivative per full iteration. Hence, the new methods are optimal according to Kung and Traub's conjecture. Based on the presented methods without memory, we present two families of Newton-type iterative methods with memory. Further acceleration of the convergence speed is obtained by using a self-accelerating parameter. This self-accelerating parameter is calculated by the Hermite interpolating polynomial and is applied to improve the order of convergence of the Newton-type method. The corresponding R-order of convergence is increased from 8 to 9, [Formula: see text] and 10. The increase of convergence order is attained without any additional calculations, so that the two families of methods with memory possess a very high computational efficiency. Numerical examples are demonstrated to confirm the theoretical results.
In this paper, we present a new family of two-step Newton-type iterative methods with memory for solving nonlinear equations. In order to obtain a Newton-type method with memory, we first present an optimal two-parameter fourth-order Newton-type method without memory. Then, based on the two-parameter method without memory, we present a new two-parameter Newton-type method with memory. Using two self-correcting parameters calculated by Hermite interpolatory polynomials, the R-order of convergence of the new Newton-type method with memory is increased from 4 to 5.7016 without any additional calculations. Numerical comparisons are made with some known methods by using the basins of attraction and through numerical computations to demonstrate the efficiency and the performance of the presented methods.
In this manuscript, an efficient eighth-order iterative method is designed to solve nonlinear systems by using the undetermined parameter method. The new method requires one matrix inversion per iteration, which means that the computational cost of our method is low. The theoretical efficiency of the proposed method is analyzed and shown to be superior to that of other methods. Numerical results show that the proposed method can remarkably reduce the computational time. The new method is applied to compute the numerical solution of nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs). The nonlinear ODEs and PDEs are discretized by the finite difference method. The validity of the new method is verified by comparison with analytic solutions.
A new Newton method with memory is proposed by using a variable self-accelerating parameter. Firstly, a modified Newton method without memory with an invariant parameter is constructed for solving nonlinear equations. Substituting the invariant parameter of the Newton method without memory by a variable self-accelerating parameter, we obtain a novel Newton method with memory. The convergence order of the new Newton method with memory is 1 + √2. The acceleration of the convergence rate is attained without any additional function evaluations. The main innovation is that the self-accelerating parameter is constructed in a simple way. Numerical experiments show that the presented method has a faster convergence speed than existing methods.
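The self-accelerating idea can be illustrated with a minimal Python sketch. The particular update rule below, a divided-difference estimate of −f''/(2f') built from the previous iterate, is an assumption chosen for illustration and is not claimed to be the paper's exact construction:

```python
def newton_with_memory(f, fp, x0, tol=1e-12, max_iter=50):
    """Modified Newton step x <- x - f(x)/(f'(x) + lam*f(x)), where the
    parameter lam is re-estimated each iteration from already-computed
    values (the 'memory'), so the acceleration costs no extra evaluations."""
    # one plain Newton step to create the first memory entry
    x_prev, x = x0, x0 - f(x0) / fp(x0)
    for _ in range(max_iter):
        if x == x_prev:
            break
        # self-accelerating parameter: divided-difference estimate of
        # -f''/(2 f') using only previously computed derivative values
        lam = -(fp(x) - fp(x_prev)) / (2.0 * (x - x_prev) * fp(x))
        x_prev, x = x, x - f(x) / (fp(x) + lam * f(x))
        if abs(f(x)) < tol:
            break
    return x
```

With the optimal (unknown) value of lam the modified step cancels the second-order error term; estimating lam from memory approaches that value as the iteration converges, which is how the order rises above 2 for free.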
In this paper, we present a new sixth-order iterative method for solving nonlinear systems and prove a local convergence result. The new method requires solving five linear systems per iteration. An important feature of the new method is that the LU decomposition of the Jacobian matrix is computed only once in each iteration. The computational efficiency index of the new method is compared to that of some known methods. Numerical results are given to show that the convergence behavior of the new method is similar to that of the existing methods. The new method can be applied to small- and medium-sized nonlinear systems.
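The single-LU-per-iteration idea generalizes the classical trick of freezing the Jacobian across several inner steps. A minimal sketch of that generic device (assuming SciPy's lu_factor/lu_solve; this is plain frozen-Jacobian multistep Newton, not the paper's sixth-order method):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def multistep_newton(F, J, x0, steps=3, tol=1e-12, max_iter=30):
    """Multi-step Newton iteration: the Jacobian is factorized once per
    outer iteration (one LU decomposition) and the factors are reused
    for every inner step, so each inner step costs only a triangular solve."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        lu_piv = lu_factor(J(x))          # single LU factorization
        for _ in range(steps):            # inner steps reuse the factors
            x = x - lu_solve(lu_piv, F(x))
        if np.linalg.norm(F(x)) < tol:
            break
    return x
```

Since an LU factorization costs O(n³) while each reuse costs O(n²), amortizing one factorization over several steps is what makes such schemes competitive on larger systems.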
In this work, two multi-step derivative-free iterative methods are presented for solving systems of nonlinear equations. The new methods have high computational efficiency and low computational cost. The order of convergence of the new methods is proved by developing an inverse first-order divided difference operator. The computational efficiency is compared with that of the existing methods. Numerical experiments support the theoretical results. Experimental results show that the new methods remarkably reduce the computing time in the process of high-precision computing.
Some Kurchatov-type accelerating parameters are used to construct some derivative-free iterative methods with memory for solving nonlinear systems. New iterative methods are developed from an initial scheme without memory with order of convergence three. The new methods have convergence orders 2 + √5 ≈ 4.236 and 5, respectively. In numerical experiments, the new methods are applied to solve standard nonlinear systems and nonlinear ordinary differential equations (ODEs). Numerical results support the theoretical results.
In this paper, a general family of n-point Newton-type iterative methods for solving nonlinear equations is constructed by using direct Hermite interpolation. The order of convergence of the new n-point iterative methods without memory is 2^n, requiring the evaluations of n functions and one first-order derivative per full iteration, which implies that this family is optimal according to Kung and Traub's conjecture (1974). Its error equations and asymptotic convergence constants are obtained. The n-point iterative methods with memory are obtained by using a self-accelerating parameter, and they achieve much faster convergence than the corresponding n-point methods without memory. The increase of convergence order is attained without any additional calculations, so that the n-point Newton-type iterative methods with memory possess a very high computational efficiency. Numerical examples are demonstrated to confirm the theoretical results.
Two novel Kurchatov-type first-order divided difference operators were designed and used to construct the variable parameter of three derivative-free iterative methods. The convergence orders of the new derivative-free methods are 3, (5 + √17)/2 ≈ 4.56, and 5. The new derivative-free iterative methods with memory were applied to solve nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs) in numerical experiments. The dynamical behavior of our new methods with memory was studied by using dynamical planes. The dynamical planes showed that our methods have good stability.
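The Kurchatov divided difference is the basic building block here; in the scalar case it evaluates f on the memory-dependent pair (2x_k − x_{k−1}, x_{k−1}). A minimal Python sketch of the classical Kurchatov scheme (second order with memory, derivative-free; not the paper's higher-order variants):

```python
def kurchatov(f, x0, x1, tol=1e-12, max_iter=60):
    """Kurchatov's derivative-free method with memory: the slope is the
    first-order divided difference of f over [2*x_k - x_{k-1}, x_{k-1}],
    which approximates f'(x_k) using the previous iterate."""
    x_prev, x = x0, x1
    for _ in range(max_iter):
        if x == x_prev:
            break
        slope = (f(2.0 * x - x_prev) - f(x_prev)) / (2.0 * (x - x_prev))
        x_prev, x = x, x - f(x) / slope
        if abs(f(x)) < tol:
            break
    return x
```

Unlike the secant slope, the Kurchatov difference is centered at x_k, which is what lets methods built on it retain the order of their Newton-type counterparts without derivatives.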
In this paper, a family of Newton-type iterative methods with memory is obtained for solving nonlinear equations, which uses some special self-accelerating parameters. To this end, we first present two optimal fourth-order iterative methods for solving nonlinear equations. Then we give a novel way to construct the self-accelerating parameter and obtain a family of Newton-type iterative methods with memory. The self-accelerating parameters have the properties of simple structure and easy calculation, and they do not increase the computational cost of the iterative methods. The convergence order of the new iterative methods is increased from 4 to 2 + √7 ≈ 4.64575. Numerical comparisons are made with some known methods by using the basins of attraction and through numerical computations to demonstrate the efficiency and the performance of the new methods. Experimental results show that, compared with the existing methods, the new iterative methods with memory have the advantage of costing less computing time.
We analyze the dynamical behavior of an eighth-order Sharma iterative scheme, which contains a single parameter, with respect to an arbitrary quadratic polynomial using complex analysis. The eighth-order Sharma iterative scheme is analytically conjugated to a rational operator on the Riemann sphere. We discuss the strange fixed points of the rational operator and present its stable-region graph. Additionally, we briefly investigate the superattracting points and the critical points, which have an impact on the Sharma iterative scheme under discussion. Finally, we present the dynamical planes for different parameter values using complex dynamics tools, which helps us select more effective members of the Sharma iterative scheme. Numerical experiments are conducted to verify the theoretical results.
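A dynamical plane is drawn by iterating the rational operator over a grid of initial points and coloring each point by the attractor its orbit reaches. The sketch below uses the classical Newton map for z³ − 1 purely as a stand-in to show the mechanics (Sharma's eighth-order operator itself is not reproduced here):

```python
import numpy as np

def newton_basins(n=200, max_iter=40, tol=1e-6):
    """Dynamical plane of the Newton map z -> z - p(z)/p'(z) for
    p(z) = z^3 - 1: each grid point is labeled with the index of the
    root its orbit converges to, or -1 if it has not converged."""
    roots = np.array([1.0, np.exp(2j * np.pi / 3), np.exp(-2j * np.pi / 3)])
    axis = np.linspace(-2.0, 2.0, n)
    z = axis[None, :] + 1j * axis[:, None]        # grid of initial points
    with np.errstate(divide="ignore", invalid="ignore"):
        for _ in range(max_iter):
            z = z - (z**3 - 1.0) / (3.0 * z**2)   # the Newton map
    labels = np.full(z.shape, -1)
    for k, r in enumerate(roots):
        labels[np.abs(z - r) < tol] = k
    return labels
```

Plotting `labels` as an image gives the familiar fractal basin boundaries; for a parametric family, repeating this for each parameter value is how the stable and unstable members are identified visually.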
In this paper, the semilocal convergence of an eighth-order iterative method is proved in Banach space by using recursive relations, and the proof process does not need high-order derivatives. By selecting an appropriate initial point and applying the Lipschitz condition to the first-order Fréchet derivative in the whole region, the existence and uniqueness domains are obtained. In addition, the theoretical results on semilocal convergence are applied to two nonlinear systems, and satisfactory results are obtained.
A new eighth-order Chebyshev-Halley-type iteration is proposed for solving nonlinear equations and the matrix sign function. Basins of attraction show that several special cases of the new method are globally convergent. It is analytically proven that the new method is asymptotically stable and that it has convergence order eight. The effectiveness of the theoretical results is illustrated by numerical experiments, in which the new method is applied to a random matrix, the Wilson matrix, and a continuous-time algebraic Riccati equation. Numerical results show that, compared with some well-known methods, the new method achieves the required accuracy in the minimum computing time and the minimum number of iterations.
In this paper, by applying Petković's iterative method to the Möbius conjugate mapping of a quadratic polynomial function, we obtain an optimal eighth-order rational operator with a single parameter r and study the stability of this method by using complex dynamics tools on the basis of fractal theory. By analyzing the stability of the fixed points and drawing the parameter space related to the critical points, the parameter families that make the behavior of the corresponding iterative method stable or unstable are obtained. Lastly, the conclusions are verified by showing the corresponding dynamical planes.
In this paper, the stability of a class of Liu-Wang's optimal eighth-order single-parameter iterative methods for finding simple roots of nonlinear equations was studied by applying them to arbitrary quadratic polynomials. Using the Riemann sphere and the scaling theorem, the complex dynamical behavior of the iterative methods was analyzed by fractals. We discuss the stability of all fixed points and the parameter spaces starting from the critical points with the Mathematica software. The dynamical planes of members with good and bad dynamical behavior are given, and the optimal parameter member with stable behavior was obtained. Finally, a numerical experiment and a practical application were carried out to confirm the conclusions.
In this paper, we obtain two iterative methods with memory by using inverse interpolation. Firstly, using three function evaluations, we present a two-step iterative method with memory, which has the convergence order 4.5616. Secondly, a three-step iterative method of order 10.1311 is obtained, which requires four function evaluations per iteration. Herzberger's matrix method is used to prove the convergence order of the new methods. Finally, numerical comparisons are made with some known methods by using the basins of attraction and through numerical computations to demonstrate the efficiency and the performance of the presented methods.
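Inverse interpolation builds the next iterate by interpolating x as a function of y = f(x) and evaluating the interpolant at y = 0. The following Python sketch shows the bare idea with a quadratic inverse interpolant over three remembered points (this is the generic device, not the paper's specific 4.5616- and 10.1311-order constructions):

```python
def inverse_quadratic(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """Iteration with memory via inverse interpolation: fit a quadratic
    x = q(y) through the last three (y_i, x_i) pairs (Lagrange form)
    and take x_new = q(0), i.e. the interpolant's prediction of the root."""
    xs = [x0, x1, x2]
    ys = [f(x) for x in xs]
    for _ in range(max_iter):
        y0, y1, y2 = ys
        if y0 == y1 or y0 == y2 or y1 == y2:
            break
        a0, a1, a2 = xs
        # Lagrange form of the inverse interpolant evaluated at y = 0
        x_new = (a0 * y1 * y2 / ((y0 - y1) * (y0 - y2))
                 + a1 * y0 * y2 / ((y1 - y0) * (y1 - y2))
                 + a2 * y0 * y1 / ((y2 - y0) * (y2 - y1)))
        xs = [xs[1], xs[2], x_new]
        ys = [ys[1], ys[2], f(x_new)]
        if abs(ys[-1]) < tol:
            break
    return xs[-1]
```

Because the interpolation happens in the inverse variable, no derivative and no polynomial root-finding are needed at each step; only one new function evaluation per iteration feeds the memory.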
There are numerous applications of finding zeros of derivatives in function optimization. In this paper, a two-step fourth-order method is presented for finding a zero of the derivative. In the study of iterative methods, determining the ball of convergence is one of the important issues. This paper discusses the radii of the convergence ball, the uniqueness of the solution, and measurable error distances. In particular, in contrast to Wang's method, whose analysis uses hypotheses up to the fourth derivative, the local convergence of the new method is analyzed only under hypotheses up to the second derivative, and the convergence order of the new method is increased to four. Furthermore, different radii of the convergence ball are determined according to different, weaker hypotheses. Finally, the convergence criteria are verified by three numerical examples, and the new method is compared with Wang's method and a method of the same order in numerical experiments. The experimental results show that the convergence order of the new method is four and that the new method achieves higher accuracy at the same cost, so the new method is preferable.
Integrated nested Laplace approximations (INLA) are a recently proposed approximate Bayesian approach to fitting structured additive regression models with a latent Gaussian field. The INLA method, as an alternative to Markov chain Monte Carlo techniques, provides accurate approximations to posterior marginals and avoids time-consuming sampling. We show here that two classical nonparametric smoothing problems, nonparametric regression and density estimation, can be handled using INLA. Simulated examples and R functions are given to illustrate the use of the methods. Some potential applications of INLA are also discussed in the paper.
In this paper, a self-accelerating type method is proposed for solving nonlinear equations, which is a modified Ren's method. A simple way is applied to construct a variable self-accelerating parameter for the new method, which does not increase any computational costs. The highest convergence order of the new method is 2 + √6 ≈ 4.4495. Numerical experiments are made to show the performance of the new method, which supports the theoretical results.
A novel Newton-type n-point iterative method with memory is proposed for solving nonlinear equations, which is constructed by Hermite interpolation. The proposed iterative method with memory reaches the order (2^n + 2^(n-1) - 1 + √(2^(2n+1) + 2^(2n-2) + 2^(n+1)))/2 by using n variable parameters. The computational efficiency of the proposed method is higher than that of the existing Newton-type methods with and without memory. To observe the stability of the proposed method, some complex functions are considered under basins of attraction. Basins of attraction show that the proposed method has better stability and requires fewer iterations than various well-known methods. The numerical results support the theoretical results.
In this paper, a family of Ostrowski-type iterative schemes with two parameters is analyzed. We present the dynamical view of the proposed methods and study various conjugacy properties. The stability of the strange fixed points for special parameter values is studied. Parameter spaces related to the critical points and dynamical planes are used to visualize the dynamical properties. Eventually, we find the most stable member of the biparametric family of sixth-order Ostrowski-type methods. Some test equations are examined to support the theoretical results.
In this paper, a Newton-type iterative scheme for solving nonlinear systems is designed. In proving the convergence order, we use higher derivatives of the function and show that the convergence order of this iterative method is six. To avoid the influence of the existence of higher derivatives on the proof of convergence, we mainly discuss the convergence of this iterative method under weak conditions. In Banach space, the local convergence of the iterative scheme is established by using the ω-continuity condition on the first-order Fréchet derivative, which extends the range of application of the iterative method. In addition, we give the radius of the convergence ball and the uniqueness of the solution. Finally, the superiority of the new iterative method is illustrated by drawing basins of attraction and by comparing the average number of iterations with other iterative methods of the same order. We also utilize this iterative method to solve both nonlinear systems and nonlinear matrix sign functions. The applicability of this study is demonstrated by solving practical chemical problems.
In this paper, we present a new fifth-order three-step iterative method for solving nonlinear systems and prove the local convergence order of the new method. The new method requires the evaluations of two functions, two first derivatives, and one matrix inversion per iteration. The computational efficiency index for nonlinear systems is used to compare the efficiency of different methods, and it is proved that the new method is more efficient. Numerical experiments are performed, which support the theoretical results. Numerical experiments show that the new method remarkably saves computational time when solving large nonlinear systems.
Two variants of Newton's iteration method are given. They have at least sixth-order convergence near a simple root. Finally, numerical tests show that the new variant methods have some advantages over other known Newton-type iteration methods.
In this paper, a modified super-Halley method free from second derivatives is presented. The new method requires three evaluations of the function and one evaluation of its first derivative per iteration, and its efficiency index is 6^(1/4) ≈ 1.565. Several numerical examples are given to illustrate that the modified method mostly performs better than, or equal to, the optimal eighth-order methods (see (Appl. Math. Comput., 2010, 217(6): 2448-2455) and (J. Comput. Appl. Math., 2010, 233(9): 2278-2284)), for example when the initial guesses are not so close to the sought zeros. Furthermore, the convergence radius of the new method is greater than the convergence radii of the optimal eighth-order methods, and its extended operational index compares favorably with those methods. Keywords: super-Halley's method; nonlinear equation; extended computational index; efficiency index; convergence radius. MR(2010) Subject Classification: 65H04, 65H05, 41A25; CLC number: O241.7; Document code: A; Article ID: 1000-0917(2015)01-0151-09.
In recent years, the initial boundary value problem for the Kirchhoff-type wave system with small initial data has attracted many scholars' attention. However, the problem with large initial data is also a topic of theoretical significance. In this paper, we devote ourselves to the well-posedness of the Kirchhoff-type wave system under large initial boundary conditions. Combining the potential well method with an improved convexity method, we establish a criterion for the well-posedness of the system with nonlinear source, dissipative, and viscoelastic terms. Based on this criterion, the energy of the system is divided into different levels. For the subcritical case, we prove that global solutions exist when the initial value belongs to the stable set, while finite-time blow-up occurs when the initial value belongs to the unstable set. For the supercritical case, we show that the corresponding solution blows up in finite time if the initial value satisfies certain given conditions.
In this paper, a new conformable fractional Traub's method is proposed for solving nonlinear systems. It is proved that the order of convergence is 3. Experimental results show that the new method requires fewer iterations in the calculation process than the conformable fractional Newton's method, and the convergence planes show good stability.
In this paper, we focus on a class of optimal eighth-order iterative methods, initially proposed by Sharma et al., whose second step can use any fourth-order iterative method. By selecting the first two steps as an optimal fourth-order iterative method, we derive an optimal eighth-order one-parameter iterative method, which can solve nonlinear systems. Employing fractal theory, we investigate the dynamical behavior of the rational operators associated with the iterative method through the scaling theorem and the Möbius transformation. Subsequently, we conduct a comprehensive study of the chaotic dynamics and stability of the iterative method. Our analysis examines the strange fixed points and their stability, the critical points, and the parameter spaces generated on the complex plane with the critical points as initial points, and we use these findings to select parameter values directly from the figures. Furthermore, we generate dynamical planes for the selected parameter values and ultimately determine the range of unstable parameter values, thus obtaining the range of stable parameter values. The bifurcation diagram shows the influence of parameter selection on the iteration sequence. In addition, by drawing basins of attraction, it can be seen that this iterative method is superior to iterative methods of the same order in terms of convergence speed and average number of iterations. Finally, the matrix sign function, a nonlinear equation, and a nonlinear system are solved by this iterative method, which shows its applicability.
Ostrowski's iterative method is a classical method for solving systems of nonlinear equations, but it is not sufficiently stable. To obtain a more stable Ostrowski-type method, this paper presents a new family of fourth-order single-parameter Ostrowski-type methods for solving nonlinear systems. The new family generalizes Ostrowski's method, which is recovered as a special case. It is proved that the order of convergence of the new family is always four for any real value of the parameter. Finally, the dynamical behavior of the family is briefly analyzed using real dynamical tools. The new methods can be applied to a wide range of nonlinear equations; in numerical experiments they are used to solve a Hammerstein equation, a boundary value problem, and a nonlinear system. The numerical results support the theoretical results.
Purpose The purpose of this paper is to study the local convergence and applications of a new seventh-order iterative method. Design/methodology/approach The order of convergence of the method is proved by using Taylor expansions. In addition, local convergence is studied under Lipschitz conditions on the first derivative. Findings Using Taylor expansions, we show that the convergence order of the method is seven. By applying the method to practical physics problems and nonlinear systems, the specific domains of convergence and the solutions of nonlinear equations can be obtained. The uniqueness of the solution and error estimates are also analyzed. Originality/value In the proof of the convergence order, Taylor expansions require third or higher derivatives, which restricts the applicability of the method. To extend its applicability, local convergence is studied under Lipschitz conditions on the first derivative. Finally, to demonstrate its applicability, the method is applied to some physical problems and nonlinear systems.
In this paper, we present a novel approach to improving software quality and efficiency through a Large Language Model (LLM)-based model designed to review code and identify potential issues. Our proposed LLM-based AI agent model is trained on large code repositories. This training includes code reviews, bug reports, and documentation of best practices. It aims to detect code smells, identify potential bugs, provide suggestions for improvement, and optimize the code. Unlike traditional static code analysis tools, our LLM-based AI agent has the ability to predict future potential risks in the code. This supports a dual goal of improving code quality and enhancing developer education by encouraging a deeper understanding of best practices and efficient coding techniques. Furthermore, we explore the model's effectiveness in suggesting improvements that significantly reduce post-release bugs and enhance code review processes, as evidenced by an analysis of developer sentiment toward LLM feedback. For future work, we aim to assess the accuracy and efficiency of LLM-generated documentation updates in comparison to manual methods. This will involve an empirical study focusing on manually conducted code reviews to identify code smells and bugs, alongside an evaluation of best practice documentation, augmented by insights from developer discussions and code reviews. Our goal is to not only refine the accuracy of our LLM-based tool but also to underscore its potential in streamlining the software development lifecycle through proactive code improvement and education.
Abstract In this paper, the local convergence of a Chebyshev-type method free from second derivatives is studied in Banach spaces. Previous studies prove convergence under conditions based on the third or higher derivatives, whereas we study convergence under Lipschitz continuity conditions on the first derivative, which are weaker. The uniqueness of the solution and the radii of the convergence balls are also analyzed. Moreover, Taylor expansion, which is often used in convergence analysis, is avoided, so the applicability of the method is extended. Two numerical examples are used to validate the convergence criteria.
In this paper, a family of fifth-order Chebyshev–Halley-type iterative methods with one parameter is presented, and its convergence order is analyzed. By deriving the rational operators associated with the iterative methods, the stability of the family is studied using fractal theory. In addition, the strange fixed points and critical points are obtained. Using the parameter space associated with the critical points, parameters with good stability are identified, and the corresponding dynamical planes are plotted, visualizing the stability characteristics. Finally, the fractal diagrams of several iterative methods on different polynomials are compared. Both the numerical results and the fractal graphs show that the new iterative method has good convergence and stability when α = 1/2.
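The fractal diagrams compared above are basins of attraction. As a generic illustration of how such diagrams are computed (here for plain Newton iteration on z³ − 1, not the paper's fifth-order family), each grid point of the complex plane is iterated and colored by the root it converges to:

```python
import numpy as np

def newton_basins(n=200, max_iter=30, tol=1e-6):
    """Label each starting point of an n x n grid over [-2,2]^2 by the
    cube root of unity that z <- z - (z^3 - 1)/(3 z^2) converges to
    (-1 marks points not converged within max_iter iterations)."""
    xs = np.linspace(-2, 2, n)
    Z = xs[None, :] + 1j * xs[:, None]
    roots = np.array([1, np.exp(2j*np.pi/3), np.exp(-2j*np.pi/3)])
    with np.errstate(divide='ignore', invalid='ignore'):
        for _ in range(max_iter):
            Z = Z - (Z**3 - 1) / (3 * Z**2)
    dist = np.abs(Z[..., None] - roots[None, None, :])
    basin = np.argmin(dist, axis=-1)     # index of the nearest root
    basin[np.min(dist, axis=-1) > tol] = -1
    return basin
```

Plotting `basin` with a three-color map reproduces the familiar fractal boundary between the three basins; the same recipe applies to any one-point iteration function.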
In this paper, a Newton-type iterative scheme for solving nonlinear systems is designed. In the proof of the convergence order, higher derivatives of the function are used to show that the convergence order of the iterative method is six. To avoid the dependence of the proof on the existence of higher derivatives, we mainly discuss the convergence of the iterative method under weak conditions. In Banach space, the local convergence of the iterative scheme is established using the ω-continuity condition on the first-order Fréchet derivative, which extends the range of applicability of the method. In addition, the radius of the convergence ball and the uniqueness of the solution are given. Finally, the superiority of the new iterative method is illustrated by drawing attraction basins and comparing the average number of iterations with other same-order iterative methods. The iterative method is also used to solve nonlinear systems and the matrix sign function, and its applicability is demonstrated by solving practical chemical problems.
In this paper, the semi-local convergence of Cordero's sixth-order iterative method in Banach space is proved by means of recurrence relations. In the proof, an auxiliary sequence and three increasing scalar functions are derived using Lipschitz conditions on the first-order derivative. Using the properties of the auxiliary sequence and the scalar functions, it is proved that the iterative sequence generated by the method is a Cauchy sequence; the convergence radius is then obtained and uniqueness is proved. In contrast to Cordero's original convergence proof, this paper does not require $\mathcal{G}(s)$ to be continuously differentiable to higher order: only the first-order Fréchet derivative is used to prove semi-local convergence. Finally, numerical results show that the recurrence relations are reasonable.
Finding zeros of derivatives has numerous applications in function optimization. In this paper, a two-step fourth-order method is presented for finding a zero of a derivative. Determining the ball of convergence is one of the important issues in the study of iterative methods; this paper discusses the radii of the convergence ball, the uniqueness of the solution, and measurable error distances. In particular, in contrast to Wang's method, whose analysis uses hypotheses up to the fourth derivative, the local convergence of the new method is analyzed under hypotheses only up to the second derivative, while the convergence order is increased to four. Furthermore, different radii of the convergence ball are determined according to different weaker hypotheses. Finally, the convergence criteria are verified by three numerical examples, and the new method is compared with Wang's method and a method of the same order in numerical experiments. The experimental results show that the convergence order of the new method is four and that it achieves higher accuracy at the same cost.
On the basis of Wang's method, a new fourth-order method for finding a zero of a derivative is presented. Under the hypotheses that the third- and fourth-order derivatives of the nonlinear function are bounded, the local convergence of the new method is studied. The error estimate, the order of convergence, and the uniqueness of the solution are also discussed. In particular, Herzberger's matrix method is used to show that the convergence order of the new method is four. Comparing the new method with Wang's method and a method of the same order, numerical illustrations show that the new method has a higher order of convergence and higher accuracy.
We analyze the dynamical behavior of an eighth-order Sharma iterative scheme, which contains a single parameter, on an arbitrary quadratic polynomial using complex analysis. The scheme is analytically conjugated to a rational operator on the Riemann sphere. We discuss the strange fixed points of the rational operator and present its stability region graph. We also briefly investigate the superattracting point and the critical point, which affect the scheme under discussion. Finally, we present the dynamical planes for different parameter values using complex dynamics tools, which helps us select more effective members of the family. Numerical experiments are conducted to verify the theoretical results.
In most cases, solving a nonlinear problem reduces to solving nonlinear equations. To find approximate solutions more accurately and efficiently, we usually study the stability of an iterative method. Compared with classical second- and third-order iterative methods, this paper mainly studies the stability of an optimal fourth-order biparametric Jarratt-type method. Using fractal theory, we establish the influence of parameter selection on the stability of the iterative method and obtain stable parameters, thus ensuring the reliability and effectiveness of the Jarratt-type method for solving nonlinear problems.
In this paper, the stability of a class of Liu–Wang's optimal eighth-order single-parameter iterative methods for finding simple roots of nonlinear equations is studied by applying them to arbitrary quadratic polynomials. Using the Riemann sphere and the scaling theorem, the complex dynamical behavior of the iterative method is analyzed through fractals. We discuss the stability of all fixed points and the parameter spaces generated from the critical points using the Mathematica software. The dynamical planes of members with good and bad dynamical behavior are given, and an optimal parameter value with stable behavior is obtained. Finally, a numerical experiment and a practical application are carried out to confirm the conclusions.
A new eighth-order Chebyshev–Halley-type iteration is proposed for solving nonlinear equations and the matrix sign function. Basins of attraction show that several special cases of the new method are globally convergent. It is analytically proven that the new method is asymptotically stable and has convergence order eight. The effectiveness of the theoretical results is illustrated by numerical experiments, in which the new method is applied to a random matrix, the Wilson matrix, and a continuous-time algebraic Riccati equation. Numerical results show that, compared with some well-known methods, the new method achieves the required accuracy in the minimum computing time and the minimum number of iterations.
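The matrix sign function mentioned here is classically computed by the quadratically convergent Newton sign iteration X ← (X + X⁻¹)/2, which eighth-order methods like the one above are designed to outperform; a minimal sketch of this classical baseline:

```python
import numpy as np

def matrix_sign_newton(A, tol=1e-12, max_iter=100):
    """Classical Newton iteration for sign(A):
        X_{k+1} = (X_k + X_k^{-1}) / 2,  X_0 = A.
    Converges for matrices with no purely imaginary eigenvalues."""
    X = np.asarray(A, dtype=float).copy()
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if (np.linalg.norm(X_new - X, ord='fro')
                < tol * np.linalg.norm(X_new, ord='fro')):
            return X_new
        X = X_new
    return X

A = np.diag([3.0, -1.0, 5.0])
S = matrix_sign_newton(A)   # sign of each eigenvalue: diag(+1, -1, +1)
```

The limit S satisfies S² = I, which is the defining involutory property of the matrix sign.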
In this paper, the semilocal convergence of an eighth-order iterative method is proved in Banach space by using recurrence relations, and the proof does not require high-order derivatives. By selecting an appropriate initial point and applying the Lipschitz condition to the first-order Fréchet derivative on the whole domain, the existence and uniqueness domains are obtained. In addition, the theoretical semilocal convergence results are applied to two nonlinear systems, and satisfactory results are obtained.
In this paper, by applying Petković's iterative method to the Möbius conjugate mapping of a quadratic polynomial, we obtain an optimal eighth-order rational operator with a single parameter r and study the stability of the method using complex dynamics tools based on fractal theory. By analyzing the stability of the fixed points and drawing the parameter space associated with the critical points, we identify the parameter families for which the corresponding iterative method behaves stably or unstably. Finally, the conclusions are verified by the corresponding dynamical planes.
A novel Newton-type n-point iterative method with memory, constructed by Hermite interpolation, is proposed for solving nonlinear equations. The proposed method reaches the order (2^n + 2^(n−1) − 1 + √(2^(2n+1) + 2^(2n−2) + 2^(n+1)))/2 by using n variable parameters. Its computational efficiency is higher than that of existing Newton-type methods with and without memory. To observe the stability of the proposed method, some complex functions are considered through their basins of attraction, which show that the proposed method has better stability and requires fewer iterations than various well-known methods. The numerical results support the theoretical results.
In this paper, a biparametric family of Ostrowski-type iterative schemes is analyzed. We present the dynamical view of the proposed method and study various conjugacy properties. The stability of the strange fixed points for special parameter values is studied. The parameter spaces associated with the critical points and the dynamical planes are used to visualize the dynamical properties. Eventually, we find the most stable member of the biparametric family of sixth-order Ostrowski-type methods. Some test equations are examined to support the theoretical results.
Two novel Kurchatov-type first-order divided difference operators are designed and used to construct the variable parameters of three derivative-free iterative methods. The convergence orders of the new derivative-free methods are 3, (5 + √17)/2 ≈ 4.56, and 5. In numerical experiments, the new derivative-free iterative methods with memory are applied to solve nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs). The dynamical behavior of the new methods with memory is studied using dynamical planes, which show that the methods have good stability.
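The building block here is the Kurchatov divided difference, which approximates the derivative from the current and previous iterates without any derivative evaluations. The classical Kurchatov method built on it (not the paper's higher-order variants) looks like this:

```python
def kurchatov(f, x0, x1, tol=1e-12, max_iter=100):
    """Kurchatov's derivative-free method with memory: the Kurchatov
    divided difference
        K = (f(2*x1 - x0) - f(x0)) / (2*(x1 - x0))
    replaces f'(x1) in a Newton-type step
        x_{k+1} = x_k - f(x_k)/K."""
    for k in range(max_iter):
        K = (f(2*x1 - x0) - f(x0)) / (2*(x1 - x0))
        x0, x1 = x1, x1 - f(x1) / K
        if abs(x1 - x0) < tol:
            return x1, k + 1
    return x1, max_iter

# Root of f(x) = x^2 - 2 from the pair of starting points 1.0, 1.5
root, iters = kurchatov(lambda x: x**2 - 2, 1.0, 1.5)
```

Because K is a symmetric difference centered at x1, the operator is second-order accurate as a derivative approximation, which is what makes Kurchatov-type parameters attractive for accelerating methods with memory.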
Some Kurchatov-type accelerating parameters are used to construct derivative-free iterative methods with memory for solving nonlinear systems. The new iterative methods are developed from an initial scheme without memory with convergence order three, and they have convergence orders 2 + √5 ≈ 4.236 and 5, respectively. In numerical experiments, the new methods are applied to solve standard nonlinear systems and nonlinear ordinary differential equations (ODEs). The numerical results support the theoretical results.
In this manuscript, an efficient eighth-order iterative method for solving nonlinear systems is designed using the method of undetermined parameters. The new method requires one matrix inversion per iteration, so its computational cost is low. The theoretical efficiency of the proposed method is analyzed and shown to be superior to that of other methods. Numerical results show that the proposed method remarkably reduces the computational time. The new method is applied to the numerical solution of nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs), which are discretized by the finite difference method. The validity of the new method is verified by comparison with analytic solutions.
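Discretizing an ODE by finite differences turns it into a nonlinear system of exactly the kind such solvers target. As an illustration (using plain Newton as the inner solver, not the paper's eighth-order method), the textbook boundary value problem u'' = 1.5 u², u(0) = 4, u(1) = 1, with exact solution u = 4/(1+x)², becomes a tridiagonal nonlinear system:

```python
import numpy as np

def solve_bvp_newton(n=50, tol=1e-10):
    """Central-difference discretization of u'' = 1.5*u^2,
    u(0)=4, u(1)=1, solved by Newton's method on the resulting
    nonlinear system (exact solution: u = 4/(1+x)^2)."""
    h = 1.0 / n
    x = np.linspace(0, 1, n + 1)
    u = 4 - 3 * x                 # linear initial guess matching the BCs
    for _ in range(50):
        F = np.zeros(n - 1)
        J = np.zeros((n - 1, n - 1))
        for i in range(1, n):
            F[i-1] = (u[i-1] - 2*u[i] + u[i+1]) / h**2 - 1.5 * u[i]**2
            J[i-1, i-1] = -2 / h**2 - 3 * u[i]   # d F_i / d u_i
            if i > 1:
                J[i-1, i-2] = 1 / h**2           # d F_i / d u_{i-1}
            if i < n - 1:
                J[i-1, i] = 1 / h**2             # d F_i / d u_{i+1}
        du = np.linalg.solve(J, -F)
        u[1:n] += du
        if np.linalg.norm(du) < tol:
            break
    return x, u
```

The O(h²) discretization error dominates once Newton has converged, so refining the grid, not the nonlinear solver, controls the final accuracy.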
In this paper, we obtain two iterative methods with memory by using inverse interpolation. First, using three function evaluations, we present a two-step iterative method with memory of convergence order 4.5616. Second, a three-step iterative method of order 10.1311 is obtained, which requires four function evaluations per iteration. Herzberger's matrix method is used to prove the convergence order of the new methods. Finally, numerical comparisons are made with some known methods, using basins of attraction and numerical computations, to demonstrate the efficiency and performance of the presented methods.
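Inverse interpolation fits x as a polynomial in y = f(x) and evaluates it at y = 0, so no derivatives are needed. A minimal sketch of one inverse quadratic interpolation step (the same ingredient used inside Brent's method; the paper's specific multi-step constructions are not reproduced):

```python
def inverse_quadratic_step(f, x0, x1, x2):
    """One inverse quadratic interpolation step: fit x as a quadratic
    in y = f(x) through three points (Lagrange form) and evaluate
    at y = 0 to get a new root estimate."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    return (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
          + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
          + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))

# Three approximations to the root of x^2 - 2 produce a much better one
est = inverse_quadratic_step(lambda x: x**2 - 2, 1.0, 1.5, 1.4)
```

Reusing previously computed points this way is exactly what gives methods with memory their extra order at no extra function evaluations.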
In this paper, a self-accelerating type method, a modification of Ren's method, is proposed for solving nonlinear equations. A simple technique is applied to construct a variable self-accelerating parameter for the new method, which does not increase the computational cost. The highest convergence order of the new method is 2 + √6 ≈ 4.4495. Numerical experiments are presented to show the performance of the new method, and they support the theoretical results.
A new Newton method with memory is proposed by using a variable self-accelerating parameter. First, a modified Newton method without memory with an invariant parameter is constructed for solving nonlinear equations. Replacing the invariant parameter by a variable self-accelerating parameter, we obtain a novel Newton method with memory whose convergence order is 1 + √2. The acceleration of the convergence rate is attained without any additional function evaluations. The main innovation is that the self-accelerating parameter is constructed in a simple way. Numerical experiments show that the presented method converges faster than existing methods.
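The paper's specific parameter update is not reproduced here, but the self-accelerating idea can be sketched with the classical Steffensen-type variant: the derivative is replaced by a divided difference controlled by a parameter γ, and γ is updated from the previous step, which speeds up convergence at zero extra function evaluations.

```python
def steffensen_with_memory(f, x0, gamma=0.1, tol=1e-12, max_iter=100):
    """Steffensen-type method with a self-accelerating parameter:
        g     = (f(x + gamma*f(x)) - f(x)) / (gamma*f(x))  (approx. f'(x))
        x_new = x - f(x)/g
        gamma = -1/g        (memory: reused in the next iteration)"""
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x, k
        g = (f(x + gamma * fx) - fx) / (gamma * fx)
        x_new = x - fx / g
        gamma = -1.0 / g          # self-accelerating update
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Root of f(x) = x^2 - 2 from x0 = 1.5
root, iters = steffensen_with_memory(lambda x: x**2 - 2, 1.5)
```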
In this paper, we present a new fifth-order three-step iterative method for solving nonlinear systems and prove its local convergence order. The new method requires two function evaluations, two first-derivative evaluations, and one matrix inversion per iteration. The computational efficiency index for nonlinear systems is used to compare the efficiency of different methods, and the new method is proved to be more efficient. Numerical experiments support the theoretical results and show that the new method remarkably reduces the computational time for solving large nonlinear systems.
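The computational efficiency index used for such comparisons is Traub's classical p^(1/c): convergence order p per unit of computational cost c (for nonlinear systems, c counts function/Jacobian evaluations and linear algebra rather than scalar evaluations). A trivial scalar illustration:

```python
def efficiency_index(order, cost):
    """Traub's computational efficiency index p**(1/c)."""
    return order ** (1.0 / cost)

# Scalar comparison: Newton (order 2, 2 evaluations per step)
# vs Ostrowski (order 4, 3 evaluations per step)
newton_ei = efficiency_index(2, 2)     # = 2**0.5 ~ 1.414
ostrowski_ei = efficiency_index(4, 3)  # = 4**(1/3) ~ 1.587
```

A higher index means more accuracy gained per unit of work, which is why optimal fourth-order methods beat two Newton steps of the same total cost.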
In this paper, a family of Newton-type iterative methods with memory, using some special self-accelerating parameters, is obtained for solving nonlinear equations. To this end, we first present two optimal fourth-order iterative methods with memory for solving nonlinear equations. We then give a novel way to construct the self-accelerating parameter and obtain a family of Newton-type iterative methods with memory. The self-accelerating parameters have a simple structure and are easy to calculate, so they do not increase the computational cost of the iterative methods. The convergence order of the new iterative methods is increased from 4 to 2 + √7 ≈ 4.64575. Numerical comparisons with some known methods, using basins of attraction and numerical computations, demonstrate the efficiency and performance of the new methods. Experimental results show that, compared with existing methods, the new iterative methods with memory cost less computing time.
In this paper, we present a new sixth-order iterative method for solving nonlinear systems and prove a local convergence result. The new method requires solving five linear systems per iteration. An important feature of the new method is that the LU decomposition of the Jacobian matrix is computed only once per iteration. The computational efficiency index of the new method is compared with that of some known methods. Numerical results show that the convergence behavior of the new method is similar to that of existing methods. The new method can be applied to small- and medium-sized nonlinear systems.
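The cost-saving idea of reusing one Jacobian factorization for several linear solves can be sketched with a generic frozen-Jacobian multi-step Newton scheme (an illustration of the technique, not the authors' exact sixth-order method):

```python
import numpy as np

def multistep_newton_frozen(F, J, x0, steps=5, iters=20, tol=1e-12):
    """Multi-step Newton with a frozen Jacobian: J(x) is
    factored/inverted once per outer iteration and reused for every
    inner step. (Production code would reuse an LU factorization,
    e.g. scipy.linalg.lu_factor / lu_solve, rather than forming the
    explicit inverse as done here for brevity.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Jinv = np.linalg.inv(J(x))   # one "factorization" per iteration
        y = x.copy()
        for _ in range(steps):       # several cheap solves reuse it
            y = y - Jinv @ F(y)
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

# Decoupled test system: x^2 = 2, y^2 = 3
F = lambda v: np.array([v[0]**2 - 2, v[1]**2 - 3])
J = lambda v: np.array([[2*v[0], 0.0], [0.0, 2*v[1]]])
sol = multistep_newton_frozen(F, J, np.array([1.5, 1.8]))
```

Each extra inner step with the frozen Jacobian raises the local order by one while costing only a back-substitution, which is why such schemes have a high efficiency index.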
In recent years, the small-initial-data boundary value problem for Kirchhoff-type wave systems has attracted much attention, but the large-initial-data problem is also of theoretical significance. In this paper, we devote ourselves to the well-posedness of a Kirchhoff-type wave system under large initial boundary conditions. Combining the potential well method with an improved convexity method, we establish a well-posedness criterion for the system with nonlinear source, dissipative, and viscoelastic terms. Based on this criterion, the energy of the system is divided into different levels. For the subcritical case, we prove that global solutions exist when the initial data belong to the stable set, while finite-time blow-up occurs when the initial data belong to the unstable set. For the supercritical case, we show that the corresponding solution blows up in finite time if the initial data satisfy certain conditions.
Multipoint iterative methods with memory are among the most efficient iterative methods for solving nonlinear equations, since they use already-computed information to considerably increase the convergence rate without additional computational cost. The purpose of this study is to provide an overview of multipoint iterative methods with memory by addressing recent patents and scholarly articles on the construction and application of such methods. Numerical experiments are used to demonstrate the efficiency and performance of the multipoint iterative methods. The results show that multipoint iterative methods are particularly suitable for high-precision computing. Keywords: Computational efficiency, convergence order, multipoint iterative method, Newton-type method, nonlinear equations, Steffensen-type method.
In this work, two multi-step derivative-free iterative methods are presented for solving systems of nonlinear equations. The new methods have high computational efficiency and low computational cost. The order of convergence of the new methods is proved by developing an inverse first-order divided difference operator. The computational efficiency is compared with that of existing methods. Numerical experiments support the theoretical results. Experimental results show that the new methods remarkably reduce the computing time in high-precision computing.
In this paper, a general family of n-point Newton-type iterative methods for solving nonlinear equations is constructed by using direct Hermite interpolation. The order of convergence of the new n-point iterative methods without memory is 2^n, requiring the evaluations of n functions and one first-order derivative per full iteration, which implies that this family is optimal according to Kung and Traub's conjecture (1974). Its error equations and asymptotic convergence constants are obtained. The n-point iterative methods with memory are obtained by using a self-accelerating parameter and achieve much faster convergence than the corresponding n-point methods without memory. The increase in convergence order is attained without any additional calculations, so that the n-point Newton-type iterative methods with memory possess a very high computational efficiency. Numerical examples are presented to confirm the theoretical results.
In this paper, we present a family of three-parameter derivative-free iterative methods with and without memory for solving nonlinear equations. The convergence order of the new method without memory is four, requiring three functional evaluations. Based on the new fourth-order method without memory, we present a family of derivative-free methods with memory. Using three self-accelerating parameters, calculated by Newton interpolatory polynomials, the convergence order of the new methods with memory is increased from 4 to 7.0174 and 7.5311 without any additional calculations. Compared with existing methods with memory, the new methods can attain higher convergence order by using relatively simple self-accelerating parameters. Numerical comparisons are made with some known methods by using basins of attraction and through numerical computations to demonstrate the efficiency and performance of the presented methods.
In this paper, we present a new iterative method of convergence order five for solving nonlinear systems. Per iteration the new method requires the evaluations of two functions, two first derivatives, and one matrix inversion. The computational efficiency …
In this paper, a modified super-Halley method free from second derivatives is presented. The new method requires three evaluations of the function and one of its first derivative per iteration, and its efficiency index is 6^(1/4) ≈ 1.565. Several numerical examples illustrate that the modified method mostly performs better than or comparably to the optimal eighth-order methods (see Appl. Math. Comput., 2010, 217(6): 2448-2455 and J. Comput. Appl. Math., 2010, 233(9): 2278-2284), for example when the initial guesses are not close to the sought zeros. Furthermore, the convergence radius of the new method is greater than the convergence radii of the optimal eighth-order methods, and its extended computational index compares favorably with that of the optimal eighth-order methods. Keywords: super-Halley's method; nonlinear equation; extended computational index; efficiency index; convergence radius. MR(2010) Subject Classification: 65H04; 65H05; 41A25. CLC number: O241.7. Document code: A. Article ID: 1000-0917(2015)01-0151-09
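The efficiency index quoted above is Ostrowski's measure p^(1/n) for a method of order p that costs n function or derivative evaluations per iteration; a quick check in plain Python (the helper name is my own):

```python
def efficiency_index(order, evals):
    """Ostrowski efficiency index p**(1/n): order p gained per iteration
    at the cost of n function/derivative evaluations."""
    return order ** (1.0 / evals)

# Newton: order 2, 2 evaluations; modified super-Halley: order 6, 4 evals;
# optimal eighth-order methods: order 8, 4 evals.
print(efficiency_index(2, 2))  # ≈ 1.414
print(efficiency_index(6, 4))  # ≈ 1.565
print(efficiency_index(8, 4))  # ≈ 1.682
```

This makes the abstract's point concrete: on efficiency index alone the optimal eighth-order methods win, so the claimed advantages of the modified method lie elsewhere (larger convergence radius, robustness to poor initial guesses).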
The problem is to calculate a simple zero of a nonlinear function f by iteration. A family of iterations of order 2^(n-1) is exhibited which uses n evaluations of f and no derivative evaluations, as well as a second family of iterations of order 2^(n-1) based on n − 1 evaluations of f and one of f′. In particular, with four evaluations an iteration of eighth order is constructed. The best previous result for four evaluations was fifth order. It is proved that the optimal order of one general class of multipoint iterations is 2^(n-1) and that an upper bound on the order of a multipoint iteration based on n evaluations of f (no derivatives) is 2^n. It is conjectured that a multipoint iteration without memory based on n evaluations has optimal order 2^(n-1).
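Orders such as those above are routinely verified numerically via the computational order of convergence (COC), estimated from three consecutive errors as p ≈ log(e_{k+1}/e_k) / log(e_k/e_{k-1}); a sketch using Newton's method on a simple equation (the helper names are my own):

```python
import math

def newton(f, df, x0, n_iter=4):
    """Newton's method, returning the full iterate history."""
    xs = [x0]
    for _ in range(n_iter):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

def coc(xs, root):
    """Computational order of convergence from three consecutive errors:
    p ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}).  The very last iterate is
    skipped because its error may round to zero in double precision."""
    e = [abs(x - root) for x in xs[-4:-1]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

xs = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.5)
p = coc(xs, 2**0.5)   # ≈ 2.0, Newton's quadratic order
```

The same two-line `coc` estimate applied to a 4-evaluation scheme should read off close to 8 when the Kung–Traub bound 2^(n-1) is attained.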
In this paper, we present a new family of two-step Newton-type iterative methods with memory for solving nonlinear equations. To obtain a Newton-type method with memory, we first present an optimal two-parameter fourth-order Newton-type method without memory. Then, based on this two-parameter method without memory, we derive a new two-parameter Newton-type method with memory. Using two self-correcting parameters calculated by Hermite interpolatory polynomials, the R-order of convergence of the new Newton-type method with memory is increased from 4 to 5.7016 without any additional calculations. Numerical comparisons are made with some known methods by using basins of attraction and through numerical computations to demonstrate the efficiency and performance of the presented methods.
In this paper, we present three-point Newton-type iterative methods without memory for solving nonlinear equations by using the method of undetermined coefficients. The order of convergence of the new methods without memory is eight, requiring the evaluations of three functions and one first-order derivative per full iteration. Hence, the new methods are optimal according to Kung and Traub's conjecture. Based on the presented methods without memory, we present two families of Newton-type iterative methods with memory. Further acceleration of the convergence speed is obtained by using a self-accelerating parameter, calculated by a Hermite interpolating polynomial and applied to improve the order of convergence of the Newton-type method. The corresponding R-order of convergence is increased from 8 to 9, [Formula: see text], and 10. The increase in convergence order is attained without any additional calculations, so that the two families of methods with memory possess a very high computational efficiency. Numerical examples are presented to confirm the theoretical results.
Neta's three-step sixth-order family of methods for solving nonlinear equations requires 3 function evaluations and 1 derivative evaluation per iteration. Using exactly the same information, another three-step method can be obtained with convergence rate 10.81525, which is much better than sixth order.
We analyze the dynamical behavior of an eighth-order Sharma iterative scheme, which contains a single parameter, on arbitrary quadratic polynomials using complex analysis. The eighth-order Sharma iterative scheme is analytically conjugated to a rational operator on the Riemann sphere. We discuss the strange fixed points of the rational operator and present its stability region graph. Additionally, we briefly investigate the superattracting and critical points, which affect the behavior of the Sharma iterative scheme under discussion. Finally, we present the dynamical planes for different parameter values using tools from complex dynamics, which helps us select the more effective members of the Sharma iterative family. Numerical experiments are conducted to verify the theoretical results.
In this paper, the stability of a class of Liu–Wang's optimal eighth-order single-parameter iterative methods for finding simple roots of nonlinear equations is studied by applying them to arbitrary quadratic polynomials. Working on the Riemann sphere and using the scaling theorem, the complex dynamical behavior of the iterative methods is analyzed through fractals. We discuss the stability of all fixed points and the parameter spaces starting from the critical points, using the Mathematica software. The dynamical planes of members with good and bad dynamical behavior are given, and an optimal parameter value with stable behavior is obtained. Finally, a numerical experiment and a practical application are carried out to support the conclusions.
General Preliminaries: 1.1 Introduction 1.2 Basic concepts and notations
General Theorems on Iteration Functions: 2.1 The solution of a fixed-point problem 2.2 Linear and superlinear convergence 2.3 The iteration calculus
The Mathematics of Difference Relations: 3.1 Convergence of difference inequalities 3.2 A theorem on the solutions of certain inhomogeneous difference equations 3.3 On the roots of certain indicial equations 3.4 The asymptotic behavior of the solutions of certain difference equations
Interpolatory Iteration Functions: 4.1 Interpolation and the solution of equations 4.2 The order of interpolatory iteration functions 4.3 Examples
One-Point Iteration Functions: 5.1 The basic sequence $E_s$ 5.2 Rational approximations to $E_s$ 5.3 A basic sequence of iteration functions generated by direct interpolation 5.4 The fundamental theorem of one-point iteration functions 5.5 The coefficients of the error series of $E_s$
One-Point Iteration Functions with Memory: 6.1 Interpolatory iteration functions 6.2 Derivative-estimated one-point iteration functions with memory 6.3 Discussion of one-point iteration functions with memory
Multiple Roots: 7.1 Introduction 7.2 The order of $E_s$ 7.3 The basic sequence $\scr{E}_s$ 7.4 The coefficients of the error series of $\scr{E}_s$ 7.5 Iteration functions generated by direct interpolation 7.6 One-point iteration functions with memory 7.7 Some general results 7.8 An iteration function of incommensurate order
Multipoint Iteration Functions: 8.1 The advantages of multipoint iteration functions 8.2 A new interpolation problem 8.3 Recursively formed iteration functions 8.4 Multipoint iteration functions generated by derivative estimation 8.5 Multipoint iteration functions generated by composition 8.6 Multipoint iteration functions with memory
Multipoint Iteration Functions: Continuation: 9.1 Introduction 9.2 Multipoint iteration functions of type 1 9.3 Multipoint iteration functions of type 2 9.4 Discussion of criteria for the selection of an iteration function
Iteration Functions Which Require No Evaluation of Derivatives: 10.1 Introduction 10.2 Interpolatory iteration functions 10.3 Some additional iteration functions
Systems of Equations: 11.1 Introduction 11.2 The generation of vector-valued iteration functions by inverse interpolation 11.3 Error estimates for some vector-valued iteration functions 11.4 Vector-valued iteration functions which require no derivative evaluations
A Compilation of Iteration Functions: 12.1 Introduction 12.2 One-point iteration functions 12.3 One-point iteration functions with memory 12.4 Multiple roots 12.5 Multipoint iteration functions 12.6 Multipoint iteration functions with memory 12.7 Systems of equations
Appendices: A. Interpolation B. On the $j$th derivative of the inverse function C. Significant figures and computational efficiency D. Acceleration of convergence E. Numerical examples F. Areas for future research
Bibliography. Index.
Two novel Kurchatov-type first-order divided difference operators are designed and used to construct the variable parameter of three derivative-free iterative methods. The convergence orders of the new derivative-free methods are 3, (5+√17)/2 ≈ 4.56, and 5. The new derivative-free iterative methods with memory are applied to solve nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs) in numerical experiments. The dynamical behavior of the new methods with memory is studied by using dynamical planes, which show that the methods have good stability.
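The Kurchatov divided difference underlying such operators is the symmetric two-step difference f[2x_n − x_{n−1}, x_{n−1}]; a minimal sketch of the classical Kurchatov iteration with memory built on it (the paper's operators are novel variants of this; function and variable names are my own):

```python
def kurchatov(f, x0, x1, tol=1e-12, max_iter=50):
    """Kurchatov's derivative-free method with memory: the derivative is
    replaced by the symmetric divided difference
        (f(2*x_n - x_prev) - f(x_prev)) / (2*(x_n - x_prev)),
    which reuses the previous iterate instead of an extra derivative."""
    x_prev, x = x0, x1
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = (f(2.0 * x - x_prev) - f(x_prev)) / (2.0 * (x - x_prev))
        x_prev, x = x, x - fx / denom
    return x

root = kurchatov(lambda x: x**2 - 2.0, 1.0, 1.5)  # converges to sqrt(2)
```

Because it keeps one old iterate, the method needs two starting points but no derivative evaluations, which is exactly the trade-off the abstract's derivative-free methods exploit.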
In this paper, a family of Newton-type iterative methods with memory is obtained for solving nonlinear equations, using some special self-accelerating parameters. To this end, we first present two optimal fourth-order iterative methods without memory for solving nonlinear equations. Then we give a novel way to construct the self-accelerating parameter and obtain a family of Newton-type iterative methods with memory. The self-accelerating parameters have simple structure and are easy to calculate, so they do not increase the computational cost of the iterative methods. The convergence order of the new iterative methods is increased from 4 to 2+√7 ≈ 4.64575. Numerical comparisons are made with some known methods by using basins of attraction and through numerical computations to demonstrate the efficiency and performance of the new methods. Experimental results show that, compared with existing methods, the new iterative methods with memory cost less computing time.
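A self-accelerating parameter in its simplest form appears in the Traub–Steffensen iteration with memory, where the parameter is recomputed at each step from values that are already available; a hedged sketch (this is the textbook construction, not the paper's specific family; names are my own):

```python
def steffensen_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
    """Steffensen-type iteration with one self-accelerating parameter:
    gamma is updated each step as -1/f[x_n, x_{n+1}], raising the
    convergence order above 2 with no extra evaluations in principle
    (here f(x_new) is re-evaluated for clarity; in an optimized loop it
    would be reused as the next step's f(x))."""
    x, gamma = x0, gamma0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx
        denom = (f(w) - fx) / (gamma * fx)      # divided difference f[x, w]
        x_new = x - fx / denom
        gamma = -(x_new - x) / (f(x_new) - fx)  # gamma = -1/f[x, x_new]
        x = x_new
    return x

root = steffensen_with_memory(lambda x: x**2 - 2.0, 1.5)
```

The abstract's parameters play the same role as `gamma` here: they are computed from quantities the iteration already produced, which is why the order rises without added cost.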
The dynamical dichotomy of Fatou and Julia. Periodic points. Consequences of Montel's theorem. The Julia set is the closure of the set of repelling periodic points. Classical results on the Fatou set. Sullivan's classification of the Fatou set. A condition for expansion on the Julia set. The dynamics of polynomials. The Mandelbrot set and the work of Douady and Hubbard. The measurable Riemann mapping theorem and analytic dynamics.
A family of fourth order iterative methods for finding simple zeros of nonlinear functions is displayed. The methods require evaluation of the function and its derivative at the starting point of each step and of the function alone at the so-called Newton point. A known method of Ostrowski and two new procedures are shown to be part of the family.
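Ostrowski's fourth-order member of this family can be written as a Newton step to the Newton point, followed by a weighted correction using the function value there; a minimal sketch (function names are my own):

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=25):
    """Ostrowski's fourth-order method: 3 evaluations per step
    (f and f' at the starting point, f alone at the Newton point)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx                            # Newton point
        fy = f(y)
        x = y - fy * fx / ((fx - 2.0 * fy) * dfx)   # weighted correction
    return x

root = ostrowski(lambda x: x**3 - 2.0, lambda x: 3.0 * x * x, 1.5)
```

With order 4 from three evaluations, this attains the Kung–Traub optimal bound for methods without memory, which is why Ostrowski's scheme is the standard base for the higher-order families in the surrounding abstracts.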
The complex dynamical analysis of the parametric fourth-order Kim iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us with excellent schemes (or dreadful ones).
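Fractal pictures like these are produced by iterating the scheme from every point of a complex grid and colouring each point by the root it reaches; a sketch in Python for Newton's method on z³ − 1 (the cited study uses MATLAB and Kim's family; this grid code is illustrative only, with my own names):

```python
import cmath

def newton_basins(width=200, height=200, xlim=(-2.0, 2.0), ylim=(-2.0, 2.0),
                  max_iter=40, tol=1e-6):
    """Grid of basin labels for Newton's method on z**3 - 1: each cell
    holds the index (1..3) of the root reached from that starting point,
    or 0 if no root is reached; colouring this grid gives the fractal
    basin-of-attraction picture."""
    roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    grid = []
    for j in range(height):
        y = ylim[0] + (ylim[1] - ylim[0]) * j / (height - 1)
        row = []
        for i in range(width):
            x = xlim[0] + (xlim[1] - xlim[0]) * i / (width - 1)
            z = complex(x, y)
            label = 0
            for _ in range(max_iter):
                if z == 0:          # derivative vanishes; leave label 0
                    break
                z = z - (z**3 - 1) / (3 * z**2)     # Newton step
                hit = [k for k, r in enumerate(roots, 1) if abs(z - r) < tol]
                if hit:
                    label = hit[0]
                    break
            row.append(label)
        grid.append(row)
    return grid
```

Swapping the Newton step for a member of a parametric family and sweeping the parameter yields exactly the parameter-space and dynamical-plane pictures these stability studies analyze.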