Abstract
We present a local convergence analysis of a Newton-Traub composition method in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. The method was shown to be of convergence order five, if defined on the m-dimensional Euclidean space, using Taylor expansions and hypotheses reaching up to the fifth derivative (Hueso, Martínez, & Teruel, 2015; Sharma, 2014). We expand the applicability of this method using contractive techniques and hypotheses only on the first Fréchet-derivative of the operator involved. Moreover, we provide a computable radius of convergence and error estimates on the distances involved, not given in the earlier studies (Hueso et al., 2015; Sharma, 2014). Numerical examples illustrating the theoretical results are also presented in this study.
Public Interest Statement
The most commonly used solution methods for finding zeros of an equation are iterative, i.e. starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. In the present paper, we present a local convergence analysis of a Newton-Traub composition method in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. The method was shown to be of convergence order five, if defined on the m-dimensional Euclidean space, using Taylor expansions and hypotheses reaching up to the fifth derivative. We expand the applicability of this method using contractive techniques and hypotheses only on the first Fréchet-derivative of the operator involved. Moreover, we provide a computable radius of convergence and error estimates on the distances involved, not given in the earlier studies.
1. Introduction
In this paper, we are concerned with the problem of approximating a locally unique solution x* of the equation
F(x) = 0, (1.1)
where F is a Fréchet-differentiable operator defined on a convex subset D of a Banach space X with values in a Banach space Y.
Many problems in computational sciences and other disciplines can be brought into a form like Equation (1.1) using mathematical modeling (Argyros, 2007; Argyros & Hilout, 2013; Kantorovich & Akilov, 1982; Petkovic, Neta, Petkovic, & Džunič, 2013; Traub, 1964). It is known that the solutions of these equations can rarely be found in closed form, so most solution methods for these equations are iterative. Newton-like iterative methods (Amat, Busquier, & Gutiérrez, 2003; Argyros, 1985, 2004, 2007; Argyros & Chen, 1993; Argyros & Hilout, 2012, 2013; Candela & Marquina, 1990; Chun, Stanica, & Neta, 2011; Gutiérrez & Hernández, 1998; Hernández, 2000, 2001; Hernández & Salanova, 2000; Hueso et al., 2015; Kantorovich & Akilov, 1982; Magreñán, 2013; Petkovic et al., 2013; Rheinboldt, 1978; Sharma, 2014; Traub, 1964; Wang & Kou, 2013) are famous for approximating a solution of Equation (1.1). These methods are usually studied based on semi-local and local convergence. The semi-local convergence analysis is, based on the information around an initial point, to give conditions ensuring the convergence of the iterative procedure, while the local one is, based on the information around a solution, to find estimates of the radii of the convergence balls (Amat et al., 2003; Argyros, 1985, 2004, 2007; Argyros & Chen, 1993; Argyros & Hilout, 2012, 2013; Candela & Marquina, 1990; Chun et al., 2011; Gutiérrez & Hernández, 1998; Hernández, 2000, 2001; Hernández & Salanova, 2000; Hueso et al., 2015; Kantorovich & Akilov, 1982; Magreñán, 2013; Petkovic et al., 2013; Rheinboldt, 1978; Sharma, 2014; Traub, 1964; Wang & Kou, 2013).
In this paper, we study the local convergence analysis of the method defined for each n = 0, 1, 2, … by
(1.2)
where x_0 is an initial point and I is the identity operator on X.
Method (1.2) was introduced and studied in Sharma (2014) for a special choice of the operators involved. Using Taylor expansions and hypotheses reaching up to the fifth derivative (in this special case), Sharma showed that method (1.2) is of convergence order five.
The efficiency index of method (1.2) compares favorably to that of other competing methods such as Euler's, Halley's, super-Halley's and Chebyshev's (Amat et al., 2003; Argyros, 1985, 2004, 2007; Argyros & Chen, 1993; Argyros & Hilout, 2012, 2013; Candela & Marquina, 1990; Chun et al., 2011; Gutiérrez & Hernández, 1998; Hernández, 2000, 2001; Hernández & Salanova, 2000; Hueso et al., 2015; Kantorovich & Akilov, 1982; Magreñán, 2013; Petkovic et al., 2013; Rheinboldt, 1978; Sharma, 2014; Traub, 1964; Wang & Kou, 2013). Notice that method (1.2) uses four functional evaluations per step. Therefore, the efficiency index EI = p^{1/m}, where p is the order of convergence and m is the number of functional evaluations, is EI = 5^{1/4} ≈ 1.4953.
The local convergence of the preceding methods has been shown under hypotheses reaching up to the fifth derivative (or even higher). These hypotheses on the high-order derivatives restrict the applicability of the methods. As a motivational example, define function f on D = [-1/2, 5/2] by
f(x) = x^3 ln x^2 + x^5 - x^4 if x ≠ 0, f(0) = 0. (1.3)
Choose x* = 1. We also have that
f'(x) = 3x^2 ln x^2 + 5x^4 - 4x^3 + 2x^2,
f''(x) = 6x ln x^2 + 20x^3 - 12x^2 + 10x
and
f'''(x) = 6 ln x^2 + 60x^2 - 24x + 22.
Notice that f''' is unbounded on D.
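As a quick numerical check (a sketch in Python, assuming the expression for f''' displayed above), evaluating f''' at points approaching the origin shows the blow-up:

```python
import math

def fppp(x):
    # f'''(x) = 6 ln(x^2) + 60x^2 - 24x + 22 for f(x) = x^3 ln(x^2) + x^5 - x^4
    return 6.0 * math.log(x * x) + 60.0 * x * x - 24.0 * x + 22.0

# f''' tends to -infinity as x -> 0, so it is unbounded on any domain
# containing the origin, such as D = [-1/2, 5/2].
for x in (1e-1, 1e-3, 1e-6, 1e-9):
    print(f"f'''({x:.0e}) = {fppp(x):.2f}")
```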
Hence, the results in Sharma (2014) cannot be applied to show the convergence of method (1.2) or its special cases requiring hypotheses on the third derivative of function F or higher. Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations (Amat et al., 2003; Argyros, 1985, 2004, 2007; Argyros & Chen, 1993; Argyros & Hilout, 2012, 2013; Candela & Marquina, 1990; Chun et al., 2011; Gutiérrez & Hernández, 1998; Hernández, 2000, 2001; Hernández & Salanova, 2000; Hueso et al., 2015; Kantorovich & Akilov, 1982; Magreñán, 2013; Petkovic et al., 2013; Rheinboldt, 1978; Sharma, 2014; Traub, 1964; Wang & Kou, 2013). These results show that if the initial point x_0 is sufficiently close to the solution x*, then the sequence {x_n} converges to x*. But how close to the solution x* should the initial guess x_0 be? These local results give no information on the radius of the convergence ball for the corresponding method.
Moreover, notice that the convergence ball of high convergence order methods is usually very small and, in general, decreases as the convergence order increases. Our approach establishes the local convergence result under hypotheses only on the first derivative. It can give a larger convergence ball than the earlier studies, under weaker hypotheses, and the same technique can be applied to other methods.
The rest of the paper is organized as follows. In Section 2, we present the local convergence analysis of method (Equation 1.2). The numerical examples are given in the concluding Section 3.
2. Local convergence
We present the local convergence analysis of method (1.2) in this section. Let L_0 > 0, L > 0 and M ≥ 1 be given parameters. It is convenient for the local convergence analysis of method (1.2) that follows to define some scalar functions and parameters. Define functions g_1, g_2 and g_3 on the interval [0, 1/L_0) and associated parameters r_1, r_2 and r_3 as follows. Suppose that
(2.1)
holds. Then, we have by (2.1) that g_1(t) - 1, g_2(t) - 1 and g_3(t) - 1 are negative for each t sufficiently close to 0. Using the definition of functions g_1, g_2 and g_3, we obtain that g_i(t) - 1 → +∞ as t → (1/L_0)^-, i = 1, 2, 3. Then, it follows from the intermediate value theorem that functions g_1 - 1, g_2 - 1 and g_3 - 1 have zeros in the interval (0, 1/L_0). Denote by r_1, r_2 and r_3, respectively, the smallest of such zeros. Set
r = min{r_1, r_2, r_3}. (2.2)
Then, we have that
0 < r ≤ r_1 < 1/L_0, (2.3)
and for each t ∈ [0, r),
0 ≤ g_1(t) < 1, (2.4)
0 ≤ g_2(t) < 1, (2.5)
and
0 ≤ g_3(t) < 1. (2.6)
Denote by U(v, ρ) and Ū(v, ρ) the open and closed balls in X, respectively, with center v ∈ X and of radius ρ > 0.
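In practice, r_1, r_2 and r_3 are computed numerically as the smallest zeros of g_i(t) - 1. The following sketch does this by bisection; the function g_1(t) = Lt/(2(1 - L_0 t)) used here for illustration is the classical first sub-step (Newton) function from Argyros (2004), not necessarily one of the "g" functions of method (1.2), and the values L_0 = 1, L = 2 are arbitrary:

```python
def smallest_zero(h, a, b, tol=1e-12):
    """Bisection for a zero of h on (a, b), assuming h(a) < 0 < h(b)."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if h(m) < 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

L0, L = 1.0, 2.0                                # illustrative parameters
g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))   # Newton-step "g" function
r1 = smallest_zero(lambda t: g1(t) - 1.0, 0.0, 1.0 / L0 - 1e-9)
print(r1, 2.0 / (2.0 * L0 + L))                 # both ~0.5; r1 = 2/(2 L0 + L)
```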
Next, we present the local convergence result for method (1.2) using the preceding notation.
Theorem 2.1
Let F : D ⊆ X → Y be a Fréchet-differentiable operator. Suppose that there exist x* ∈ D, L_0 > 0, L > 0 and M ≥ 1 such that, for each x, y ∈ D,
F(x*) = 0, F'(x*)^{-1} ∈ L(Y, X), (2.7)
||F'(x*)^{-1}(F'(x) - F'(x*))|| ≤ L_0 ||x - x*||, (2.8)
||F'(x*)^{-1}(F'(x) - F'(y))|| ≤ L ||x - y||, (2.9)
||F'(x*)^{-1} F'(x)|| ≤ M, (2.10)
and
Ū(x*, r) ⊆ D, (2.11)
where the radius r is given by Equation (2.2). Then, the sequence {x_n} generated by method (1.2) for x_0 ∈ U(x*, r) \ {x*} is well defined, remains in U(x*, r) for each n = 0, 1, 2, … and converges to x*. Moreover, the following estimates hold:
||y_n - x*|| ≤ g_1(||x_n - x*||) ||x_n - x*|| < ||x_n - x*|| < r, (2.12)
||z_n - x*|| ≤ g_2(||x_n - x*||) ||x_n - x*|| < ||x_n - x*||, (2.13)
and
||x_{n+1} - x*|| ≤ g_3(||x_n - x*||) ||x_n - x*|| < ||x_n - x*||, (2.14)
where the "g" functions are defined previously. Furthermore, for T ∈ [r, 2/L_0), the limit point x* is the only solution of the equation F(x) = 0 in Ū(x*, T) ∩ D.
Proof
The proof uses induction to show estimates (2.12)–(2.14). In view of (2.2), the hypothesis x_0 ∈ U(x*, r) \ {x*} and (2.8), we get that
||F'(x*)^{-1}(F'(x_0) - F'(x*))|| ≤ L_0 ||x_0 - x*|| < L_0 r < 1. (2.15)
It follows from Equation (2.15) and the Banach lemma on invertible operators (Argyros, 2007; Argyros & Hilout, 2013; Kantorovich & Akilov, 1982) that F'(x_0)^{-1} ∈ L(Y, X) and
||F'(x_0)^{-1} F'(x*)|| ≤ 1 / (1 - L_0 ||x_0 - x*||). (2.16)
Hence, y_0 is well defined by the first sub-step of method (1.2) for n = 0.
We can write by Equation (2.7) that
F(x_0) = F(x_0) - F(x*) = ∫_0^1 F'(x* + θ(x_0 - x*)) (x_0 - x*) dθ. (2.17)
Notice that ||x* + θ(x_0 - x*) - x*|| = θ ||x_0 - x*|| < r. Hence, we deduce that x* + θ(x_0 - x*) ∈ U(x*, r). Using Equations (2.10) and (2.17), we get that
||F'(x*)^{-1} F(x_0)|| ≤ M ||x_0 - x*||. (2.18)
Then, using Equations (2.2), (2.3), (2.7), (2.9), (2.16), (2.17) and the first sub-step of method (1.2) for n = 0, we get that
||y_0 - x*|| ≤ g_1(||x_0 - x*||) ||x_0 - x*|| < ||x_0 - x*|| < r, (2.19)
which shows (2.12) for n = 0 and y_0 ∈ U(x*, r).
The second sub-step of method (1.2) for n = 0 can be written as
(2.20)
Then, using Equations (2.2), (2.3), (2.8), (2.16), (2.18), (2.19) and (2.20), we have in turn that
||z_0 - x*|| ≤ g_2(||x_0 - x*||) ||x_0 - x*|| < ||x_0 - x*|| < r, (2.21)
which shows (2.13) for n = 0 and z_0 ∈ U(x*, r). Then, we also have, as in Equation (2.18), that
||F'(x*)^{-1} F(z_0)|| ≤ M ||z_0 - x*||, (2.22)
since z_0 ∈ U(x*, r). In view of Equations (2.2), (2.4), (2.8), (2.16), (2.19) and (2.22), we get in turn that
||x_1 - x*|| ≤ g_3(||x_0 - x*||) ||x_0 - x*|| < ||x_0 - x*|| < r, (2.23)
which shows (2.14) for n = 0 and x_1 ∈ U(x*, r).
By simply replacing x_0, y_0, z_0, x_1 by x_k, y_k, z_k, x_{k+1} in the preceding estimates, we arrive at estimates (2.12)–(2.14). Using the estimate ||x_{k+1} - x*|| ≤ c ||x_k - x*|| < r, where c = g_3(||x_0 - x*||) ∈ [0, 1), we deduce that lim_{k→∞} x_k = x* and x_{k+1} ∈ U(x*, r). Finally, to show the uniqueness part, let F(y*) = 0 for some y* ∈ Ū(x*, T) ∩ D, and define Q = ∫_0^1 F'(x* + θ(y* - x*)) dθ. Using Equation (2.8), we get that
||F'(x*)^{-1}(Q - F'(x*))|| ≤ ∫_0^1 L_0 θ ||y* - x*|| dθ ≤ (L_0 / 2) T < 1.
It follows that the linear operator Q is invertible. Then, from the identity 0 = F(y*) - F(x*) = Q(y* - x*), we conclude that x* = y*.
Remark 2.2
(a) The radius r_A = 2/(2L_0 + L) was obtained by Argyros (2004) as the convergence radius for Newton's method under conditions (2.7)–(2.9). Notice that the convergence radius for Newton's method given independently by Rheinboldt (1978) and Traub (1964) is
r_R = 2/(3L). (2.24)
As an example, let us consider the function f(x) = e^x - 1. Then x* = 0. Set D = U(0, 1). Then, we have that L_0 = e - 1 < L = e, so
r_R = 0.245253… < r_A = 0.324947….
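These two radii can be compared numerically for the example above (a small Python check using the quoted formulas):

```python
import math

L0, L = math.e - 1.0, math.e   # constants for f(x) = exp(x) - 1 on D = U(0, 1)
rA = 2.0 / (2.0 * L0 + L)      # Argyros (2004)
rR = 2.0 / (3.0 * L)           # Rheinboldt (1978) / Traub (1964)
print(f"rA = {rA:.6f}, rR = {rR:.6f}")  # rA = 0.324947, rR = 0.245253
```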
Moreover, the new error bounds (Argyros, 2004, 2007; Argyros & Hilout, 2012, 2013) are
||x_{n+1} - x*|| ≤ (L ||x_n - x*||^2) / (2(1 - L_0 ||x_n - x*||)),
whereas the old ones (Kantorovich & Akilov, 1982; Petkovic et al., 2013) are
||x_{n+1} - x*|| ≤ (L ||x_n - x*||^2) / (2(1 - L ||x_n - x*||)).
Clearly, the new error bounds are more precise, if L_0 < L. Clearly, we do not expect the radius of convergence of method (1.2) given by r to be larger than r_1 (see Equation (2.3)).
(b) The local results can be used for projection methods such as Arnoldi's method, the generalized minimum residual method (GMREM), the generalized conjugate method (GCM) for combined Newton/finite projection methods, and in connection to the mesh independence principle, in order to develop the cheapest and most efficient mesh refinement strategy (Argyros, 2004, 2007; Argyros & Hilout, 2012, 2013).
(c) The results can also be used to solve equations where the operator F' satisfies the autonomous differential equation (Argyros, 2007; Argyros & Hilout, 2013; Kantorovich & Akilov, 1982; Petkovic et al., 2013)
F'(x) = P(F(x)),
where P is a known continuous operator. Since F'(x*) = P(F(x*)) = P(0), we can apply the results without actually knowing the solution x*. Let, as an example, F(x) = e^x - 1. Then, we can choose P(x) = x + 1 and x* = 0.
(d) It is worth noticing that method (1.2) does not change if we use the new instead of the old conditions (Hueso et al., 2015; Sharma, 2014). Moreover, for the error bounds in practice we can use the computational order of convergence (COC)
ξ = ln(||x_{n+1} - x*|| / ||x_n - x*||) / ln(||x_n - x*|| / ||x_{n-1} - x*||),
or the approximate computational order of convergence (ACOC)
ξ* = ln(||x_{n+1} - x_n|| / ||x_n - x_{n-1}||) / ln(||x_n - x_{n-1}|| / ||x_{n-1} - x_{n-2}||),
instead of the error bounds obtained in Theorem 2.1 (a small computational sketch is given after this remark).
(e) In view of (2.8) and the estimate
||F'(x*)^{-1} F'(x)|| = ||F'(x*)^{-1}(F'(x) - F'(x*)) + I|| ≤ 1 + L_0 ||x - x*||,
condition (2.10) can be dropped and M can be replaced by M(t) = 1 + L_0 t or M(t) = M = 2, since t ∈ [0, 1/L_0).
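A minimal sketch of how COC and ACOC are evaluated in practice follows; the error sequence used at the end is hypothetical and only mimics ideal fifth-order decay:

```python
import math

def coc(errors):
    """COC from the last three error norms ||x_n - x*|| (x* known)."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

def acoc(xs):
    """ACOC from the last four iterates (no knowledge of x* needed)."""
    d0, d1, d2 = (abs(xs[i + 1] - xs[i]) for i in range(-4, -1))
    return math.log(d2 / d1) / math.log(d1 / d0)

print(coc([1e-1, 1e-5, 1e-25]))  # ~5.0 for an ideally fifth-order sequence
```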
3. Numerical examples
We present numerical examples in this section.
Example 3.1
Let X = Y = R^3 and D = Ū(0, 1). Define F on D for v = (x, y, z)^T by
F(v) = (e^x - 1, ((e - 1)/2) y^2 + y, z)^T. (3.1)
Then, the Fréchet-derivative is given by
F'(v) = diag(e^x, (e - 1)y + 1, 1).
Notice that x* = (0, 0, 0)^T and F'(x*) = I. Then, the parameters are L_0 = e - 1, L = e and M = 2 (see Remark 2.2(e)).
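As an illustration (a sketch; it iterates only the classical Newton sub-step of method (1.2), since the full scheme depends on the choices made in (1.2)), the system above can be solved numerically:

```python
import numpy as np

E = np.e

def F(v):
    x, y, z = v
    return np.array([np.exp(x) - 1.0, 0.5 * (E - 1.0) * y**2 + y, z])

def Fp(v):
    x, y, z = v
    return np.diag([np.exp(x), (E - 1.0) * y + 1.0, 1.0])

v = np.array([0.5, 0.5, 0.5])             # x_0 inside U(x*, 1)
for _ in range(8):
    v = v - np.linalg.solve(Fp(v), F(v))  # Newton sub-step
print(v)                                  # -> x* = (0, 0, 0)^T
```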
Example 3.2
Let X = Y = C[0, 1], the space of continuous functions defined on [0, 1], equipped with the max norm, and let D = Ū(0, 1). Define function F on D by
F(φ)(x) = φ(x) - 5 ∫_0^1 x θ φ(θ)^3 dθ. (3.2)
We have that
F'(φ)(ξ)(x) = ξ(x) - 15 ∫_0^1 x θ φ(θ)^2 ξ(θ) dθ, for each ξ ∈ D.
Then, we get that x* = 0, and the parameters are L_0 = 7.5, L = 15 and M = 2 (see Remark 2.2(e)).
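A discretized counterpart can be solved numerically. The sketch below uses trapezoidal quadrature and, again, only Newton's sub-step; both choices are illustrative and not part of the original study:

```python
import numpy as np

m = 101
t = np.linspace(0.0, 1.0, m)             # nodes theta_j in [0, 1]
w = np.full(m, 1.0 / (m - 1))            # trapezoidal weights
w[0] = w[-1] = 0.5 / (m - 1)

def F(phi):
    # (F(phi))(t_i) = phi(t_i) - 5 t_i sum_j w_j theta_j phi_j^3
    return phi - 5.0 * t * np.sum(w * t * phi**3)

def Fp(phi):
    # Frechet derivative: identity minus the rank-one kernel part
    return np.eye(m) - 15.0 * np.outer(t, w * t * phi**2)

phi = 0.2 * np.ones(m)                   # start inside the unit ball
for _ in range(10):
    phi = phi - np.linalg.solve(Fp(phi), F(phi))
print(np.max(np.abs(phi)))               # ~0: iterates converge to x* = 0
```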
Example 3.3
Returning to the motivational example in the introduction of this paper, we have x* = 1 and f'(x*) = 3. Then, the parameters are L_0 = L = 96.6629073 and M = 2 (see Remark 2.2(e)).
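The value of L can be checked numerically (a sketch assuming f from Section 1 on D = [-1/2, 5/2]; the maximum of |f''(x)| / |f'(1)| is attained at x = 5/2):

```python
import numpy as np

def fpp(x):
    # f''(x) = 6x ln(x^2) + 20x^3 - 12x^2 + 10x, for x != 0
    return 6.0 * x * np.log(x**2) + 20.0 * x**3 - 12.0 * x**2 + 10.0 * x

xs = np.linspace(-0.5, 2.5, 3_000_001)
xs = xs[xs != 0.0]                  # f'' extends continuously by 0 at x = 0
L = np.max(np.abs(fpp(xs))) / 3.0   # f'(x*) = f'(1) = 3
print(L)                            # ~96.6629073, attained at x = 5/2
```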
Notes on contributors
Ioannis K. Argyros
Ioannis K. Argyros was born in 1956 in Athens, Greece. He received a BSc from the University of Athens, Greece, and an MSc and a PhD from the University of Georgia, Athens, Georgia, USA, under the supervision of Douglas N. Clark. Argyros is currently a full professor of Mathematics at Cameron University, Lawton, OK, USA. He has published more than 800 papers and 17 books/monographs in his area of research, computational mathematics. He is also the editor of 20 peer-reviewed research journals in Mathematics.
Santhosh George
Santhosh George was born in 1967 in Kerala, India. He received his PhD degree in Mathematics from Goa University, under the supervision of M. T. Nair. George is a Professor of Mathematics at the National Institute of Technology Karnataka. Five students have completed their PhD under his guidance. He has many international journal and conference papers to his credit.
References
- Amat, S., Busquier, S., & Gutiérrez, J. M. (2003). Geometric constructions of iterative functions to solve nonlinear equations. Journal of Computational and Applied Mathematics, 157, 197–205.
- Argyros, I. K. (1985). Quadratic equations and applications to Chandrasekhar’s and related equations. Bulletin of the Australian Mathematical Society, 32, 275–292.
- Argyros, I. K. (2004). A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach space. Journal of Mathematical Analysis and Applications, 298, 374–397.
- Argyros, I. K. (2007). In C. K. Chui & L. Wuytack (Eds.), Computational theory of iterative methods (Studies in Computational Mathematics, Vol. 15). New York: Elsevier.
- Argyros, I. K., & Chen, D. (1993). Results on the Chebyshev method in Banach spaces. Proyecciones, 12, 119–128.
- Argyros, I. K., & Hilout, S. (2012). Weaker conditions for the convergence of Newton’s method. Journal of Complexity, 28, 364–387.
- Argyros, I. K., & Hilout, S. (2013). Numerical methods in nonlinear analysis. Hackensack, NJ: World Scientific.
- Candela, V., & Marquina, A. (1990). Recurrence relations for rational cubic methods II: The Chebyshev method. Computing, 45, 355–367.
- Chun, C., Stanica, P., & Neta, B. (2011). Third order family of methods in Banach spaces. Computers and Mathematics with Applications, 61, 1665–1675.
- Gutiérrez, J. M., & Hernández, M. A. (1998). Recurrence relations for the super-Halley method. Computers & Mathematics with Applications, 36, 1–8.
- Hernández, M. A. (2000). Second-derivative-free variant of the Chebyshev method for nonlinear equations. Journal of Optimization Theory and Applications, 104, 501–515.
- Hernández, M. A. (2001). Chebyshev's approximation algorithms and applications. Computers & Mathematics with Applications, 41, 433–455.
- Hernández, M. A. & Salanova, M. A. (2000). Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method. Journal of Computational and Applied Mathematics, 126, 131–143.
- Hueso, J. L., Martínez, E., & Teruel, C. (2015). Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. Journal of Computational and Applied Mathematics, 275, 412–420.
- Kantorovich, L. V., & Akilov, G. P. (1982). Functional analysis. Oxford: Pergamon Press.
- Magreñán, Á. A. (2013). Estudio de la dinámica del método de Newton amortiguado (PhD thesis). Servicio de Publicaciones, Universidad de La Rioja. Retrieved from http://dialnet.unirioja.es/servlet/tesis?codigo=38821
- Petkovic, M. S., Neta, B., Petkovic, L., & Džunič, J. (2013). Multipoint methods for solving nonlinear equations. Waltham, MA: Elsevier.
- Rheinboldt, W. C. (1978). An adaptive continuation process for solving systems of nonlinear equations. In A. N. Tikhonov, et al. (Ed.), Mathematical models and numerical methods (Vol. 3, pp. 129–142). Warsaw: Banach Center.
- Sharma, J. R. (2014). Some fifth and sixth order iterative methods for solving nonlinear equations. International Journal of Engineering Research and Applications, 4, 268–273.
- Traub, J. F. (1964). Iterative methods for the solution of equations (Series in Automatic Computation). Englewood Cliffs, NJ: Prentice-Hall.
- Wang, X., & Kou, J. (2013). Semilocal convergence and R-order for modified Chebyshev–Halley methods. Numerical Algorithms, 64, 105–126.