Abstract
We consider an ill-posed equation in a Hilbert space with a noisy operator and a noisy right-hand side. The noise level information is given in a general form, as the norm of a certain operator applied to the noise. We derive the monotone error rule (ME-rule) for the choice of the regularization parameter in many methods: it gives a parameter such that the error of the regularized solution decreases monotonically as the parameter decreases to this value in the Tikhonov method, and as the iteration number increases up to the chosen stopping index in iteration methods. The regularization methods considered include $L$-scale regularization in the (iterated) Tikhonov method and in iteration methods (Landweber method, CG-type methods, semi-iterative methods). We also consider modifications of the ME-rule and show in numerical experiments (test problems from Hansen's Regularization Toolbox, including the sideways heat equation) their advantages over the discrepancy principle.
Introduction
In this paper, we consider linear ill-posed problems
(1) $Au = f$,
where $A \in \mathcal{L}(X, Y)$ is a bounded linear operator with non-closed range $\mathcal{R}(A)$, and $X$, $Y$ are infinite-dimensional real Hilbert spaces with inner products $(\cdot,\cdot)$ and norms $\|\cdot\|$. We are interested in the minimum-norm solution $u_*$ of problem (1) and assume that instead of exact data $f$ and $A$, there are given noisy data $f_\delta$ and $A_h$ with
(2) $\|f_\delta - f\| \le \delta, \qquad \|A_h - A\| \le h$
and known noise levels $\delta$, $h$. Later, we also consider the case of noise level information in the more general form (4).
Ill-posed problems (1) arise in a wide variety of problems in applied sciences. For their stable numerical solution, regularization methods are necessary, see [Citation1, Citation2]. Regularization methods include Tikhonov regularization $u_\alpha = (\alpha I + A_h^*A_h)^{-1}A_h^*f_\delta$ with regularization parameter $\alpha > 0$, and iterative and projection methods, where the stopping index $n$ is the regularization parameter. Here, $A_h^*$ is the adjoint operator to $A_h$. Traditional regularization methods possess the property that in the case of exact data, the error $\|\bar u_r - u_*\|$ of the regularized solution $\bar u_r$, as a function of the parameter $r$ (with $r = 1/\alpha$ in the Tikhonov method and $r = n$ in iteration methods), is monotonically decreasing for $r \to \infty$. This property is no longer true for the error $\|u_r - u_*\|$ in the case of noisy data. The monotone decrease of the error $\|u_r - u_*\|$ for growing $r$-values can only be guaranteed for small $r$; typically, $\|u_r - u_*\|$ diverges for $r \to \infty$. Therefore, a rule for the proper choice of the regularization parameter $r$ is necessary.
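The following minimal numpy sketch (ours, not from the paper; the synthetic kernel, exact solution and noise level are illustrative assumptions) shows this behaviour for the Tikhonov method: the error first decreases and then grows again as the regularization parameter $\alpha$ decreases.

```python
import numpy as np

# Synthetic ill-posed problem (illustration only; the paper's experiments use
# test problems from Hansen's Regularization Toolbox).
n = 100
t = (np.arange(n) + 0.5) / n
A = np.exp(-5.0 * np.abs(t[:, None] - t[None, :])) / n  # smooth kernel, rapidly decaying singular values
u_star = np.sin(np.pi * t)                               # assumed exact solution
f = A @ u_star

rng = np.random.default_rng(0)
e = rng.standard_normal(n)
delta = 1e-3 * np.linalg.norm(f)                         # assumed noise level
f_delta = f + delta * e / np.linalg.norm(e)

# Error of the Tikhonov approximation as a function of alpha: it decreases only
# down to some optimal alpha and then grows again -- the non-monotonicity that
# makes a parameter choice rule necessary.
for alpha in 10.0 ** np.arange(-1.0, -10.0, -1.0):
    u_alpha = np.linalg.solve(alpha * np.eye(n) + A.T @ A, A.T @ f_delta)
    print(f"alpha = {alpha:.0e}   error = {np.linalg.norm(u_alpha - u_star):.3e}")
```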
In the monotone error rule (ME-rule) for choosing a proper regularization parameter, the idea consists in searching for the largest computable regularization parameter $r_{ME}$ for which we can guarantee the ME-property: the error $\|u_r - u_*\|$ is monotonically decreasing for $r \le r_{ME}$. For the continuous regularization methods, this means that $\frac{d}{dr}\|u_r - u_*\| \le 0$ for all $r \le r_{ME}$; for the iteration methods and other methods with a discrete parameter $n$, this means that
(3) $\|u_n - u_*\| \le \|u_{n-1} - u_*\| \quad \text{for all } n \le n_{ME}$.
From the derivation of the ME-rule for concrete regularization methods, one can see for which perturbations of the operator and of the right-hand side the ME-rule gives the optimal regularization parameter.
Consider now the case of the exact operator ($h = 0$). In theoretical works on ill-posed problems, often the worst-case error $\sup\{\|u_r - u_*\| : \|f_\delta - f\| \le \delta\}$ is considered: here, the exact data $f$ are fixed and the supremum is taken over the noisy data $f_\delta$. In applications, the exact data $f$ are unknown and the noisy data $f_\delta$ are known. Then, we are interested in finding the parameter which minimizes the analogue of the worst-case error $\sup\{\|u_r - \hat u\| : \|\hat f - f_\delta\| \le \delta\}$, where the given data $f_\delta$ are fixed, the supremum is taken over "candidates of exact data" $\hat f$, and $\hat u$ denotes the minimum-norm solution of $Au = \hat f$. This analogue of the worst-case error is minimized by the parameter $r_{ME}$. This can be seen from the derivation of the ME-rule for concrete regularization methods: for $r \le r_{ME}$, the error $\|u_r - \hat u\|$ is decreasing for all candidates of exact data $\hat f$ with $\|\hat f - f_\delta\| \le \delta$, but for $r > r_{ME}$ there exists a candidate $\hat f$ with $\|\hat f - f_\delta\| \le \delta$ for which the error $\|u_r - \hat u\|$ increases.
All regularization methods considered in this paper have the property that the change of the regularized solution with respect to the parameter lies in the range of the adjoint of the (transformed) noisy operator (see (13) and (20) below), because we need this property for the formulation of the ME-rule.
The ME-rule was proposed and studied for continuous methods in [Citation3–Citation8] and for iterative methods in [Citation4, Citation7, Citation8]; analogous rules were proposed in [Citation9, Citation10]. In this paper, we extend these results in the following directions: we consider the case of a noisy operator ($h > 0$), the noise level information may be given in the general form (4), and we derive the ME-rule for a more general class of regularization methods (including regularization in $L$-scales,[Citation11] also for iterative and semi-iterative methods). We also give convergence results and error estimates for the ME-rule.
In practical applications, often different regularization parameters are selected and the corresponding regularized solutions are studied online. With our ME-rule, we provide some help for the selection of suitable parameter values, since we can guarantee that for $r \le r_{ME}$ always $\|u_{r_{ME}} - u_*\| \le \|u_r - u_*\|$ holds. This information may also be used for improving other a posteriori rules R for the choice of the regularization parameter, since the error for the parameter $r_{ME}$ is less than or equal to the error for any parameter $r_R \le r_{ME}$ given by a rule R. Unfortunately, this observation does not help to improve the discrepancy principle, which gives a parameter $r_D \ge r_{ME}$, and for a smooth solution $r_D$ is often too large in regularization methods with finite qualification. In the (iterated) Tikhonov method, the ME-rule is, in contrast to the discrepancy principle, a quasi-optimal parameter choice.
Note that typically the error of the regularized solution decreases monotonically also somewhat further, up to some $r > r_{ME}$. Our numerical experiments suggest to use the parameter $r_{MEe} = c\,r_{ME}$ (or its integer part) with a certain constant $c > 1$.
The plan of this paper is as follows. In Sections 2 and 3, well-known regularization methods and parameter choice rules are introduced, using the noise level information (4). In Sections 4 and 5, the ME-rule is derived for the continuous regularization methods and for the iterative methods, respectively. In Section 6, convergence conditions and quasi-optimality results for the ME-rule are given. The paper is finished by extensive numerical experiments.
Well-known regularization methods in $L$-scales
We assume that the noise level information is given in the form
(4) $\|N(f_\delta - f)\| \le \delta, \qquad \|N(A_h - A)\| \le h$,
where $N$ is a linear injective, possibly unbounded operator in $Y$ with domain $D(N)$. We assume that $f, f_\delta \in D(N)$. The standard case (2) is the special case with $N = I$, where $I$ is the identity operator. In [Citation12], the operator $N = L^s$ was used. Here, the operator $L$ is a densely defined, unbounded, self-adjoint and strictly positive operator in the space $Y$, inducing the Hilbert scale $(Y_s)$ with norms $\|f\|_s = \|L^s f\|$. In the standard case (2), $s = 0$. The case $s < 0$ corresponds to large noise, the case $s > 0$ to small noise. The noise level information (4) with different operators $N$ (only the case $h = 0$ was considered) was used in the works [Citation13–Citation16].
In many papers, a Hilbert scale in the space $X$ is considered, with a generating operator $B$ in $X$: the equation $Au = f$ is transformed to the equation $AB^{-s}v = f$, $u = B^{-s}v$, and the regularized solutions are constructed in the form $u_r = B^{-s}g_r\big((A_hB^{-s})^*A_hB^{-s}\big)(A_hB^{-s})^*f_\delta$. Here, $r$ is the regularization parameter and the function $g_r$ satisfies the conditions
(5) $\sup_{0 \le \lambda \le a}|g_r(\lambda)| \le \gamma\,r$,
(6) $\sup_{0 \le \lambda \le a}\lambda^p\,|1 - \lambda g_r(\lambda)| \le \gamma_p\,r^{-p}, \quad 0 \le p \le p_0$.
Here, $\gamma$ and $\gamma_p$ are positive constants, $a$ is at least the norm of the operator in the argument of the function $g_r$, and the greatest value of $p_0$ for which the inequality (6) holds is called the qualification of the method. We can formulate the ME-rule for the choice of the parameter $r$ only for regularized solutions whose changes with respect to $r$ lie in the range of the adjoint of the noisy operator. Therefore, we prefer to use the scale in $Y$ (the $L$-scale) instead of the scale in $X$. The $L$-scale regularization was proposed by Egger.[Citation11, Citation12] Here, the equation $Au = f$ is transformed into the equation $L^sAu = L^sf$, assuming that $\tilde A := L^sA$ and $\tilde A_h := L^sA_h$ are bounded. If $s \le 0$, this is always satisfied; if $s > 0$, this means that $\mathcal{R}(A) \subset D(L^s)$ and $\mathcal{R}(A_h) \subset D(L^s)$. The regularized solutions are constructed in the form
(7) $u_r = g_r(\tilde A_h^*\tilde A_h)\,\tilde A_h^*\,\tilde f_\delta, \qquad \tilde f_\delta := L^sf_\delta$.
Special cases of regularization methods of this form are the following well-known methods (cf. [Citation1, Citation2, Citation17]).

M1. The Tikhonov method $u_\alpha = (\alpha I + \tilde A_h^*\tilde A_h)^{-1}\tilde A_h^*\tilde f_\delta$. Here, $g_r(\lambda) = (\lambda + 1/r)^{-1}$, $r = 1/\alpha$, $\gamma = 1$, $\gamma_p \le 1$, $p_0 = 1$.

M2. Iterative variants of the Tikhonov method. Let $\alpha > 0$, $m \in \mathbb{N}$, $u_{0,\alpha} = 0$ be the initial approximation and
$u_{k,\alpha} = (\alpha I + \tilde A_h^*\tilde A_h)^{-1}\big(\alpha u_{k-1,\alpha} + \tilde A_h^*\tilde f_\delta\big), \quad k = 1, \dots, m.$
Here, $g_r(\lambda) = \lambda^{-1}\big(1 - (1 + r\lambda)^{-m}\big)$, $r = 1/\alpha$, $\gamma = m$, $\gamma_p \le 1$, $p_0 = m$.

M3. Explicit iteration scheme (Landweber's method). Let $0 < \mu < 2/\|\tilde A_h\|^2$, $u_0 = 0$ and
$u_n = u_{n-1} + \mu\,\tilde A_h^*\big(\tilde f_\delta - \tilde A_hu_{n-1}\big), \quad n = 1, 2, \dots$
Here, $g_r(\lambda) = \lambda^{-1}\big(1 - (1 - \mu\lambda)^n\big)$, $r = n$, $\gamma = \mu$, $\gamma_p = (p/(\mu e))^p$, $p_0 = \infty$.

M4. Implicit iteration scheme. Let $\alpha > 0$ be a constant, $u_0 = 0$ and
$u_n = u_{n-1} + (\alpha I + \tilde A_h^*\tilde A_h)^{-1}\tilde A_h^*\big(\tilde f_\delta - \tilde A_hu_{n-1}\big), \quad n = 1, 2, \dots$
Here, $g_r(\lambda) = \lambda^{-1}\big(1 - (1 + \lambda/\alpha)^{-n}\big)$, $r = n/\alpha$, $\gamma = 1$, $\gamma_p = (p/e)^p$, $p_0 = \infty$.

M5. The method of asymptotical regularization: the approximation $u_r = u(r)$ solves the Cauchy problem
$u'(t) = \tilde A_h^*\big(\tilde f_\delta - \tilde A_hu(t)\big), \quad u(0) = 0, \quad t \in [0, r].$
Here, $g_r(\lambda) = \lambda^{-1}\big(1 - e^{-r\lambda}\big)$, $\gamma = 1$, $\gamma_p = (p/e)^p$, $p_0 = \infty$.
Note that the regularized solutions in the $X$-scale and $L$-scale variants of the Tikhonov method have the forms
(8) $u_\alpha = (\alpha B^{2s} + A_h^*A_h)^{-1}A_h^*f_\delta, \qquad u_\alpha = (\alpha I + A_h^*L^{2s}A_h)^{-1}A_h^*L^{2s}f_\delta,$
and they minimize the corresponding functionals
$\|A_hu - f_\delta\|^2 + \alpha\|B^su\|^2, \qquad \|L^s(A_hu - f_\delta)\|^2 + \alpha\|u\|^2.$
If the operator L acts in both spaces X and Y and the operators $A_h$ and L commute, then these regularized approximations coincide. Much more widespread of these two approximations is the first one (cf. [Citation20]). For the operators L and B, one may use, for example, differentiation and integration operators. The fractional powers $L^s$ and $B^s$ can be implemented efficiently, e.g. via FFT or multi-level techniques.
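As an illustration of the last remark, a fractional power of a symmetric positive definite discretization of $L$ can be computed via an eigendecomposition (our sketch; the Dirichlet second-difference matrix below is an assumed example, and for circulant discretizations the same diagonalization can be done by FFT):

```python
import numpy as np

def fractional_power(L, s):
    """L^s for a symmetric positive definite matrix L via eigendecomposition
    (for circulant discretizations the diagonalization can be done by FFT)."""
    w, V = np.linalg.eigh(L)
    return (V * w ** s) @ V.T

# Assumed example: scaled second-difference matrix with Dirichlet conditions
n = 100
L = ((n + 1) ** 2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
L_half = fractional_power(L, 0.5)
print(np.linalg.norm(L_half @ L_half - L))  # check: L^{1/2} L^{1/2} = L (up to round-off)
```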
Positive values of $s$ are good for delaying saturation in the Tikhonov and iterated Tikhonov methods,[Citation12] while negative values of $s$ are good for preconditioning in the iteration methods, which drastically decreases the number of iteration steps.[Citation11]
Well-known rules for choice of the regularization parameter
For the choice of the regularization parameter in any regularization method under the noise information (4), some estimate or approximation of the value $\|N(f_\delta - A_hu_*)\|$ is needed. This value may be estimated as follows:
(9) $\|N(f_\delta - A_hu_*)\| \le \|N(f_\delta - f)\| + \|N(A - A_h)u_*\| \le \delta + h\|u_*\|.$
Typically, the exact value of $\|u_*\|$ is unknown. Substitution of $\|u_*\|$ by some upper bound $\rho \ge \|u_*\|$ gives the rough estimate
$\Delta := \delta + h\rho.$
If an upper bound $\rho$ is not available, approximation of $\|u_*\|$ by $\|u_\alpha\|$ (in iteration methods by $\|u_n\|$) may be used.
In the following, we formulate some a posteriori rules for choosing the regularization parameter which use $\Delta$ and are well known in the case $N = I$, $h = 0$. We assume $\|\tilde f_\delta\| > \Delta$.
Discrepancy principle.[Citation1, Citation2, Citation17, Citation21] In the continuous regularization methods, this principle (D-principle) chooses the parameter $\alpha_D$ as the solution of the equation
(10) $\|\tilde A_hu_\alpha - \tilde f_\delta\| = C\Delta, \qquad C \ge 1.$
In iteration methods, the discrepancy principle finds $n_D$ as the first index for which $\|\tilde A_hu_n - \tilde f_\delta\| \le C\Delta$.
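A minimal sketch of the D-principle as a stopping rule (our illustration for the standard case $N = I$, $h = 0$, so $\Delta = \delta$; the Landweber method and the constant $C = 1.1$ are assumed example choices):

```python
import numpy as np

def landweber_discrepancy(A, f_delta, delta, C=1.1, mu=None, max_iter=100000):
    """Landweber iteration stopped by the discrepancy principle (rule D):
    return the first iterate with ||A u_n - f_delta|| <= C*delta.
    Sketch for the standard case N = I, h = 0; C = 1.1 is an assumed choice."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2  # guarantees 0 < mu*||A||^2 < 2
    u = np.zeros(A.shape[1])
    for n in range(max_iter):
        r = f_delta - A @ u
        if np.linalg.norm(r) <= C * delta:    # stopping index n_D
            return u, n
        u = u + mu * (A.T @ r)
    return u, max_iter
```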
Modified discrepancy principle.[Citation22] In this rule (MD-rule) for the methods M1, M2, the parameter $\alpha_{MD}$ is chosen as the solution of the equation
(11) $\|B_\alpha^{1/2}(\tilde A_hu_{m,\alpha} - \tilde f_\delta)\| = C\Delta, \qquad B_\alpha := \alpha(\alpha I + \tilde A_h\tilde A_h^*)^{-1},$
with a suitable constant $C \ge 1$ (sufficient conditions on $C$ can be formulated in terms of the constants in (5), (6)). Here, $m$ is the qualification of the regularization method, see (6). One may also use the alternative MD′-rule
(12) $\frac{\|B_\alpha^{1/2}(\tilde A_hu_{m,\alpha} - \tilde f_\delta)\|^2}{\|\tilde A_hu_{m,\alpha} - \tilde f_\delta\|} = C\Delta.$
ME-rule for the continuous regularization methods
Derivation of the ME-rule
Let us consider the ME-rule for the continuous regularization methods. The prominent example of these methods is the Tikhonov method, where the regularization parameter is traditionally denoted by $\alpha$. Therefore, in this section we use the notation $\alpha$ instead of $r$. We assume that the corresponding function $g_{1/\alpha}$ is differentiable with respect to $\alpha$.
Let us consider the regularized approximation
(13) $u_\alpha \quad \text{with} \quad \frac{d}{d\alpha}u_\alpha = -\tilde A_h^*v_\alpha,$
assuming that $v_\alpha \ne 0$. An example of a regularized approximation of this form is the approximation (7) with $r = 1/\alpha$, for which $v_\alpha = -\frac{\partial g_{1/\alpha}}{\partial\alpha}(\tilde A_h\tilde A_h^*)\,\tilde f_\delta$. The reformulation of the general idea of the ME-rule in terms of $\alpha$ means: search in the approximation (13) for the smallest computable regularization parameter $\alpha_{ME}$ for which we can guarantee that
(14) $\frac{d}{d\alpha}\|u_\alpha - u_*\|^2 \ge 0 \quad \text{for all } \alpha \ge \alpha_{ME}.$
In order to guarantee this property, we estimate the derivative of the squared error with respect to $\alpha$, using (4) and (9), as follows:
(15) $\frac{d}{d\alpha}\|u_\alpha - u_*\|^2 = 2\big(v_\alpha,\ \tilde f_\delta - \tilde A_hu_\alpha\big) - 2\big(v_\alpha,\ \tilde f_\delta - \tilde A_hu_*\big) \ge 2\|v_\alpha\|\big(d_{ME}(\alpha) - \Delta\big), \qquad d_{ME}(\alpha) := \frac{\big(v_\alpha,\ \tilde f_\delta - \tilde A_hu_\alpha\big)}{\|v_\alpha\|}.$
This estimate leads us to the following ME-rule for the continuous regularization methods.
ME-rule. Choose $\alpha_{ME}$ as the largest solution of the equation
(16) $d_{ME}(\alpha) = \Delta$
with $d_{ME}$ from (15). If $d_{ME}(\alpha) < \Delta$ for all $\alpha > 0$, take $\alpha_{ME} = \infty$ (i.e. $u_{\alpha_{ME}} = 0$). If $d_{ME}(\alpha) > \Delta$ for all $\alpha > 0$, take $\alpha_{ME} = 0$ (it corresponds to the case of attainable data $\tilde f_\delta \in \mathcal{R}(\tilde A_h)$). For the regularized approximation (7), this equation has the form
(17) $\frac{\big(v_\alpha,\ \tilde f_\delta - \tilde A_hu_\alpha\big)}{\|v_\alpha\|} = \Delta, \qquad v_\alpha = -\frac{\partial g_{1/\alpha}}{\partial\alpha}(\tilde A_h\tilde A_h^*)\,\tilde f_\delta.$
Here, the assumption $v_\alpha \ne 0$ of (13) is satisfied for the methods M1–M5 if $\tilde f_\delta \ne 0$.
Note that for $\Delta = \delta + h\|u_\alpha\|$, the ME-property is not guaranteed, but quasi-optimality is retained (see Theorem 6.3). However, this choice does not need the norm of the exact solution or a bound of it, and in practice it may give better results than $\Delta = \delta + h\rho$.
From the definition of $d_{ME}$ follows the inequality $d_{ME}(\alpha) \le \|\tilde A_hu_\alpha - \tilde f_\delta\|$; therefore, $\alpha_{ME} \ge \alpha_D$ if $C = 1$ in the discrepancy principle. The derivation of the ME-rule in (15), (16) uses only one inequality, which turns into an equality in the case
$\tilde f_\delta - \tilde A_hu_* = \Delta\,\frac{v_\alpha}{\|v_\alpha\|}.$
Therefore, in this case the ME-rule gives the optimal parameter, if the equation (16) has a unique solution.
ME-rule and modifications for the (iterated) Tikhonov regularization
Let $m \in \mathbb{N}$ be fixed. Let $r_{m,\alpha}$ denote the discrepancy of the (iterated) Tikhonov approximation $u_{m,\alpha}$ (method M2; $m = 1$ gives the Tikhonov method M1), i.e. $r_{m,\alpha} = \tilde f_\delta - \tilde A_hu_{m,\alpha}$.
Using the identities
$r_{m,\alpha} = B_\alpha^m\,\tilde f_\delta, \qquad B_\alpha = \alpha(\alpha I + \tilde A_h\tilde A_h^*)^{-1},$
we obtain
$\frac{d}{d\alpha}u_{m,\alpha} = -\tilde A_h^*v_\alpha, \qquad v_\alpha = \frac{m}{\alpha^2}\,r_{m+1,\alpha},$
and
$\|B_\alpha^{1/2}r_{m,\alpha}\|^2 = (r_{m,\alpha},\ r_{m+1,\alpha}).$
From these representations and from (15)–(17), we conclude that the functions for the MD′- and ME-rules have the form
(18) $d_{MD'}(\alpha) = \frac{(r_{m,\alpha},\ r_{m+1,\alpha})}{\|r_{m,\alpha}\|}, \qquad d_{ME}(\alpha) = \frac{(r_{m,\alpha},\ r_{m+1,\alpha})}{\|r_{m+1,\alpha}\|}.$
In the case $N = I$, $h = 0$, convergence and order optimal error estimates for the (iterated) Tikhonov method are proved in the case of the MD′-rule in [Citation5, Citation22, Citation23] and in the case of the ME-rule in [Citation3, Citation5, Citation6, Citation8, Citation24].
One may also use an analogue of the ME-rule (rule MEa) with a closely related computable function
(19) $d_{MEa}(\alpha)$,
which coincides with the ME-rule function in the standard case $N = I$, $h = 0$.
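The ME-rule for the (iterated) Tikhonov method is easy to implement from (18). The sketch below (ours; the geometric grid follows the numerical realization described in the experiments section, but the concrete grid values are assumptions) returns the first grid parameter with $d_{ME}(\alpha) \le \Delta$:

```python
import numpy as np

def tikhonov_me_parameter(A, f_delta, Delta, m=1, alpha0=1.0, q=0.95):
    """ME-rule for the m-times iterated Tikhonov method via (18), standard case
    N = I: d_ME(alpha) = (r_m, r_{m+1})/||r_{m+1}|| with r_j = B_alpha^j f_delta,
    B_alpha = alpha (alpha I + A A^T)^{-1}. Searches the geometric grid
    alpha_i = q^i alpha0 downwards and returns the first alpha with
    d_ME(alpha) <= Delta (alpha0 and q are assumed grid choices)."""
    M = A @ A.T
    I = np.eye(M.shape[0])
    alpha = alpha0
    while alpha > 1e-16:
        r = f_delta.copy()
        for _ in range(m + 1):       # r runs through B_alpha^j f_delta
            r_m = r                  # after the loop: r_m = r_m, r = r_{m+1}
            r = alpha * np.linalg.solve(alpha * I + M, r)
        d_me = (r_m @ r) / np.linalg.norm(r)
        if d_me <= Delta:
            return alpha
        alpha *= q
    return alpha

# Ordinary (m = 1) Tikhonov approximation for the chosen parameter:
# u = A.T @ np.linalg.solve(alpha * np.eye(len(f_delta)) + A @ A.T, f_delta)
```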
Consider now some modifications of the ME-rule. We know from (15) that the error typically decreases monotonically also somewhat beyond $\alpha_{ME}$. It means that typically a somewhat smaller parameter $c\,\alpha_{ME}$ with a proper constant $c < 1$ is a better parameter than $\alpha_{ME}$. For the case $N = I$, $h = 0$, we optimized the value of $c$ (and other constants below) in extensive numerical experiments and recommend the estimated parameter $\alpha_{MEe} = c\,\alpha_{ME}$ (rule MEe). Note that in the case of a rough estimate of the noise level, better rules than the ME-rule and the discrepancy principle are the rules from the recently derived family of rules,[Citation25] see also [Citation26–Citation29].
In the extrapolated Tikhonov method (cf. (8) and [Citation18, Citation19]), where the approximation is a linear combination of Tikhonov approximations, one may use the logarithmically uniform mesh of parameters $\alpha_i = q\,\alpha_{i-1}$, $i = 1, \dots, M$, with $q < 1$, and choose the parameter by the same rules as in the iterated Tikhonov method.
Note that in [Citation30], an analogue of the ME-rule in the case $N = I$, $h = 0$ was proposed for the ($m$ times iterated) Lavrentiev method, finding the parameter $\alpha$ as the solution of an equation formed from the residuals of the $m$-th and $(m+1)$-th Lavrentiev iterates. Numerical experiments in [Citation30, Citation31] showed that with this parameter choice, the error of the regularized solution was on average only 5% larger (in modifications of this rule, 3% larger) than with the optimal parameter. Note also that several other analogues of the ME-rule for the ($m$ times iterated) Lavrentiev method were proposed and analysed in [Citation32].
ME-rule for asymptotical regularization
In this regularization method, the regularized solution $u_r = u(r)$ is given as the solution of the initial value problem
$u'(t) = \tilde A_h^*\big(\tilde f_\delta - \tilde A_hu(t)\big), \qquad u(0) = 0, \qquad t \in [0, r].$
In this method, we have $\frac{d}{dr}u_r = \tilde A_h^*(\tilde f_\delta - \tilde A_hu_r)$. Using the identities
$\frac{d}{dr}\|u_r - u_*\|^2 = 2\big(\tilde f_\delta - \tilde A_hu_r,\ \tilde A_hu_r - \tilde A_hu_*\big) \le 2\|\tilde f_\delta - \tilde A_hu_r\|\big(\Delta - \|\tilde f_\delta - \tilde A_hu_r\|\big),$
we obtain for the method of asymptotical regularization
$d_{ME}(r) = \|\tilde f_\delta - \tilde A_hu_r\|.$
Hence here the ME-rule coincides with the discrepancy principle (10) with $C = 1$, for which convergence and order optimal error estimates are well known (cf. [Citation1, Citation2, Citation8, Citation17]).
ME-rule for iterative regularization methods
Derivation of the ME-rule
For solving ill-posed problems (1), we now consider iteration methods of the general form
(20) $u_{n+1} = u_n + \tilde A_h^*z_n, \qquad n = 0, 1, \dots,$
with $u_0$ given. The elements $z_n \in Y$ characterize the special iteration method. We assume that $z_n \ne 0$. Simple iteration methods of this form are the explicit iteration scheme (Landweber method) M3 with $z_n = \mu(\tilde f_\delta - \tilde A_hu_n)$, $0 < \mu < 2/\|\tilde A_h\|^2$, and the implicit iteration scheme M4 with $z_n = (\alpha I + \tilde A_h\tilde A_h^*)^{-1}(\tilde f_\delta - \tilde A_hu_n)$, where $\alpha > 0$.
In the monotone error rule (ME-rule), we search for the largest computable iteration number $n_{ME}$ for which the monotonicity property (3),
(21) $\|u_n - u_*\| \le \|u_{n-1} - u_*\| \quad \text{for all } n \le n_{ME},$
can be guaranteed. Exploiting (20), (4) and (9), we obtain
(22) $\|u_{n+1} - u_*\|^2 - \|u_n - u_*\|^2 = 2\Big(\tfrac{1}{2}\big(\tilde A_hu_n + \tilde A_hu_{n+1}\big) - \tilde f_\delta,\ z_n\Big) + 2\big(\tilde f_\delta - \tilde A_hu_*,\ z_n\big) \le 2\|z_n\|\big(\Delta - d_{ME}(n)\big), \qquad d_{ME}(n) := \frac{\Big(\tilde f_\delta - \tfrac{1}{2}\big(\tilde A_hu_n + \tilde A_hu_{n+1}\big),\ z_n\Big)}{\|z_n\|}.$
This estimate leads us to the following ME-rule for the iteration methods (20).
ME-rule. Choose $n_{ME}$ as the first index $n$ satisfying
(23) $d_{ME}(n) \le \Delta$
with $d_{ME}$ from (22).
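For a concrete instance, the following sketch (ours) applies the ME-rule (23) to the Landweber method in the standard case $N = I$; here $z_n = \mu r_n$, and $\Delta = \delta + h\rho$ must be supplied by the user:

```python
import numpy as np

def landweber_me(A, f_delta, Delta, mu=None, max_iter=100000):
    """Landweber iteration stopped by the ME-rule (23), standard case N = I:
    z_n = mu*r_n and d_ME(n) = ((r_n + r_{n+1})/2, z_n)/||z_n||."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2
    u = np.zeros(A.shape[1])
    for n in range(max_iter):
        r = f_delta - A @ u
        z = mu * r
        u_next = u + A.T @ z
        r_next = f_delta - A @ u_next
        d_me = ((r + r_next) @ z) / (2.0 * np.linalg.norm(z))
        if d_me <= Delta:   # first n with d_ME(n) <= Delta: the next step is
            return u, n     # no longer guaranteed to decrease the error
        u = u_next
    return u, max_iter
```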
The derivation of the ME-rule in (22) uses only one inequality, which turns into an equality in the case
$\tilde f_\delta - \tilde A_hu_* = \Delta\,\frac{z_n}{\|z_n\|}.$
Therefore, in this case the ME-rule gives the optimal parameter.
Note that for $\Delta = \delta + h\|u_n\|$, the ME-property is not guaranteed, but for certain methods it is known that quasi-optimality still holds (see Theorem 6.3). Moreover, typically the monotonicity of the error also holds up to somewhat larger iteration numbers; hence the index $n_{MEe} = \lfloor c\,n_{ME}\rfloor$ with $c > 1$ may also give better results than $n_{ME}$.
From the definition of $d_{ME}$ and (24) below follows the inequality $d_{ME}(n) \le \|\tilde f_\delta - \tilde A_hu_n\|$, hence $n_{ME} \le n_D$ for $C = 1$.
Due to the equality $\tilde A_hu_{n+1} = \tilde A_hu_n + \tilde A_h\tilde A_h^*z_n$, the function $d_{ME}$ may be presented also in the forms
(24) $d_{ME}(n) = \frac{(\tilde f_\delta - \tilde A_hu_n,\ z_n)}{\|z_n\|} - \frac{\|\tilde A_h^*z_n\|^2}{2\|z_n\|} = \frac{(\tilde f_\delta - \tilde A_hu_{n+1},\ z_n)}{\|z_n\|} + \frac{\|\tilde A_h^*z_n\|^2}{2\|z_n\|}.$
The inequalities (22)–(24) may be used:
- for stopping the iterations (20), if for the first time $d_{ME}(n) \le \Delta$;
- for choosing parameters of the method, minimizing the function in the error estimate (22) (for example, choosing the steplength $\lambda_n$ in (25));
- for using several iteration methods, switching from some faster method (for example, CGLS) to a slower method (for example, the Landweber method): if the inequality (23) does not guarantee a decrease of the error in the fast method, the accuracy of this approximation may be further increased by a slower method.
ME-rule in gradient-type methods
Let us consider gradient-type methods of the form (20) with $z_n = \lambda_nr_n$, where $r_n = \tilde f_\delta - \tilde A_hu_n$ is the discrepancy and $\lambda_n$ is some properly chosen stepsize:
(25) $u_{n+1} = u_n + \lambda_n\tilde A_h^*r_n.$
Special cases of method (25) are the Landweber method with $\lambda_n = \mu$, the minimal error method with $\lambda_n = \|r_n\|^2/\|\tilde A_h^*r_n\|^2$ and the steepest descent method with $\lambda_n = \|\tilde A_h^*r_n\|^2/\|\tilde A_h\tilde A_h^*r_n\|^2$.
Here, the ME-function (24) has the form
$d_{ME}(n) = \frac{(r_n + r_{n+1},\ r_n)}{2\|r_n\|} = \frac{\|r_n\|^2 + (r_n, r_{n+1})}{2\|r_n\|}.$
The estimate (22) which led us to the ME-rule can also be exploited for finding stepsizes $\lambda_n$ in the iteration methods (25) which guarantee that $u_{n+1}$ is a better approximation for $u_*$ than $u_n$. Here, the stepsize may depend not only on the data but also on the noise level $\Delta$. Exploiting $z_n = \lambda r_n$, the estimate (22) obtains the form
(26) $\|u_{n+1} - u_*\|^2 - \|u_n - u_*\|^2 \le -2\lambda\|r_n\|\big(\|r_n\| - \Delta\big) + \lambda^2\|\tilde A_h^*r_n\|^2.$
Therefore, $u_{n+1}$ is a better approximation for $u_*$ than $u_n$ if
$0 < \lambda < \frac{2\|r_n\|\big(\|r_n\| - \Delta\big)}{\|\tilde A_h^*r_n\|^2}.$
Minimizing the right-hand side of (26) with respect to $\lambda$ yields
$\lambda_n = \frac{\|r_n\|\big(\|r_n\| - \Delta\big)}{\|\tilde A_h^*r_n\|^2}.$
Substituting into (26) shows that for this stepsize, the improvement of the squared error can be estimated by
$\|u_n - u_*\|^2 - \|u_{n+1} - u_*\|^2 \ge \frac{\|r_n\|^2\big(\|r_n\| - \Delta\big)^2}{\|\tilde A_h^*r_n\|^2}.$
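These formulas translate directly into code; the sketch below (ours, standard case $N = I$) returns the stepsize and the guaranteed decrease of the squared error for one gradient step:

```python
import numpy as np

def me_stepsize(A, u, f_delta, Delta):
    """Noise-level-dependent stepsize for the gradient step
    u_{n+1} = u_n + lambda_n A^T r_n that maximizes the guaranteed decrease of
    the squared error in (26); standard case N = I, requires ||r_n|| > Delta."""
    r = f_delta - A @ u
    Ar = A.T @ r
    nr = np.linalg.norm(r)
    lam = nr * (nr - Delta) / (Ar @ Ar)
    guaranteed = (nr * (nr - Delta)) ** 2 / (Ar @ Ar)  # lower bound for ||e_n||^2 - ||e_{n+1}||^2
    return lam, guaranteed
```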
In the case $\Delta = \delta$, $h = 0$, these formulas for the stepsize and for the guaranteed improvement were given in [Citation8, Citation10]; convergence results and order optimal error bounds for stopping many gradient methods [Citation2, Citation9, Citation10, Citation17, Citation33] with the ME-rule were stated in [Citation4, Citation7–Citation10, Citation34].
ME-rule in conjugate gradient methods
Some gradient-type methods that do not fit into the class of methods (25) are the conjugate gradient methods (see [Citation9, Citation10, Citation33, Citation36]).
The methods CGLS and CGME are the conjugate gradient method applied to the equations $\tilde A_h^*\tilde A_hu = \tilde A_h^*\tilde f_\delta$ and $\tilde A_h\tilde A_h^*w = \tilde f_\delta$ with $u = \tilde A_h^*w$, respectively. In both methods, we fix the initial approximation $u_0$, the initial values $r_0 = \tilde f_\delta - \tilde A_hu_0$, $\hat p_0 = r_0$, and compute $p_0 = \tilde A_h^*r_0$. Further, we compute in CGLS for $n = 0, 1, \dots$ iteratively
$\gamma_n = \frac{\|\tilde A_h^*r_n\|^2}{\|\tilde A_hp_n\|^2}, \quad u_{n+1} = u_n + \gamma_np_n, \quad r_{n+1} = r_n - \gamma_n\tilde A_hp_n, \quad \beta_n = \frac{\|\tilde A_h^*r_{n+1}\|^2}{\|\tilde A_h^*r_n\|^2}, \quad p_{n+1} = \tilde A_h^*r_{n+1} + \beta_np_n, \quad \hat p_{n+1} = r_{n+1} + \beta_n\hat p_n.$
In the method CGME, one computes for $n = 0, 1, \dots$ iteratively the same quantities with
$\gamma_n = \frac{\|r_n\|^2}{\|p_n\|^2}, \qquad \beta_n = \frac{\|r_{n+1}\|^2}{\|r_n\|^2}.$
Both the CGLS and the CGME method have the form (20) with $z_n = \gamma_n\hat p_n$ (note that $p_n = \tilde A_h^*\hat p_n$); hence, the ME-rule (23), (24) can be used. In the case $N = I$, $h = 0$, the ME-rule for the CGLS and CGME methods was proposed and numerically tested in [Citation38].
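A sketch of CGLS with ME-stopping (ours; it follows the recursions above and carries the vector $\hat p_n$ with $p_n = \tilde A_h^*\hat p_n$, so that $z_n$ is available at no extra cost):

```python
import numpy as np

def cgls_me(A, f_delta, Delta, max_iter=200):
    """CGLS stopped by the ME-rule (23), standard case N = I. The Y-space
    vector phat satisfies p = A^T phat, so z_n = gamma_n * phat_n and
    d_ME(n) = ((r_n + r_{n+1})/2, z_n)/||z_n||."""
    u = np.zeros(A.shape[1])
    r = f_delta - A @ u
    phat = r.copy()                  # Y-space counterpart of p
    p = A.T @ r
    s_norm2 = p @ p                  # ||A^T r_n||^2
    for n in range(max_iter):
        Ap = A @ p
        gamma = s_norm2 / (Ap @ Ap)
        u_next = u + gamma * p
        r_next = r - gamma * Ap
        z = gamma * phat             # u_next - u = A^T z
        d_me = ((r + r_next) @ z) / (2.0 * np.linalg.norm(z))
        if d_me <= Delta:
            return u, n              # stop before the step that is not guaranteed
        s = A.T @ r_next
        beta = (s @ s) / s_norm2
        s_norm2 = s @ s
        p = s + beta * p
        phat = r_next + beta * phat  # keeps the invariant p = A^T phat
        u, r = u_next, r_next
    return u, max_iter
```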
ME-rule for sequence of approximations
In contrast to iterative methods, where the approximation $u_{n+1}$ on step $n+1$ is computed using the approximation $u_n$ on the previous step, one may consider an arbitrary sequence of approximations
(27) $u_0,\ u_1,\ u_2,\ \dots, \qquad u_{n+1} - u_n \in \mathcal{R}(\tilde A_h^*),$
and ask which of these approximations to take as the approximate solution.
Theorem 5.1
If in the sequence (27) the index $n_{ME}$ is chosen as the first index $n$ satisfying
(28) $d_{ME}(n) \le \Delta,$
where $d_{ME}$ is defined as in (22) with $z_n$ such that $u_{n+1} - u_n = \tilde A_h^*z_n$, then the ME-property (3) holds.
Proof
This follows from the estimate (22) with $z_n$ defined by $u_{n+1} - u_n = \tilde A_h^*z_n$.
To use the functional $d_{ME}$, the elements $z_n$ are needed. They can be found by first computing the differences $u_{n+1} - u_n$ and later solving the equations $\tilde A_h^*z_n = u_{n+1} - u_n$.
Note that the last theorem also gives an alternative way to derive the ME-rule for continuous regularization methods, using the sequence $u_{\alpha_i}$, $\alpha_{i+1} = q\,\alpha_i$, and taking the limit $q \to 1$.
The last theorem may be applied as an alternative or a complement to any other parameter choice rule for finding a regularized approximation from a sequence of the form $u_i = u_{\alpha_i}$, $\alpha_i = q^i\alpha_0$ (for example, in the (iterated) Tikhonov method). Note that the computation of the sequence of all approximate solutions for $\alpha_i = q^i\alpha_0$ with some $q < 1$ is required, for example, in the balancing principle (see [Citation11, Citation35, Citation37, Citation39]). The theorem guarantees that the stopping index $n_{ME}$ is at least as good as any index $n_B \le n_{ME}$ from an arbitrary balancing principle: $\|u_{n_{ME}} - u_*\| \le \|u_{n_B} - u_*\|$.
ME-rule for semi-iterative regularization methods
Let $p_n$ be residual polynomials orthogonal on the interval $[0, a]$ with respect to some measure (see [Citation40]). Then these polynomials satisfy a three-term recurrence
$p_n(\lambda) = p_{n-1}(\lambda) + \mu_n\big(p_{n-1}(\lambda) - p_{n-2}(\lambda)\big) - \omega_n\lambda\,p_{n-1}(\lambda)$
with some constants $\mu_n$ and $\omega_n$, where $p_0 \equiv 1$. Then, the semi-iterative approximations are found as
$u_n = u_{n-1} + \mu_n(u_{n-1} - u_{n-2}) + \omega_n\tilde A_h^*\big(\tilde f_\delta - \tilde A_hu_{n-1}\big).$
In the case $u_0 = u_{-1}$, we have $u_n - u_{n-1} = \tilde A_h^*z_n$ with $z_n = \mu_nz_{n-1} + \omega_nr_{n-1}$, $r_{n-1} = \tilde f_\delta - \tilde A_hu_{n-1}$, and
(29) $d_{ME}(n) = \frac{\Big(\tilde f_\delta - \tfrac{1}{2}\big(\tilde A_hu_{n-1} + \tilde A_hu_n\big),\ z_n\Big)}{\|z_n\|}.$
Thus, the iterations of the semi-iterative methods may be stopped by the ME-rule (23) with the function (29). Examples of semi-iterative methods (see [Citation40], where also the formulas for the coefficients $\mu_n$, $\omega_n$ are given) are the Chebyshev method of Stiefel, the Chebyshev method of Nemirovskii and Polyak, and the $\nu$-method of Brakhage, where for fixed $\nu > 0$ the coefficients $\mu_n$, $\omega_n$ are certain rational functions of $n$.
ME-rule in implicit iteration methods
Let us consider implicit iteration methods of the form
(30) $u_{n+1} = u_n + \big(\alpha_{n+1}I + \tilde A_h^*\tilde A_h\big)^{-1}\tilde A_h^*\big(\tilde f_\delta - \tilde A_hu_n\big)$
with parameters $\alpha_{n+1} > 0$. This method has the form (20) with
(31) $z_n = \big(\alpha_{n+1}I + \tilde A_h\tilde A_h^*\big)^{-1}r_n, \qquad r_n = \tilde f_\delta - \tilde A_hu_n.$
Therefore, using the equality $r_{n+1} = \alpha_{n+1}z_n$, the function $d_{ME}$ in (24) attains the form
$d_{ME}(n) = \frac{(r_n + r_{n+1},\ r_{n+1})}{2\|r_{n+1}\|}.$
In the case $N = I$, $h = 0$, the ME-rule for method (30) was studied in [Citation4, Citation7, Citation8, Citation34].
Let us now consider the question of choosing the parameter $\alpha_{n+1}$ such that the guaranteed improvement of the accuracy of the approximation $u_{n+1}$ over $u_n$ would be maximal.
Proposition 5.2
In the regularization method (30), the guaranteed improvement of accuracy in the estimate (22) is maximized for $z_n$ in (31) with $\alpha_{n+1}$ chosen as the solution $\alpha$ of the equation
(32) $\alpha\,\|z_n(\alpha)\| = \Delta, \qquad z_n(\alpha) := (\alpha I + \tilde A_h\tilde A_h^*)^{-1}r_n.$
Proof
Substitution of $z_n$ from (31) into the estimate (22) shows that the maximization of the guaranteed improvement is equivalent to the maximization of the function
$\Psi(\alpha) = 2\alpha\|z_n\|^2 + \|\tilde A_h^*z_n\|^2 - 2\Delta\|z_n\|,$
where $z_n = z_n(\alpha)$ and we used $(r_n, z_n) = \alpha\|z_n\|^2 + \|\tilde A_h^*z_n\|^2$. Differentiation with use of the formula $\frac{dz_n}{d\alpha} = -(\alpha I + \tilde A_h\tilde A_h^*)^{-1}z_n =: -w_\alpha$ gives
$\Psi'(\alpha) = 2(w_\alpha, z_n)\Big(\frac{\Delta}{\|z_n\|} - \alpha\Big).$
The equality $(r_n, w_\alpha) = \|z_n\|^2$, used in this computation, together with $(w_\alpha, z_n) > 0$ shows that $\Psi'(\alpha) = 0$ iff $\alpha$ solves the equation (32).
Theorem 5.3
In the case $\Delta < \|r_n\|$, in the regularization method (30) the guaranteed improvement of accuracy in the estimate (22) is maximized if $\alpha_{n+1}$ is chosen by the analogue of the discrepancy principle
$\|\tilde f_\delta - \tilde A_hu_{n+1}\| = \Delta.$
Proof
Indeed, in this case the equation (32) for finding $\alpha_{n+1}$ has the form $\alpha_{n+1}\|z_n\| = \Delta$, which is equivalent to $\|\tilde f_\delta - \tilde A_hu_{n+1}\| = \Delta$ due to the equality $r_{n+1} = \alpha_{n+1}z_n$.
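A single step of method (30) with this parameter choice can be sketched as follows (ours; standard case $N = I$, and it assumes $\Delta < \|r_n\|$ so that the scalar equation has a solution in the bracketing interval):

```python
import numpy as np
from scipy.optimize import brentq

def implicit_step_me(A, u, f_delta, Delta):
    """One step of u_{n+1} = u_n + (a I + A^T A)^{-1} A^T r_n with a chosen by
    the analogue of the discrepancy principle ||r_{n+1}|| = Delta (Theorem 5.3).
    Sketch for the standard case N = I, assuming Delta < ||r_n||."""
    r = f_delta - A @ u
    M = A @ A.T
    I = np.eye(M.shape[0])

    def next_residual_norm(a):
        z = np.linalg.solve(a * I + M, r)
        return np.linalg.norm(a * z)            # r_{n+1} = a z_n

    # ||r_{n+1}|| grows from (near) 0 to ||r_n|| as a runs from 0 to infinity
    a_opt = brentq(lambda a: next_residual_norm(a) - Delta, 1e-12, 1e12)
    z = np.linalg.solve(a_opt * I + M, r)
    return u + A.T @ z, a_opt
```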
Convergence and quasi-optimality for the ME-rule
In the ME-rule, considered in the previous sections, the regularization parameter in continuous regularization methods is found as the solution (the largest solution in the case of many solutions) of a certain equation $d_{ME}(\alpha) = \Delta$ with $\Delta \ge \|N(f_\delta - A_hu_*)\|$. In regularization methods where the regularization parameter is a natural number, as in iteration methods, we choose $n_{ME}$ as the first $n = 0$, $1$, ... for which $d_{ME}(n) \le \Delta$. Let us denote by $\bar u_r$ the regularized solutions for exact data $f$, $A$. We give for the ME-rule the following convergence result.
Theorem 6.1
Let $r_{ME} = r_{ME}(\Delta)$ be chosen by the ME-rule. Let the regularized solutions $u_r$, $\bar u_r$ satisfy the following properties:
(1) $\|\bar u_r - u_*\| \to 0$ as $r \to \infty$;
(2) $\|u_r - \bar u_r\| \le c\,r^{1/2}\Delta$ (for all $r > 0$);
(3) if $\Delta_k \to 0$ as $k \to \infty$ and the parameters $r_k$ remain bounded, then $\|u_{r_k} - \bar u_{r_k}\| \to 0$ as $k \to \infty$;
(4) if for some sequence $\Delta_k \to 0$ ($k \to \infty$) it holds true that the parameters $r_k = r_{ME}(\Delta_k)$ remain bounded, then $\|\bar u_{r_k} - u_*\| \to 0$ ($k \to \infty$).
Then $\|u_{r_{ME}} - u_*\| \to 0$ as $\Delta \to 0$.
Proof
For every $r$ it holds
(34) $\|u_r - u_*\| \le \|u_r - \bar u_r\| + \|\bar u_r - u_*\|.$
First, we consider the main case where $r_{ME} \to \infty$ as $\Delta \to 0$. Then, $\|\bar u_{r_{ME}} - u_*\| \to 0$ due to the assumption (1). It remains to show that $\|u_{r_{ME}} - \bar u_{r_{ME}}\| \to 0$ as $\Delta \to 0$. Let $\varphi(\Delta)$ be a monotonically increasing function giving some a priori regularization parameter guaranteeing the convergence $\|u_{\varphi(\Delta)} - u_*\| \to 0$ and $\varphi(\Delta)^{1/2}\Delta \to 0$ as $\Delta \to 0$. If $r_{ME} \le \varphi(\Delta)$, then due to assumption (2)
$\|u_{r_{ME}} - \bar u_{r_{ME}}\| \le c\,\varphi(\Delta)^{1/2}\Delta.$
If $r_{ME} > \varphi(\Delta)$, then the ME-property says that
$\|u_{r_{ME}} - u_*\| \le \|u_{\varphi(\Delta)} - u_*\|.$
Both estimates converge to 0 as $\Delta \to 0$.
Consider now the case that $r_{ME} \not\to \infty$ (there is a sequence $\Delta_k \to 0$ with bounded $r_k = r_{ME}(\Delta_k)$). Then, for this sequence both terms on the right-hand side of (34) converge to 0 as $k \to \infty$ due to the assumptions (3) and (4).
The assumptions (1)–(3) of Theorem 6.1 are general properties of regularization methods. Only the last assumption (4) uses the concrete form of the function $d_{ME}$; for the regularization method (7), it can be proved similarly to Lemma 3.2 in [Citation2, p. 66].
Consider now the case $N = L^s$. The problem $Au = f$ may be rewritten in the form $\tilde Au = \tilde f$ with the operator $\tilde A = L^sA$, and for $u_r$ in (7),
(35) $u_r = g_r(\tilde A_h^*\tilde A_h)\,\tilde A_h^*\,\tilde f_\delta, \qquad \tilde A_h = L^sA_h, \quad \tilde f_\delta = L^sf_\delta,$
the standard regularization theory applies. We get the following results.
Theorem 6.2
Let (4) hold with $N = L^s$. Let $u_r$ be defined by one of the methods M1–M5. Then, the choice of the parameter by one of the rules D, MD, ME or MEe guarantees the convergence $\|u_r - u_*\| \to 0$ as $\Delta \to 0$, and under the additional assumption $u_* = (\tilde A^*\tilde A)^{p/2}v$, $\|v\| \le \rho$, the order optimal error estimate
$\|u_r - u_*\| \le c\,\rho^{1/(p+1)}\,\Delta^{p/(p+1)}$
holds with $p \le 2p_0 - 1$ for the rule D and with $p \le 2p_0$ for the rules MD, ME and MEe.
Proof
Let $C > 1$ in the rules D, MD. Then, the results of this theorem for the discrepancy principle follow from the results in [Citation1, Citation2, Citation17], and for the MD-rule from [Citation22, Citation41]. Let now $C = 1$ in the rules D, MD. In the methods M1, M2 it holds $d_{MD}(\alpha) \le d_{ME}(\alpha) \le d_D(\alpha)$, therefore $\alpha_D \le \alpha_{ME} \le \alpha_{MD}$ ([Citation5–Citation8]); in the methods M3, M4 we analogously have $n_{MD} \le n_{ME} \le n_D$ ([Citation4, Citation7, Citation8]); in the method M5 the ME-rule coincides with the D-rule. Therefore, the results for the ME-rule follow from the results for the D-rule and the MD-rule. The same results hold for the parameter $r_{MEe}$.
Let us now consider quasi-optimality results for the regularization methods. To characterize the quality of a rule for the a posteriori regularization parameter choice, we use in the following the quasi-optimality property (see [Citation41, Citation42]). We say that a rule R for the a posteriori choice of the regularization parameter is quasi-optimal for a given regularization method if there exists a constant $c$ (not depending on $\delta$, $h$, $f_\delta$, $A_h$) such that for $r = r_R$ the error estimate
(36) $\|u_{r_R} - u_*\| \le c\,\inf_{r > 0}\,\omega(r)$
holds, where the function
(37) $\omega(r) = \|\bar u_r - u_*\| + r^{1/2}\Delta$
is an upper bound (up to a constant) of the error $\|u_r - u_*\|$. The following result holds.
Theorem 6.3
Let $\Delta = \delta + h\rho$ with $\rho \ge \|u_*\|$. The modified discrepancy principle with a suitable constant $C$ is a quasi-optimal rule for solving the problem $\tilde Au = \tilde f$ by the methods M1–M5: the inequality (36) holds, where the operators $A$, $A_h$ are replaced by the operators $\tilde A$, $\tilde A_h$ in (37). This quasi-optimality is also guaranteed for the ME-rule with $\Delta = \delta + h\rho$, but also with $\Delta = \delta + h\|u_\alpha\|$ if in (17) and (24) $\Delta$ is replaced by $C\Delta$ with $C > 1$.
Proof
Applying the results of [Citation41] (Theorem 3) (see also [Citation34] (Theorem 2, Remark 2)) to the approximation (35), we get the assertions of the theorem about the MD-rule, but also for the ME-rule with $\Delta = \delta + h\|u_\alpha\|$ if in (17) and (24) $C\Delta$ with $C > 1$ is used. In the case $\Delta = \delta + h\rho$, the replacement of the right-hand side $\Delta$ by $C\Delta$ in (17) and (24) leads to a decrease of the parameter $r_{ME}$ and, therefore, to an increase of the error bound $\omega(r_{ME})$ by at most a constant factor.
Numerical examples
Our tests are performed on the well-known set of test problems by Hansen [Citation43]: baart, deriv2, foxgood, gravity, heat, ilaplace, phillips, shaw, spikes, wing. In all tests, we used the discretization parameter $n = 100$.
For example, the test problem heat (see [Citation43, Citation44]) represents the heat conduction in a one-dimensional semi-infinite rod with thermal diffusivity 1. The initial temperature of the rod is 0 and the boundary condition at the point $x = 0$ is not known. Instead, one measures the temperature at $x = 1$ for the time $t \in [0, 1]$ and tries to determine the temperature at $x = 0$ from this measurement. (NB! In this paragraph and in the following paragraph, $t$ and $u$ have a different meaning from the rest of the paper.)
This results in the Volterra integral equation of the first kind
$\int_0^t k(t - \tau)\,u(\tau)\,d\tau = f(t), \qquad k(t) = \frac{t^{-3/2}}{2\sqrt{\pi}}\,e^{-1/(4t)},$
where $f$ is the measured temperature at $x = 1$ and $u$ is the temperature at $x = 0$.
The integral equation is discretized by means of a simple collocation method and the midpoint rule with $n = 100$ points. An exact discrete solution $u_*$ with components prescribed by the toolbox is constructed, and then the right-hand side is computed as $f = Au_*$.
Since the performance of methods and rules generally depends on the smoothness of the exact solution, we complemented the standard solutions $u_*$ of the (now discrete) test problems with smoothened solutions $(A^TA)^{p/2}u_*$ ($p > 0$) in all test problems (computing the right-hand side as $f = A(A^TA)^{p/2}u_*$). After discretization, all problems were scaled (normalized) in such a way that the Euclidean norms of the operator and of the right-hand side were 1. We used the operator $N = L^s$ with different values of $s$. We computed $f = Au_*$ and added normally distributed noise such that the relative noise level $\|f_\delta - f\|/\|f\|$ took seven values between $10^{-6}$ and $10^{-1}$. Here, $\|\cdot\|$ means the Euclidean norm.
We generated 10 noise vectors and used these vectors in all of the problems. The problems were regularized by the Tikhonov method in $L$-scales (the second formula in (8)), where the regularization parameters were chosen according to the rules that we wanted to compare. We used the $L$-scale with the operator $L$ acting as a differentiation operator. In the discrete form, we used an approximating matrix $L_n$: a scaled finite-difference approximation of the derivative with nonzero elements on the main and the neighbouring diagonal (with a boundary modification making $L_n$ injective) and zero elements otherwise.
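For reproducibility, the data preparation can be sketched as follows (our reading of the setup above; the smoothing step, scaling and noise normalization are stated there, while the function and variable names are ours):

```python
import numpy as np

def prepare_problem(A, u_star, rel_noise, p=0, rng=None):
    """Test-problem preparation (a sketch): optional smoothing of the exact
    solution by (A^T A)^(p/2), scaling of operator and right-hand side to unit
    norm, and additive normal noise of prescribed relative level."""
    rng = rng or np.random.default_rng()
    if p > 0:                                   # smoothened exact solution
        w, V = np.linalg.eigh(A.T @ A)
        u_star = (V * np.clip(w, 0, None) ** (p / 2.0)) @ (V.T @ u_star)
    f = A @ u_star
    nA, nf = np.linalg.norm(A, 2), np.linalg.norm(f)
    A, f, u_star = A / nA, f / nf, u_star * (nA / nf)  # ||A|| = ||f|| = 1, f = A u_star
    e = rng.standard_normal(len(f))
    f_delta = f + rel_noise * np.linalg.norm(f) * e / np.linalg.norm(e)
    delta = np.linalg.norm(f_delta - f)
    return A, f, f_delta, u_star, delta
```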
Since in these model equations the exact solution is known, it is possible to find the regularization parameter which gives the smallest error: $e_{opt} = \min_\alpha\|u_\alpha - u_*\|$. For every rule R, the error ratio $E_R = \|u_{\alpha_R} - u_*\|/e_{opt}$ describes the performance of the rule R on this particular problem. For better comparison of the cases with different values of $s$, in the denominator we always use the minimal error of the case $s = 0$. To compare the rules or to present their properties, the following tables show averages of these error ratios over various parameters of the data set (problems 1–10, noise levels $\delta$, noise vectors).
In Tables 1 and 2, we compare the following rules. Earlier we have introduced the discrepancy principle D (formula (10)), the rule MD′ (formula (12)), the monotone error rule ME (formula (18)) and the rule MEa as the analogue of the ME-rule (formula (19)). In the rule MEe (recall the motivation in Section 4.2), we chose the regularization parameter $\alpha_{MEe} = c\,\alpha_{ME}$ with a constant $c < 1$; the form of this constant was obtained by extensive numerical experiments with different values of the noise level and of the smoothness of the solution. In the rule MEae, we chose the parameter $\alpha_{MEae} = c\,\alpha_{MEa}$ analogously, with constants depending on the case.
Table 1 Error ratios for different rules.
Note that in the case $s = 0$, the ME-rule (unlike the rules D, MD′, MEa) allows mild underestimation of the noise level. In the columns ME08 and ME07, we present the error ratios obtained by using the noise levels $0.8\,\delta$ and $0.7\,\delta$ instead of $\delta$. Typically, ME08 and ME07 give better results than the ME-rule, but sometimes they can give large errors. If instead of the exact noise level only an approximate noise level is given, we recommend using the MEe-rule with this approximate level. In the case $s \ne 0$, all considered rules failed if the noise level was underestimated.
In Table 1, we present the results separately for large $\delta$'s (l means the larger noise levels) and small $\delta$'s (s means the smaller noise levels), for several values of the scale parameter $s$. For a given problem, in most cases the best choice is $s = 0$, in which case all of the considered rules gave similar results. But in the case $s = 0$, we could not obtain good results for smooth solutions, especially in the case of small noise levels: then the error function had a very sharp decrease at its minimum, which was hard to locate. Therefore, in the case of smooth solutions we recommend taking $s > 0$. Then, the ME and MEe rules gave good results. However, other rules failed in this case in the problems baart, foxgood and wing.
In most problems, using other values of the parameters $s$ and $C$ than the recommended ones caused slightly worse results in the rules ME and MEe and essentially worse results in the rules D and MD′. However, in the problems deriv2 and phillips in the case of a smooth solution, increasing $C$ increased the accuracy of the rules D, MD′ and MEe for all noise levels.
Table 1 shows that the discrepancy principle works well in the case of a nonsmooth solution and $s = 0$, but is not as good for smooth solutions. Typically, the MEe-rule gives the best results. We made computations besides the MD′-rule (12) also with the MD-rule (11); the results were almost the same.
Table 2 presents the results for the problem heat. In the column 'min', we present averages of the minimal errors over 10 runs for the given noise level and scale parameter. Unlike Table 1, the rules D, MD′, MEa, MEae gave almost as good results as the rule MEe. These rules were not as good as the rule MEe only in the case of a smooth solution: the rule D for all noise levels and the rules MD′, MEa, MEae for small noise levels.
Table 2 Results for the problem heat.
In both Tables 1 and 2, in the case of a smooth solution the best results were obtained by using the MEae rule.
Some remarks about the numerical realization of the algorithms. In our numerical computations, the discretization parameter was 100. This low dimension allowed us to use the advantages of the singular value decomposition of the matrix for fast computations. Note also that instead of the exact solution of the equations in the rules D, MD, MD′, ME and MEa, we made computations on the $\alpha$-sequence $\alpha_i = q^i\alpha_0$, $q < 1$, and used for the corresponding regularization parameter the first $\alpha_i$ for which the left-hand side of the equation was smaller than the right-hand side.
The computations are made with the exact operator, since an inexact operator is typically accompanied by a significant overestimation of the noise level. In this case, it is preferable to use the rules from the article [Citation25], where the exact operator was considered. We plan to extend these results to the case of a noisy operator in a forthcoming paper.
It is worthwhile to think about the possibility of choosing the operator L depending on the operator A.
Note that the ME-rule can also be used in projection methods. It was studied for the exact operator in [Citation45]. Generalizations to noisy operators are to be published in a further paper.
Conclusion
We have considered a linear ill-posed problem $Au = f$ with a bounded operator $A: X \to Y$. A noisy right-hand side $f_\delta$ and a noisy operator $A_h$ with noise levels $\|N(f_\delta - f)\| \le \delta$, $\|N(A_h - A)\| \le h$, where $N$ is a certain operator, are given. We derived the monotone error rule (ME-rule) for the parameter choice in many regularization methods in the $L$-scale, generated by powers $L^s$ of a certain operator $L$. These regularization methods include the (iterated) Tikhonov method and many iterative methods (Landweber method, steepest descent method, conjugate gradient-type methods, semi-iterative methods, implicit iteration method). It was shown for which data the ME-rule gives the optimal parameter. For a class of methods, convergence and quasi-optimality results are given. In extensive numerical experiments, the ME-rule and its modification MEe gave good results, whereby in the standard case $N = I$ these two rules also allowed moderate underestimation of the noise level (the other rules fail in the case of an underestimated noise level).
Acknowledgments
This work was started while the first author held an appointment as Visiting Professor at the University of Applied Sciences Zittau/Görlitz from June until August 2010 and was supported by DFG (Deutsche Forschungsgemeinschaft) under a cooperative bilateral research project grant. In addition, the work was supported by the Estonian Science Foundation, Grant 9120.
References
- Engl HW, Hanke M, Neubauer A. Regularization of inverse problems. Vol. 375, Mathematics and its applications. Dordrecht: Kluwer; 1996.
- Vainikko GM, Veretennikov AY. Iteration procedures in ill-posed problems. Moscow: Nauka; 1986 (in Russian).
- Tautenhahn U. On a new parameter choice rule for ill-posed inverse problems. In: Lipitakis EA, editor. HERCMA ’98: Proc. 4th Hellenic-European Conference on Computer Mathematics and its Applications. Athens: Lea Publishers; 1999. p. 118–125.
- Hämarik U. Monotonicity of error and choice of the stopping index in iterative regularization methods. In: Pedas A, editor. Differential and integral equations: theory and numerical analysis. Tartu: Estonian Mathematical Society; 1999. p. 15–30.
- Hämarik U, Raus T. On the a posteriori parameter choice in regularization methods. Proc. Estonian Acad. Sci. Phys. Math. 1999;48:133–145.
- Tautenhahn U, Hämarik U. The use of monotonicity for choosing the regularization parameter in ill-posed problems. Inverse Probl. 1999;15:1487–1505.
- Hämarik U, Tautenhahn U. On the monotone error rule for parameter choice in iterative and continuous regularization methods. BIT Numerical Math. 2001;41:1029–1038.
- Hämarik U, Tautenhahn U. On the monotone error rule for choosing the regularization parameter in ill-posed problems. In: Lavrentiev MM, Kabanikhin SI, editors. Ill-posed and non-classical problems of mathematical physics and analysis. Utrecht: VSP; 2003. p. 27–55.
- Alifanov OM, Rumyantsev SV. On the stability of iterative methods for the solution of linear ill-posed problems. Soviet Math. Dokl. 1979;20:1133–1136.
- Alifanov OM, Artyukhin EA, Rumyantsev SV. Extreme methods for solving ill-posed problems with applications to inverse heat transfer problems. New York: Begell House; 1995.
- Egger H. Y-scale regularization. SIAM J. Numer. Anal. 2008;46:419–436.
- Egger H. Regularization of inverse problems with large noise. J. Phys.: Conf. Ser. 2008; 124: 9pp.
- Morozov VA. Regularization under large noise. Comput. Math. Math. Phys. 1996;36:1175–1181.
- Eggermont PPB, LaRiccia VN, Nashed MZ. On weakly bounded noise in ill-posed problems. Inverse Prob. 2009; 25: 14 p.
- Mathé P, Tautenhahn U. Enhancing linear regularization to treat large noise. J. Inverse Ill-Posed Prob. 2011;19:859–879.
- Mathé P, Tautenhahn U. Regularization under general noise assumptions. Inverse Prob. 2011; 27:035016 (15 p).
- Vainikko GM. The discrepancy principle for a class of regularization methods. USSR Comp. Math. Math. Phys. 1982;22:1–19.
- Hämarik U, Palm R, Raus T. Use of extrapolation in regularization methods. J. Inverse Ill-Posed Prob. 2007;15:277–294.
- Hämarik U, Palm R, Raus T. Extrapolation of Tikhonov regularization method. Math. Model. Anal. 2010;15:55–68.
- Tautenhahn U. Error estimates for regularization methods in Hilbert scales. SIAM J. Numer. Anal. 1996;33:2120–2130.
- Morozov VA. On the solution of functional equations by the method of regularization. Soviet Math. Dokl. 1966;7:414–417.
- Raus T. On the discrepancy principle for solution of ill-posed problems with non-selfadjoint operators. Acta et comment. Univ. Tartuensis. 1985;715: 12–20(in Russian).
- Gfrerer H. An a posteriori parameter choice for ordinary and iterated Tikhonov regularization of ill-posed problems leading to optimal convergence rates. Math. Comp. 1987;49:507–522.
- Tautenhahn U. On order optimal regularization under general source conditions. Proc. Est. Acad. Sci. Phys. Math. 2004;53:116–123.
- Hämarik U, Palm R, Raus T. A family of rules for parameter choice in Tikhonov regularization of ill-posed problems with inexact noise level. J. Comput. Appl. Math. 2012;236:2146–2157.
- Hämarik U, Palm R, Raus T. Comparison of parameter choices in regularization algorithms in case of different information about noise level. Calcolo. 2011;48:47–59.
- Hämarik U, Kangro U, Palm R, Raus T. On parameter choice in the regularization of ill-posed problems with rough estimate of the noise level of the data. In: Numerical analysis and applied mathematics ICNAAM 2012. AIP Conference Proceedings, 1479. New York (NY): American Institute of Physics; 2012. p. 2332–2335.
- Hämarik U, Raus T. On the choice of the regularization parameter in ill-posed problems with approximately given noise level of data. J. Inverse Ill-Posed Prob. 2006;14:251–266.
- Hämarik U, Palm R, Raus T. On minimization strategies for choice of the regularization parameter in ill-posed problems. Numer. Funct. Anal. Opt. 2009;30:924–950.
- Hämarik U, Raus T, Palm R. On the analog of the monotone error rule for parameter choice in the (iterated) Lavrentiev regularization. Comput. Meth. Appl. Math. 2008;8:237–252.
- Palm R. Numerical comparison of regularization algorithms for solving ill-posed problems. University of Tartu; 2010. Available from: http://hdl.handle.net/10062/14623.
- Hämarik U, Palm R, Raus T. A family of rules for the choice of the regularization parameter in the Lavrentiev method in the case of rough estimate of the noise level of the data. J. Inverse Ill-Posed Prob. 2012;20:831–854.
- Gilyazov SF, Goldman NL. Regularization of ill-posed problems by iteration methods. Vol. 499, Mathematics and its applications. Dordrecht: Kluwer; 2000.
- Hämarik U, Raus T. On the choice of the stopping index in iteration methods for solving problems with noisy data. In: Lipitakis EA, editor. HERCMA 2001: Proceedings of the 5th Hellenic-European conference on computer mathematics and its applications, Athens, Greece, September 20–22, 2001. Athens: Lea Publishers; 2002. p. 524–529.
- Mathé P, Pereverzev SV. Geometry of linear ill-posed problems in variable Hilbert scales. Inverse Prob. 2003;19:789–803.
- Hanke M. Conjugate gradient type methods for ill-posed problems. Harlow: Longman Scientific & Technical; 1995.
- Pereverzev SV, Schock E. On the adaptive selection of the parameter in the regularization of ill-posed problems. SIAM J. Numer. Anal. 2005;43:2060–2076.
- Hämarik U, Palm R. On rules for stopping the conjugate gradient type methods in ill-posed problems. Math. Model. Anal. 2007;12:61–70.
- Hämarik U, Raus T. About the balancing principle for choice of the regularization parameter. Numer. Funct. Anal. Opt. 2009;30:951–970.
- Hanke M. Accelerated Landweber iterations for the solution of ill-posed equations. Numer. Math. 1991;60:341–373.
- Raus T, Hämarik U. On the quasi-optimal rules for the choice of the regularization parameter in case of a noisy operator. Adv. Comput. Math. 2012;36:221–233.
- Hämarik U, Palm R, Raus T. On the quasioptimal regularization parameter choices for solving ill-posed problems. J. Inverse Ill-Posed Prob. 2007;15:419–439.
- Hansen PC. Regularization tools: a Matlab package for analysis and solution of discrete ill-posed problems. Numerical Algorithms. 1994;6:1–35.
- Eldén L. The numerical solution of a non-characteristic Cauchy problem for a parabolic equation. In: Numerical treatment of inverse problems in differential and integral equations, Proceedings of an International Workshop, Heidelberg, 1982. Boston: Birkhäuser; 1983. p. 246–268.
- Hämarik U, Avi E, Ganina A. On the solution of ill-posed problems by projection methods with a posteriori choice of the discretization level. Math. Model. Anal. 2002;7:241–252.