Abstract
It is well known that the problem of numerical differentiation is ill-posed, and one requires regularization methods to approximate the solution. The commonly practiced regularization methods are (external) parameter-based, like Tikhonov regularization, and have certain inherent difficulties associated with them. In such scenarios, iterative regularization methods serve as an attractive alternative. In this paper, we propose a novel iterative regularization method in which the minimizing functional does not contain the noisy data directly, but rather a smoothed or integrated version of it. The advantage, in addition to circumventing direct use of the noisy data, is that the sequence of functions constructed during the descent process tends to avoid overfitting and hence does not corrupt the recovery significantly. To demonstrate the effectiveness of our method, we compare its numerical results with those obtained from standard regularization methods such as Tikhonov regularization and total variation.
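As a point of reference for the comparison mentioned above, the following is a minimal, illustrative sketch (not the method proposed in this paper) of Tikhonov-regularized numerical differentiation: the derivative u of noisy samples y is recovered by minimizing a residual against an integration operator plus a roughness penalty. The function name `tikhonov_derivative` and the parameter value `lam` are our own ad hoc choices for illustration.

```python
import numpy as np

def tikhonov_derivative(y, h, lam=1e-3):
    """Estimate u ~ f' from noisy samples y of f on a uniform grid of
    spacing h, by minimizing ||A u - (y - y[0])||^2 + lam * ||D u||^2.
    A is a left-Riemann cumulative-integration matrix; D is a
    first-difference matrix penalizing roughness.  lam is chosen ad hoc."""
    n = len(y)
    # Integration operator: (A u)_i = h * sum_{j<=i} u_j
    A = h * np.tril(np.ones((n, n)))
    # Forward-difference operator for the penalty term
    D = (np.eye(n, k=1)[:-1] - np.eye(n)[:-1]) / h
    rhs = y - y[0]
    # Normal equations of the regularized least-squares problem
    return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ rhs)

# Noisy samples of sin(x); the exact derivative is cos(x).
x = np.linspace(0.0, 2.0 * np.pi, 100)
rng = np.random.default_rng(0)
y = np.sin(x) + 0.01 * rng.standard_normal(100)

naive = np.gradient(y, x)  # unregularized finite differences amplify noise
reg = tikhonov_derivative(y, x[1] - x[0], lam=1e-3)

err_naive = np.max(np.abs(naive - np.cos(x)))
err_reg = np.max(np.abs(reg - np.cos(x)))
```

Shrinking `lam` toward zero recovers the noise-amplifying unregularized solution, while a too-large `lam` oversmooths; this dependence on an external parameter is precisely the difficulty that motivates the iterative approach of the paper.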
1991 Mathematics Subject Classifications:
Acknowledgments
I am very grateful to Prof. Ian Knowles for his support, encouragement and stimulating discussions throughout the preparation of this paper.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 The technicality as to how far g can be weakened so that the solution of Equation (1) is uniquely and stably recovered (finding the ϕ inversely) will be discussed later.
2 Again, the domain spaces for the functionals and G are discussed in detail later.
3 It can further be proved that it is also the first Fréchet derivative of G at ψ.
4 Hence, it is also a linear and bounded operator.
5 Again, it can be proved that it is the second Fréchet derivative of G at ψ.
6 Just for simplicity, we assume .
7 In our experiments, we considered and the termination condition as .
8 As explained in the -gradient version of the descent algorithm for G in Section 4.
9 To be consistent with , we kept and , but this can be avoided.
10 Here the test function is on , and the data set consists of 100 uniformly distributed points with .
11 The fluctuation occurs because the values of tend to decrease first (when approximating the exact g), then increase (when making the transition from g to ), and eventually decrease (when fitting the noisy , i.e. overfitting).