A new regularization approach for numerical differentiation

Pages 1747-1772 | Received 15 Oct 2019, Accepted 22 Apr 2020, Published online: 04 Jun 2020
 

Abstract

It is well known that numerical differentiation is an ill-posed problem, and regularization methods are required to approximate the solution. The commonly used regularization methods are (external) parameter-based, like Tikhonov regularization, and carry certain inherent difficulties, most notably the need to choose the regularization parameter appropriately. In such scenarios, iterative regularization methods serve as an attractive alternative. In this paper, we propose a novel iterative regularization method in which the minimizing functional does not contain the noisy data directly, but rather a smoothed or integrated version of it. The advantage, in addition to avoiding direct use of the noisy data, is that the sequence of functions constructed during the descent process tends to avoid overfitting and hence does not corrupt the recovery significantly. To demonstrate the effectiveness of our method, we compare its numerical results with those obtained from standard regularization methods such as Tikhonov regularization and total variation.
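As a point of reference for the comparisons mentioned above, the following minimal Python sketch shows the baseline approach: differentiating noisy data by classical Tikhonov regularization of the integration operator. This is illustrative only, not the paper's method; the grid, test function, noise level, and parameter alpha are all hypothetical choices.

```python
import numpy as np

# Illustrative baseline (not the paper's method): recover phi ~ g' from
# noisy samples of g by Tikhonov regularization of the integration
# operator T, where (T phi)(x_i) ~ integral_a^{x_i} phi(t) dt.
n = 100
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
g = np.sin(2.0 * np.pi * x)                    # hypothetical smooth test function
g_delta = g + 0.01 * np.random.randn(n)        # noisy data g^delta

# Cumulative left-endpoint quadrature: row i sums phi[0..i-1] * h.
T = h * np.tril(np.ones((n, n)), k=-1)

alpha = 1e-4  # hypothetical parameter; in practice tuned, e.g. by the
              # discrepancy principle
phi = np.linalg.solve(T.T @ T + alpha * np.eye(n),
                      T.T @ (g_delta - g_delta[0]))
# phi approximates g'(x) = 2*pi*cos(2*pi*x); by contrast, the naive
# difference quotient np.diff(g_delta) / h amplifies the noise by ~1/h.
```

The identity penalty is the simplest choice here; smoother reconstructions would replace np.eye(n) with a difference operator penalizing ||ϕ'|| or ||ϕ''||.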

Acknowledgments

I am very grateful to Prof. Ian Knowles for his support, encouragement and stimulating discussions throughout the preparation of this paper.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 The technical question of how far g can be weakened so that the solution of equation (1) is uniquely and stably recovered (i.e. so that ϕ can be found by inverting the equation) will be discussed later.

2 Again, the domain spaces of the functionals G_1, G_2 and G are discussed in detail later.

3 It can further be proved that it is also the first Fréchet derivative of G at ψ.

4 Hence T_D is also a linear and bounded operator in L²[a,b].

5 Again, it can be proved that it is the second Fréchet derivative of G at ψ.

6 Just for simplicity, we assume g(a) = g̃(a).

7 In our experiments, we considered τ = 1 and the termination condition ||Tψ_m − g^δ||_{L²} < δ.
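For concreteness, here is a generic sketch of such a stopping rule: a Landweber-type descent on the misfit ||Tψ − g^δ||² that terminates by the discrepancy principle. The operator T, the step size, and the use of the plain Euclidean norm as a stand-in for the L² norm are assumptions for illustration; the paper's actual descent minimizes a different functional.

```python
import numpy as np

def descend_until_discrepancy(T, g_delta, delta, tau=1.0, max_iter=100_000):
    """Landweber-type descent stopped by the discrepancy principle.

    A generic sketch of the stopping rule in Note 7, not the paper's
    specific functional: iterate psi_{m+1} = psi_m - beta * gradient,
    and terminate once ||T psi_m - g_delta|| < tau * delta.
    """
    psi = np.zeros(T.shape[1])
    beta = 1.0 / np.linalg.norm(T, 2) ** 2          # safe step size for descent
    for m in range(max_iter):
        residual = T @ psi - g_delta
        if np.linalg.norm(residual) < tau * delta:  # discrepancy principle
            return psi, m
        psi -= beta * (T.T @ residual)  # gradient of 0.5 * ||T psi - g_delta||^2
    return psi, max_iter
```

With τ = 1, as in the experiments, the loop stops as soon as the residual drops below the noise level δ, which prevents the iterates from fitting the noise.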

8 As explained in the L²-gradient version of the descent algorithm for G in Section 4.

9 To be consistent with Note 6, we kept g̃(a) = g(a) and g̃(b) = g(b), though this can be avoided.

10 Here the test function is g(x) = |x − 0.5| on [0,1] and the data set consists of 100 uniformly distributed points with σ = 0.01.
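This setup can be reproduced in a few lines; the additive Gaussian noise model and the fixed seed are my assumptions, since the note specifies only the point count and σ.

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed: an assumption, for reproducibility
x = np.linspace(0.0, 1.0, 100)          # 100 uniformly distributed points on [0,1]
g = np.abs(x - 0.5)                     # test function g(x) = |x - 0.5|
g_tilde = g + rng.normal(scale=0.01, size=x.size)   # noisy data with sigma = 0.01
```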

11 The fluctuation occurs because ||g̃ − g_m||_{L²} tends to decrease first (while g_m approximates the exact g), then increase (during the transition from g to g̃), and eventually decrease again (while fitting the noisy g̃, i.e. overfitting).
