
Using Lagrange principle for solving two-dimensional integral equation with a positive kernel

Pages 811-831 | Received 09 Feb 2015, Accepted 28 Jun 2015, Published online: 18 Sep 2015

Abstract

This article is devoted to the application of the Lagrange principle to the inverse problem for a two-dimensional integral equation of the first kind with a positive kernel. To tackle the ill-posedness of this problem, a new numerical method is developed. The optimality and regularization properties of this method are proved. Moreover, a pseudo-optimal error of the proposed method is considered. The efficiency and applicability of the method are demonstrated on a numerical example of an image deblurring problem with noisy data.


1. Introduction

Many inverse problems of mathematical physics are posed as linear integral equations.[Citation1] There are many known methods for solving equations of this type; their description can be found in several monographs, for instance in [Citation1–Citation3]. In this paper, the image deblurring problem of solving a two-dimensional integral equation of the first kind with a positive kernel is studied. This problem arises in atmospheric optics, an application described in detail in [Citation4]. Obviously, this problem belongs to the class of ill-posed problems. Mathematical techniques known as regularization methods have been developed to deal with this ill-posedness. These methods, such as variational regularization methods and iterative regularization methods,[Citation5–Citation10] can be used to solve the inverse problem for a two-dimensional integral equation of the first kind. In many cases, however, it is possible to find a set of a priori constraints that turns the problem into a well-posed one. For instance, the well-known problem of inversion of a continuous injective operator on a compact set of a priori constraints has this property.[Citation6] Several concrete compact sets of solution functions of special form, such as sets of bounded monotone functions, bounded convex functions, etc., were studied in [Citation11–Citation15]. In this paper, instead of compactness we use the balanced property of the solution set, which helps us to estimate the error of the proposed method.

By the definition of regularization methods,[Citation16] once one specific regularization method has been found, there exist many others, each with its own error of the obtained approximate solution. It is desirable that the method have as small an error as possible. For this purpose, the problem of reconstructing the solution of a two-dimensional integral equation of the first kind is reformulated as an optimal recovery problem.[Citation17–Citation24] In this paper, using optimal recovery theory and the Lagrange principle, we propose a new numerical method for solving a two-dimensional integral equation of the first kind with a positive kernel on a fixed grid. The regularization and optimality properties of this method are proved. The corresponding pseudo-optimal error of the method is also calculated. At the end of the paper, a numerical example of the image deblurring problem with noisy data is solved.

2. Statement of a problem

Denote by $X=[L_x,R_x]$, $Y=[L_y,R_y]$, $S=[L_s,R_s]$ and $T=[L_t,R_t]$ segments of $\mathbb{R}$. Let $Z:=C(X\times Y)$ be the space of real continuous functions and $U:=L_2(S\times T)$ be the space of square-integrable functions. Let $A\colon Z\to U$ be a linear integral operator with a positive kernel $K(x,y,s,t)\in C(X\times Y\times S\times T)$ that is continuous in all arguments. Our problem is to find a so-called (pseudo-)optimal regularized solution $z(x,y)\in Z$ of the following integral equation:

$$Az:=\int_{X\times Y}K(x,y,s,t)\,z(x,y)\,dx\,dy=u(s,t),\quad z\in M_r\subset Z,\ u\in U,\tag{1}$$

where

$$M_r:=\big\{z\in Z:\ \|z\|_Z\le r;\ \forall P_1,P_2\in X\times Y:\ |z(P_1)-z(P_2)|\le L_0\|P_1-P_2\|_{X\times Y}\big\}$$

is the set of a priori information about the exact solution. Here the bound $r$ on the solution and the Lipschitz constant $L_0$ of the solution function are two given positive numbers, and for $P=(x,y)\in X\times Y$: $\|P\|_{X\times Y}=\sqrt{x^2+y^2}$.

In optics, the function $z(x,y)$ is called a light source. The kernel $K(x,y,s,t)$ is known as a point spread function. The right-hand side $u(s,t)$ is called a (blurred) continuous image. The integral equation of the first kind (1) can be used to model not only the diffraction of light from sources as it propagates through the atmosphere, but also the distortion due to imperfections in optical devices such as telescopes and microscopes.[Citation4,Citation25,Citation26]

Denote by $A_h$ the integral operator with a positive continuous kernel $K_h(x,y,s,t)$. Suppose that instead of the exact kernel $K(x,y,s,t)$ and right-hand side $\bar u$ (where $\bar u:=A\bar z$ and $\bar z$ is the unknown exact solution of problem (1)) we are given approximate admissible data $K_h(x,y,s,t)$ and $u$ such that

$$\|K-K_h\|_{C(X\times Y\times S\times T)}\le h,\qquad \|\Pi_{I,J}\bar u-u\|\le\delta,\tag{2}$$

where $h,\delta$ are error levels, $u=[u_{i,j}]\in\mathbb{R}^I\otimes\mathbb{R}^J$ is an $I\times J$ matrix, which will be called the data of measurements, $\|\cdot\|$ is the induced Euclidean norm, i.e. $\|u\|^2=\sum_{i,j}u_{i,j}^2$, and $\Pi_{I,J}\colon U\to\mathbb{R}^I\otimes\mathbb{R}^J$ is any linear bounded projection operator. Using the above information, the following inequalities hold:

$$\begin{aligned}\|\Pi_{I,J}A_h\bar z-u\|&\le\|\Pi_{I,J}A_h\bar z-\Pi_{I,J}\bar u\|+\|\Pi_{I,J}\bar u-u\|\le\|\Pi_{I,J}\|_{U\to\mathbb{R}^I\otimes\mathbb{R}^J}\,\|A_h\bar z-A\bar z\|_U+\delta\\&\le\|\Pi_{I,J}\|_{U\to\mathbb{R}^I\otimes\mathbb{R}^J}\,\Big\|\int_{X\times Y}|K(x,y,s,t)-K_h(x,y,s,t)|\cdot|\bar z(x,y)|\,dx\,dy\Big\|_U+\delta\\&\le\|\Pi_{I,J}\|_{U\to\mathbb{R}^I\otimes\mathbb{R}^J}\,(R_x-L_x)(R_y-L_y)\,r\,h+\delta=:\varepsilon.\tag{3}\end{aligned}$$

The number $\varepsilon$ will be called the predictive error of problem (1). Denote the pair of errors $\eta:=(h,\delta)$. Obviously, $\varepsilon\to0$ as $\eta\to0$.
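For concreteness, the computation of $\varepsilon$ in (3) is a one-line formula. A minimal Python/NumPy sketch (the function name is ours; the projection norm is an assumption and equals 1, for example, for an orthogonal sampling projection):

```python
import numpy as np

def predictive_error(h, delta, r, Lx, Rx, Ly, Ry, proj_norm=1.0):
    """Predictive error from estimate (3):
    eps = ||Pi_{I,J}|| * (Rx - Lx) * (Ry - Ly) * r * h + delta."""
    return proj_norm * (Rx - Lx) * (Ry - Ly) * r * h + delta

# With the data of Section 7: X = Y = [-0.5, 0.5], r = 2.5, h = delta = 0.01.
print(predictive_error(0.01, 0.01, 2.5, -0.5, 0.5, -0.5, 0.5))  # -> 0.035
```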

In practice, finding an analytical solution of Equation (1) is impossible; instead, we compute a discrete one by numerical methods. That is to say, for problem (1), instead of the solution function $z(x,y)\in Z(=C(X\times Y))$ we consider the values of $z(x,y)$ at some fixed points; i.e. we need to find an approximation $\mathbf{z}$ of the matrix $\{z(x_m,y_n)\}_{m,n=1}^{M,N}$ (where $\{x_m\}_{m=1}^M$ and $\{y_n\}_{n=1}^N$ are corresponding grids on $X$ and $Y$; i.e. $L_x=x_1<x_2<\dots<x_M=R_x$ and $L_y=y_1<y_2<\dots<y_N=R_y$). Then, from the obtained approximate matrix $\mathbf{z}$ we construct a continuous function $z_{\eta,M,N}(x,y)$ by an interpolation rule and prove that $\|z_{\eta,M,N}(x,y)-z(x,y)\|_Z\to0$ as $\eta\to0$ and $M,N\to+\infty$.

First of all, let us consider a method of recovery of the value of the function $z(x,y)$ at a point $(x_m,y_n)$. Define the set $\Theta$ of all a priori information of problem (1):

$$\Theta:=\{(z,u):\ z\in M_r,\ \|\Pi_{I,J}A_hz-u\|\le\varepsilon\},\tag{4}$$

where the predictive error $\varepsilon(=\varepsilon(h,\delta))$ is defined by (3).

Definition 2.1

A method of recovery of the value of the function $z(x,y)$ at a point $(x_m,y_n)$ is any functional $\varphi\colon\mathbb{R}^I\otimes\mathbb{R}^J\to\mathbb{R}$. The error of the recovery method $\varphi$ is given by

$$\Delta_0((x_m,y_n),\Theta,\varphi):=\sup_{(z,u)\in\Theta}|z(x_m,y_n)-\varphi(u)|.\tag{5}$$

Definition 2.2

The optimal error of recovery of the value of the function $z(x,y)$ at the point $(x_m,y_n)$ is given by

$$\Delta_1((x_m,y_n),\Theta):=\inf_{\varphi}\Delta_0((x_m,y_n),\Theta,\varphi),\tag{6}$$

where the infimum is taken over all functionals (possibly nonlinear) $\varphi\colon\mathbb{R}^I\otimes\mathbb{R}^J\to\mathbb{R}$. Any functional $\hat\varphi$ for which the infimum in (6) is attained is called an optimal recovery method of the function $z(x,y)$ at the point $(x_m,y_n)$.

Now, let us discuss how to find a functional $\hat\varphi$ (a method of recovery of the value of the function $z(x,y)$ at the point $(x_m,y_n)$). Note that the set $M_r$ is a bounded, convex and balanced set in the space $Z$; then by Smolyak's theorem [Citation19,Citation27] among all optimal recovery methods there exists a linear one. Moreover, by the Riesz–Fréchet representation theorem, problems (5) and (6) can be rewritten as the following problem:

$$\inf_{\lambda\in\mathbb{R}^I\otimes\mathbb{R}^J}\ \sup_{(z,u)\in\Theta}|z(x_m,y_n)-\langle\lambda,u\rangle|,\tag{7}$$

where $\langle\cdot,\cdot\rangle$ denotes the Euclidean inner product.

3. Reduction to a finite-dimensional problem and a method for its solution

In order to solve the optimal recovery problem in a finite-dimensional space instead of problem (7), which is posed in the infinite-dimensional space $C(X\times Y)$, we introduce a projection operator $Q_{M,N}\colon Z\to\mathbb{R}^M\otimes\mathbb{R}^N$ and a bilinear interpolation operator $\bar Q_{M,N}\colon\mathbb{R}^M\otimes\mathbb{R}^N\to Z$ such that

$$Q_{M,N}z:=\sum_{m,n=1}^{M,N}z_{m,n}\,e_m\otimes e_n,\tag{8}$$

where $\{e_m\}_{m=1}^M$ ($\{e_n\}_{n=1}^N$) is the standard basis in $\mathbb{R}^M$ ($\mathbb{R}^N$) and the coefficient $z_{m,n}$ is the value of the function $z(x,y)$ at the point $(x_m,y_n)$, i.e. $z_{m,n}=z(x_m,y_n)$;

$$\begin{aligned}\bar Q_{M,N}(\mathbf{z})(x,y):=\frac{1}{(x_{m+1}-x_m)(y_{n+1}-y_n)}\big\{&z_{m,n}(x_{m+1}-x)(y_{n+1}-y)+z_{m+1,n}(x-x_m)(y_{n+1}-y)\\&+z_{m,n+1}(x_{m+1}-x)(y-y_n)+z_{m+1,n+1}(x-x_m)(y-y_n)\big\}\tag{9}\end{aligned}$$

for $x_m\le x<x_{m+1}$ and $y_n\le y<y_{n+1}$. Here $\mathbf{z}=[z_{m,n}]\in\mathbb{R}^M\otimes\mathbb{R}^N$ is an $M\times N$ matrix. For simplicity, we use a uniform grid ($h_x=x_{m+1}-x_m$ and $h_y=y_{n+1}-y_n$ for every index $(m,n)$) and the Tchebycheff norm for the matrix $\mathbf{z}$ ($\|\mathbf{z}\|=\max_{i,j}|z_{i,j}|$).
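The interpolation rule (9) is standard bilinear interpolation on a grid cell. A minimal Python/NumPy sketch, assuming nodal values $z_{m,n}=z(x_m,y_n)$ are stored in an $M\times N$ array (the function name is ours):

```python
import numpy as np

def bilinear_interp(z, x_grid, y_grid, x, y):
    """Bilinear interpolation operator Q-bar_{M,N} from (9).

    z[m, n] = z(x_m, y_n); works for scalar or array-valued query points."""
    m = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)
    n = np.clip(np.searchsorted(y_grid, y) - 1, 0, len(y_grid) - 2)
    tx = (x - x_grid[m]) / (x_grid[m + 1] - x_grid[m])  # local coordinate in [0, 1]
    ty = (y - y_grid[n]) / (y_grid[n + 1] - y_grid[n])
    return (z[m, n] * (1 - tx) * (1 - ty)
            + z[m + 1, n] * tx * (1 - ty)
            + z[m, n + 1] * (1 - tx) * ty
            + z[m + 1, n + 1] * tx * ty)
```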

Define a set of discrete a priori information $\hat\Theta$, a finite-dimensional analogue of the set $\Theta$, as

$$\hat\Theta:=\big\{(\mathbf{z},u):\ |z_{m,n}|\le r,\ m=\overline{1,M},\ n=\overline{1,N};\ |z_{m,n}-z_{m+1,n}|\le L_0h_x,\ m=\overline{1,M-1},\ n=\overline{1,N};\ |z_{m,n}-z_{m,n+1}|\le L_0h_y,\ m=\overline{1,M},\ n=\overline{1,N-1};\ \|\Pi_{I,J}A_h\bar Q_{M,N}\mathbf{z}-u\|\le\varepsilon\big\}.\tag{10}$$

For problem (7), we consider an auxiliary problem of optimal recovery of the real number $z_{m,n}$ in the finite-dimensional space $\mathbb{R}^M\otimes\mathbb{R}^N$:

$$\inf_{\lambda\in\mathbb{R}^I\otimes\mathbb{R}^J}\ \sup_{(\mathbf{z},u)\in\hat\Theta}|z_{m,n}-\langle\lambda,u\rangle|.\tag{11}$$

Define the approximation error $\Delta_{i,j}^{M,N}$ of problem (11) as

$$\Delta_{i,j}^{M,N}=\sup_{z\in M_r}\big|\big[\Pi_{I,J}A_h(z-\bar Q_{M,N}Q_{M,N}z)\big]_{i,j}\big|.\tag{12}$$

In a similar way to the method described in [Citation28], it is not difficult to prove that for any fixed indices $i$ and $j$ the sequence $\Delta_{i,j}^{M,N}\to0$ as $M,N\to+\infty$. Hence, instead of the infinite-dimensional problem (7) we may consider only its finite-dimensional analogue, problem (11). The extreme value of problem (11) is as close as desired to the extreme value of the initial problem (7) if the numbers $M,N$ are sufficiently large.[Citation23]

Now, let us solve the finite-dimensional problem (11). First, define the operators $\mathrm{vec}(\cdot)$ and $\mathrm{array}(\cdot)$, which help us put problem (11) into matrix form.

Definition 3.1

Given a matrix $u\in\mathbb{R}^I\otimes\mathbb{R}^J$, one can obtain a vector $\tilde u\in\mathbb{R}^{IJ}$ by stacking the columns of $u$. This defines a linear operator $\mathrm{vec}\colon\mathbb{R}^I\otimes\mathbb{R}^J\to\mathbb{R}^{IJ}$,

$$\mathrm{vec}(u):=(u_{1,1},u_{2,1},\dots,u_{I,1},u_{1,2},u_{2,2},\dots,u_{I,2},\dots,u_{1,J},u_{2,J},\dots,u_{I,J})^T;\qquad\tilde u:=\mathrm{vec}(u),\quad\tilde u_q=u_{i,j},\ q=(j-1)\cdot I+i.$$

This corresponds to lexicographical column ordering of the components of the matrix $u$. The symbol $\mathrm{array}$ denotes the inverse of the $\mathrm{vec}$ operator; that is, the following equalities hold:

$$\mathrm{array}(\mathrm{vec}(u))=u,\qquad\mathrm{vec}(\mathrm{array}(\tilde u))=\tilde u,$$

whenever $u\in\mathbb{R}^I\otimes\mathbb{R}^J$ and $\tilde u\in\mathbb{R}^{IJ}$.

Remark 1

Similarly, we can define an operator $\mathrm{vec}_0\colon\mathbb{R}^M\otimes\mathbb{R}^N\to\mathbb{R}^{MN}$ such that

$$\mathrm{vec}_0(\mathbf{z}):=(z_{1,1},z_{2,1},\dots,z_{M,1},z_{1,2},z_{2,2},\dots,z_{M,2},\dots,z_{1,N},z_{2,N},\dots,z_{M,N})^T;\qquad\tilde z:=\mathrm{vec}_0(\mathbf{z}),\quad\tilde z_p=z_{m,n},\ p=(n-1)\cdot M+m.$$

Denote by $\mathrm{array}_0$ the inverse of the $\mathrm{vec}_0$ operator. Let $\tilde z=\mathrm{vec}_0(\mathbf{z})$ and $\tilde\lambda=\mathrm{vec}(\lambda)$.
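In NumPy, $\mathrm{vec}$ and $\mathrm{array}$ correspond to column-major (Fortran-order) reshapes, so a sketch of Definition 3.1 and Remark 1 takes only a few lines (with 0-based indices the formula reads $q=j\cdot I+i$):

```python
import numpy as np

def vec(u):
    """Stack the columns of u (column-major order), as in Definition 3.1."""
    return u.reshape(-1, order="F")

def array(u_tilde, I, J):
    """Inverse of vec: restore the I x J matrix from the stacked vector."""
    return u_tilde.reshape((I, J), order="F")

u = np.arange(6).reshape(2, 3)                  # I = 2, J = 3
assert np.array_equal(array(vec(u), 2, 3), u)   # array(vec(u)) = u
```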

Note that $\|u\|^2=\|\tilde u\|^2=\tilde u^T\tilde u$; hence, using the operators $\mathrm{vec}$ and $\mathrm{vec}_0$, we can put the set $\hat\Theta$ into the following matrix form:

$$\tilde\Theta:=\big\{(\tilde z,\tilde u)\in\mathbb{R}^{MN}\times\mathbb{R}^{IJ}:\ |\tilde z_p|\le r,\ p=\overline{1,MN};\ |\tilde z_p-\tilde z_{p+1}|\le L_0h_x,\ p=\overline{1,MN},\ p\not\equiv0\ (\mathrm{mod}\ M);\ |\tilde z_p-\tilde z_{p+M}|\le L_0h_y,\ p=\overline{1,M(N-1)};\ \|\hat A\tilde z-\tilde u\|\le\varepsilon\big\},\tag{13}$$

where the $IJ\times MN$ matrix $\hat A$ can be obtained by any numerical integration method.[Citation29,Citation30] In this article, we use the trapezoidal rule.
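A sketch of the assembly of $\hat A$ by the trapezoidal rule on uniform grids, consistent with the column-major ordering of $\mathrm{vec}$ and $\mathrm{vec}_0$ (Python/NumPy; the function names are ours):

```python
import numpy as np

def assemble_A(Kh, x, y, s, t):
    """Discretize the integral operator A_h into the IJ x MN matrix A-hat
    by the trapezoidal rule; Kh(x, y, s, t) is the (inexact) kernel.

    Column index p runs over the solution grid (x_m, y_n), row index q over
    the data grid (s_i, t_j), both in column-major order."""
    def trap_weights(g):                         # uniform grid assumed
        w = np.full(len(g), g[1] - g[0])
        w[0] = w[-1] = 0.5 * (g[1] - g[0])
        return w
    wx, wy = trap_weights(x), trap_weights(y)
    X, Y, S, T = np.meshgrid(x, y, s, t, indexing="ij")
    K = Kh(X, Y, S, T)                           # shape (M, N, I, J)
    W = wx[:, None] * wy[None, :]                # quadrature weights w_m * w_n
    A = (K * W[:, :, None, None]).reshape(
        len(x) * len(y), len(s) * len(t), order="F")
    return A.T                                   # IJ x MN
```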

Now, instead of problem (11), we consider the following problem:

$$\inf_{\tilde\lambda_p\in\mathbb{R}^{IJ}}\ \sup_{(\tilde z,\tilde u)\in\tilde\Theta}|\tilde z_p-\tilde\lambda_p^T\tilde u|.\tag{14}$$

Connect this problem with another extremal problem, called the associated problem to (14):

$$\sup_{\tilde z\in Z_0}\tilde z_p,\tag{15}$$

where

$$Z_0:=\big\{\tilde z\in\mathbb{R}^{MN}:\ |\tilde z_p|\le r,\ p=\overline{1,MN};\ |\tilde z_p-\tilde z_{p+1}|\le L_0h_x,\ p=\overline{1,MN},\ p\not\equiv0\ (\mathrm{mod}\ M);\ |\tilde z_p-\tilde z_{p+M}|\le L_0h_y,\ p=\overline{1,M(N-1)};\ \|\hat A\tilde z\|\le\varepsilon\big\}.\tag{16}$$

For a fixed error level $\varepsilon$, the optimization problem (15) always has a solution, since $Z_0(\varepsilon)$ is a compact convex set in $\mathbb{R}^{MN}$; in particular, its extreme value is uniquely defined.
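For small grids, the associated problem (15) is a convex program with box constraints, linear (Lipschitz) constraints and one Euclidean-ball constraint, and can be handed to a general-purpose solver. A minimal sketch using SciPy's SLSQP (function name is ours; for the 256×256 grids of Section 7 a dedicated convex solver would be needed):

```python
import numpy as np
from scipy.optimize import minimize

def solve_associated(A, eps, r, L0, hx, hy, M, N, p):
    """Associated problem (15): maximize z_p over the set Z_0 of (16).

    Column-major ordering p = m + M*n (0-based) is assumed, matching vec0."""
    MN = M * N
    pairs_x = [(q, q + 1) for q in range(MN) if (q + 1) % M != 0]  # x-neighbours
    pairs_y = [(q, q + M) for q in range(MN - M)]                  # y-neighbours

    def lipschitz(z):
        dx = np.array([z[a] - z[b] for a, b in pairs_x])
        dy = np.array([z[a] - z[b] for a, b in pairs_y])
        # |dx| <= L0*hx, |dy| <= L0*hy written as linear inequalities >= 0
        return np.concatenate([L0 * hx - dx, L0 * hx + dx,
                               L0 * hy - dy, L0 * hy + dy])

    cons = [{"type": "ineq", "fun": lipschitz},
            {"type": "ineq", "fun": lambda z: eps**2 - (A @ z) @ (A @ z)}]
    res = minimize(lambda z: -z[p], np.zeros(MN), method="SLSQP",
                   bounds=[(-r, r)] * MN, constraints=cons)
    return res.x, -res.fun        # a maximizer and the supremum z-hat_p
```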

Now, consider the sensitivity of the optimization problem (15) with respect to $\varepsilon$. The theory of point-to-set maps [Citation31] has been used to analyse this type of optimization problem. Hogan [Citation32,Citation33] provided an excellent development of those properties of point-to-set maps that are especially useful in deriving such results. The following theorem covers the case in which the parameter of the problem appears in the constraints.

Theorem 3.2

[Citation33,Citation34] Suppose that $M$ is a compact convex set, $f\colon\mathbb{R}^n\to\mathbb{R}$ (not necessarily convex) is continuous on $M$, and the components of $g\colon\mathbb{R}^n\to\mathbb{R}^m$ are lower semicontinuous and strictly convex on $M$. Then the perturbation function associated with the convex programme

$$\min_{x\in M,\ g_i(x)\le0,\ i=\overline{1,m}}f(x),$$

defined on $\mathbb{R}^m$ as

$$f^*(\varepsilon):=\inf_{x\in M,\ g_i(x)\le\varepsilon_i,\ i=\overline{1,m}}f(x),$$

is continuous on its effective domain in $\mathbb{R}^m$.

Remark 2

(1)

By the above theorem it is easy to show that the extreme value $\tilde z_p(\varepsilon)$ of (15) depends continuously on $\varepsilon$.

(2)

Obviously, $Z_0(\varepsilon_1)\subset Z_0(\varepsilon_2)$ for $\varepsilon_1\le\varepsilon_2$, which implies $\tilde z_p(\varepsilon_1)\le\tilde z_p(\varepsilon_2)$ for all $p=\overline{1,MN}$. Hence $\tilde z_p(\varepsilon)$ decreases monotonically to $\tilde z_p(0)$ as $\varepsilon\to0^+$.

At the end of this section, let us formulate a theorem,[Citation19,Citation28] which is basic for the algorithm proposed in this article.

Theorem 3.3

(Lagrange Principle) Denote by

$$L\colon(\mathbb{R}^{MN}\times\mathbb{R}^{IJ})\times\mathbb{R}^{IJ}\to\mathbb{R},\qquad L((\tilde z,\tilde u),\tilde\lambda_p):=-\tilde z_p+\tilde\lambda_p^T\tilde u$$

the Lagrange function of problem (14). A vector $\tilde\lambda_p\in\mathbb{R}^{IJ}$ will be called a Lagrange multiplier of problem (14). If an element $\hat{\tilde z}$ is an admissible point of problem (15), i.e. $\hat{\tilde z}\in Z_0$, then

(1)

the following two conditions are equivalent:

(a)

$\hat{\tilde z}$ is a solution of the associated problem (15);

(b)

there exists a Lagrange multiplier $\hat{\tilde\lambda}_p\in\mathbb{R}^{IJ}$ such that

$$L((\hat{\tilde z},0),\hat{\tilde\lambda}_p)=\inf_{(\tilde z,\tilde u)\in\tilde\Theta}L((\tilde z,\tilde u),\hat{\tilde\lambda}_p).$$

(2)

in the case when these two equivalent conditions hold, $\hat{\tilde\lambda}_p$ is an optimal method of recovery in problem (14), and its error is

$$\Delta_1(p,\tilde\Theta)=-L((\hat{\tilde z},0),\hat{\tilde\lambda}_p)=\hat{\tilde z}_p.$$

Remark 3

(1)

The Lagrange Principle makes it possible to reduce the optimal recovery problem (14) to finding a solution of the associated problem (15) and a Lagrange multiplier $\hat{\tilde\lambda}_p$.

(2)

In order to find an optimal approximation of the matrix $\{z(x_m,y_n)\}_{m,n=1}^{M,N}$ in the initial problem (1), we need to solve $MN$ finite-dimensional problems (14), which leads us to solve $MN$ associated problems (15) and to find $MN$ Lagrange multipliers $\{\hat{\tilde\lambda}_p\}_{p=1}^{MN}$. These $MN$ Lagrange multipliers will be called pseudo-optimal methods of recovery of the value of the function $z(x,y)$ at the grid points $\{(x_m,y_n)\}_{m=1,n=1}^{M,N}$ of the initial problem (1). In this paper, we propose a method of finding the Lagrange multipliers $\{\hat{\tilde\lambda}_p\}_{p=1}^{MN}$ as a matrix with a regularization property for the ill-posed problem (1).

4. Lagrange multiplier selection method

Define $\Lambda:=(\tilde\lambda_1,\tilde\lambda_2,\dots,\tilde\lambda_{MN})^T$, which will be called a Lagrange multiplier matrix of problem (14). Obviously, the equality $\tilde\lambda_p=\Lambda^Te_p$ holds, where $\{e_p\}_{p=1}^{MN}=\{(1,0,\dots,0)^T,(0,1,\dots,0)^T,\dots,(0,0,\dots,1)^T\}$ is the standard basis in $\mathbb{R}^{MN}$. In this section, we show that the Lagrange multiplier matrix $\Lambda$ plays the role of the regularization parameter for the ill-posed problem (1), and we find a so-called pseudo-optimal regularization Lagrange multiplier matrix. First, consider some definitions of the classes of operators to which the integral operator $A$ with a positive kernel belongs. For simplicity, here we consider only the finite-dimensional case.

Definition 4.1

A matrix $\hat A(=\hat A_{M\times K})$ is called a positive matrix (on a cone) if for every $z\in\mathbb{R}^K$ with $z\ge0$ the inequality $\hat Az\ge0$ holds, where $z\ge0$ means that all components of the vector $z$ are nonnegative. Moreover, a matrix $\hat A(=\hat A_{M\times K})$ is called a generalized positive matrix if for any fixed matrix $\hat H(=\hat H_{K\times K})$ the matrix $(\hat A^T\hat A+\alpha\hat H)$ is a positive one for sufficiently small parameter $\alpha$, where $\alpha=\mathrm{diag}(\alpha_p)$ is a diagonal matrix with nonnegative real numbers on the diagonal.

Remark 4

(1)

Obviously, the finite-dimensional approximation of the integral operator $A$ with a positive kernel is not only a positive matrix, but also a generalized positive matrix.

(2)

For the infinite-dimensional case, one can use the corresponding concepts in vector lattices.[Citation35]

Now, let us discuss how to find a pseudo-optimal regularization Lagrange multiplier matrix $\Lambda$. The Lagrange Principle shows that for every fixed index $p$ there exists a Lagrange multiplier $\tilde\lambda_p$ such that

$$-\hat{\tilde z}_p=\inf_{(\tilde z,\tilde u)\in\tilde\Theta}\big(-\tilde z_p+\tilde\lambda_p^T\tilde u\big),\quad p=\overline{1,MN},\tag{17}$$

where $\hat{\tilde z}_p$ is the supremum of the associated problem (15) for the fixed index $p$. After a simple transformation we obtain

$$(\hat{\tilde z}_p-\tilde z_p)+\tilde\lambda_p^T\tilde u\ge0,\quad\forall(\tilde z,\tilde u)\in\tilde\Theta,\ p=\overline{1,MN}.\tag{18}$$

Let $\hat{\hat z}:=(\hat{\tilde z}_1,\dots,\hat{\tilde z}_{MN})^T$ and denote by $\hat{\hat z}^0$ the vector $\hat{\hat z}$ computed with parameter $\varepsilon=0$. By Remark 2 in Section 3 we have $\lim_{\varepsilon\to0^+}\hat{\hat z}=\hat{\hat z}^0$. The vector form of the system of inequalities (18) is

$$(\hat{\hat z}-\tilde z)+\Lambda\tilde u\ge0,\quad\forall(\tilde z,\tilde u)\in\tilde\Theta.\tag{19}$$

Denote by $\mathcal{L}$ the set of matrices $\Lambda$ satisfying inequalities (19). Theorem 3.3 asserts that $\mathcal{L}\ne\emptyset$. For a general algebraic equation, in [Citation28], using the simplex algorithm and the index set of active constraints, the author proposed an algorithm for finding all solutions of inequalities (19). In many cases, the solution set is infinite. In [Citation23], using the singular value decomposition, the author found a so-called optimal regularization Lagrange multiplier matrix under some assumptions. In this paper, using the generalized positivity of the matrix $\hat A$, we can also find a pseudo-optimal regularization Lagrange multiplier matrix.

In a similar way to the method described in [Citation23], among all solutions in the set $\mathcal{L}$ we choose a Lagrange multiplier matrix of the form

$$\Lambda=(\hat A^T\hat A+\alpha\hat H)^{-1}\hat A^T,\tag{20}$$

where $\alpha=\mathrm{diag}(\alpha_p)$, $p=\overline{1,MN}$, is the regularization parameter, to be defined later, and $\hat H$ is an $MN\times MN$ symmetric matrix obtained from the discretization of a penalty functional. For instance, for the squared $L_2$-norm penalty functional we have $\hat H=\hat I$, and for the Sobolev $W_2^1$ penalty functional we have $\hat H=\hat L$, where $\hat L$ is an $MN\times MN$ symmetric tridiagonal matrix.[Citation36]

First, let us show the well-posedness of definition (20). For simplicity, we consider only the case $\hat H=\hat I$. Indeed, by the singular value decomposition of the matrix $\hat A$ we have $\hat A=E\Sigma F^T$, where $E$ and $F$ are unitary matrices of size $IJ\times IJ$ and $MN\times MN$, respectively, and $\Sigma=\mathrm{diag}(\sigma_1,\dots,\sigma_s,0,\dots,0)$ is a rectangular diagonal matrix with nonnegative real numbers on the diagonal such that $\sigma_1>\sigma_2>\dots>\sigma_s>0$. Here $s\le\min(IJ,MN)$ denotes the rank of $\hat A$. Hence $\hat A^T\hat A+\alpha=F\tilde\Sigma(\alpha)F^T$, where $\tilde\Sigma(\alpha):=\mathrm{diag}(\sigma_1^2+\alpha_1,\dots,\sigma_s^2+\alpha_s,\alpha_{s+1},\dots,\alpha_{MN})$, which implies that $\hat A^T\hat A+\alpha$ is invertible for $\alpha_i\ge0$, $i=\overline{1,s}$, and $\alpha_j>0$, $j=\overline{s+1,MN}$.
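Numerically, one never inverts $(\hat A^T\hat A+\alpha\hat H)$ explicitly; the matrix $\Lambda$ of (20) is obtained by solving a linear system. A minimal sketch (Python/NumPy; the function name is ours, and $\hat H=\hat I$ by default):

```python
import numpy as np

def lagrange_matrix(A, alpha, H=None):
    """Lagrange multiplier matrix (20): Lambda = (A^T A + alpha H)^(-1) A^T.

    alpha may be a scalar or a length-MN vector (the diagonal of
    alpha = diag(alpha_p)); H defaults to the identity (squared L2 penalty)."""
    MN = A.shape[1]
    H = np.eye(MN) if H is None else H
    alpha = np.atleast_1d(alpha) * np.ones(MN)
    return np.linalg.solve(A.T @ A + np.diag(alpha) @ H, A.T)
```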

Now, consider the conditions on the regularization parameter $\alpha$.

Lemma 4.2

If $\hat{\hat z}^0_p>0$ for all $p=\overline{1,MN}$, then for sufficiently small error $\eta$ and grid scales $h_x$, $h_y$ the following two conditions are equivalent:

(1)

The rows of the matrix $\Lambda$ defined by (20) (i.e. the vectors $\tilde\lambda_p=\Lambda^Te_p$) are the Lagrange multipliers from Theorem 3.3.

(2)

The regularization parameter $\alpha$ in (20) is sufficiently small and satisfies the following inequalities:

$$\sup_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)>0}}f_p(\tilde u,\tilde z)\ \le\ \alpha_p\ \le\ \inf_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}}f_p(\tilde u,\tilde z),\quad p=\overline{1,MN},\tag{21}$$

where $\hat{\hat z}:=(\hat{\tilde z}_1,\dots,\hat{\tilde z}_{MN})^T$, $\hat{\tilde z}_p$ is the supremum of the associated problem (15) for the fixed index $p$, and the function $f_p(\tilde u,\tilde z)$ is defined by

$$f_p(\tilde u,\tilde z):=\frac{e_p^T\hat A^T(\hat A\hat{\hat z}+\tilde u-\hat A\tilde z)}{-e_p^T\hat H(\hat{\hat z}-\tilde z)}.\tag{22}$$

Proof

First, by the definition of a generalized positive matrix $\hat A$, the composite matrix $(\hat A^T\hat A+\alpha\hat H)$ is a positive one for sufficiently small parameter $\alpha$. Substituting formula (20) into inequality (19) and using the positivity of the operator $(\hat A^T\hat A+\alpha\hat H)$, we see that for all $(\tilde z,\tilde u)\in\tilde\Theta$ the inequality $(\hat A^T\hat A+\alpha\hat H)(\hat{\hat z}-\tilde z)+\hat A^T\tilde u\ge0$ holds. Rewriting this vector inequality componentwise,

$$e_p^T(\hat A^T\hat A+\alpha\hat H)(\hat{\hat z}-\tilde z)+e_p^T\hat A^T\tilde u\ge0,\quad\forall(\tilde z,\tilde u)\in\tilde\Theta,\ p=\overline{1,MN}.$$

After a simple transformation we obtain, for all $p\in\{1,\dots,MN\}$ and $(\tilde z,\tilde u)\in\tilde\Theta$:

$$\alpha_pe_p^T\hat H(\hat{\hat z}-\tilde z)\ \ge\ -e_p^T\hat A^T\tilde u-e_p^T\hat A^T\hat A(\hat{\hat z}-\tilde z)=-e_p^T\hat A^T(\hat A\hat{\hat z}+\tilde u-\hat A\tilde z).\tag{23}$$

From the above inequalities we can see what the regularization parameter $\alpha$ should be. First, we show that for sufficiently small error $\eta$ the right-hand side of inequality (23) is always negative. Indeed, by the positivity on a cone of the matrix $\hat A$ and the positivity of $\hat{\hat z}^0$, we know that $[\hat A\hat{\hat z}^0]_\iota>0$ for all $\iota=\overline{1,IJ}$. If one denotes

$$\varepsilon_0:=\tfrac12\min_{\iota=\overline{1,IJ}}[\hat A\hat{\hat z}^0]_\iota,\tag{24}$$

then for any $\varepsilon\in(0,\varepsilon_0]$ we have

$$[\hat A\hat{\hat z}+\tilde u-\hat A\tilde z]_\iota=[\hat A\hat{\hat z}]_\iota+[\tilde u-\hat A\tilde z]_\iota\ge[\hat A\hat{\hat z}^0]_\iota-\varepsilon\ge2\varepsilon_0-\varepsilon_0=\varepsilon_0>0,\tag{25}$$

which implies the negativity of the right-hand side of inequality (23) for sufficiently small error $\eta$.

If $e_p^T\hat H(\hat{\hat z}-\tilde z)=0$, then inequality (23) holds for any $\alpha_p$ provided the error $\eta$ is sufficiently small (i.e. $\varepsilon\le\varepsilon_0$). Thus, inequality (23) is equivalent to the following two inequalities:

(a)

if $e_p^T\hat H(\hat{\hat z}-\tilde z)>0$: for all $p\in\{1,\dots,MN\}$, $(\tilde z,\tilde u)\in\tilde\Theta$:
$$\alpha_p\ \ge\ \frac{e_p^T\hat A^T(\hat A\hat{\hat z}+\tilde u-\hat A\tilde z)}{-e_p^T\hat H(\hat{\hat z}-\tilde z)}=:f_p(\tilde u,\tilde z);$$

(b)

if $e_p^T\hat H(\hat{\hat z}-\tilde z)<0$: for all $p\in\{1,\dots,MN\}$, $(\tilde z,\tilde u)\in\tilde\Theta$:
$$\alpha_p\ \le\ \frac{e_p^T\hat A^T(\hat A\hat{\hat z}+\tilde u-\hat A\tilde z)}{-e_p^T\hat H(\hat{\hat z}-\tilde z)}=:f_p(\tilde u,\tilde z).$$

Let us discuss case (b). Note that the parameter $\alpha_p$ is a constant that does not depend on the point $(\tilde z,\tilde u)$, which implies that the value of $\alpha_p$ must satisfy the inequality $\alpha_p\le\inf_{(\tilde z,\tilde u)\in\tilde\Theta,\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}f_p(\tilde u,\tilde z)$, which is the right inequality of (21). In the same way, from case (a) we obtain the left inequality of (21), which proves the forward direction of the lemma. The reverse direction can be obtained in the same way, and this concludes the proof.

Obviously, we should verify that the condition of Lemma 4.2 is consistent; i.e. we should prove that for every fixed index $p$,

$$\sup_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)>0}}f_p(\tilde u,\tilde z)\ \le\ \inf_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}}f_p(\tilde u,\tilde z).$$

In addition, the regularization parameter $\alpha$ must be nonnegative; i.e. we should prove the inequality

$$0\ \le\ \min_{1\le p\le MN}\ \inf_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}}f_p(\tilde u,\tilde z).$$

Both questions are answered by the following lemma.

Lemma 4.3

If $\{(\tilde z,\tilde u)\in\tilde\Theta:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0\}\ne\emptyset$ for sufficiently small error $\eta$ and grid scales $h_x$, $h_y$, then the following inequalities hold for $p=\overline{1,MN}$:

$$-\infty<\sup_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)>0}}f_p(\tilde u,\tilde z)\ \le\ 0\ \le\ \inf_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}}f_p(\tilde u,\tilde z)<+\infty,\tag{26}$$

where the function $f_p(\tilde u,\tilde z)$ is defined by (22).

Proof

First, let us prove the inequality $0\le\inf_{(\tilde z,\tilde u)\in\tilde\Theta,\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}f_p(\tilde u,\tilde z)$. From the proof of Lemma 4.2, we know that for sufficiently small error $\eta$ and all $(\tilde z,\tilde u)\in\tilde\Theta$ the inequality $e_p^T\hat A^T(\hat A\hat{\hat z}+\tilde u-\hat A\tilde z)\ge0$ holds. Then, by the condition $e_p^T\hat H(\hat{\hat z}-\tilde z)<0$, we obtain the positivity of the function $f_p(\tilde u,\tilde z)$. Finally, by the property of the infimum, the first assertion is proved.

Now we show that for any fixed index $p$ the value of

$$\inf_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}}f_p(\tilde u,\tilde z)$$

is finite. Indeed, for any admissible point $(\tilde z^0,\tilde u^0)\in\{(\tilde z,\tilde u)\in\tilde\Theta:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0\}$ the value $f_p(\tilde u^0,\tilde z^0)$ is finite. Finally, by the assumption of the lemma and the inequality $\inf_{(\tilde z,\tilde u)\in\tilde\Theta,\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}f_p(\tilde u,\tilde z)\le f_p(\tilde u^0,\tilde z^0)<+\infty$, we obtain the rightmost inequality of (26).

Similarly, it is not difficult to obtain the two leftmost inequalities of (26). This completes the proof of the lemma.

The following lemma shows that if one chooses the penalty matrix $\hat H$ as the identity matrix, then the set $\{(\tilde z,\tilde u)\in\tilde\Theta:\ e_p^T(\hat{\hat z}-\tilde z)<0\}$ is nonempty.

Lemma 4.4

For all $p=\overline{1,MN}$: $\{(\tilde z,\tilde u)\in\tilde\Theta:\ e_p^T(\hat{\hat z}-\tilde z)<0\}\ne\emptyset$.

Proof

Obviously, $\{(\tilde z,\tilde u)\in\tilde\Theta:\ e_p^T(\hat{\hat z}-\tilde z)<0\}=\emptyset$ means that $\hat{\hat z}_p\ge\tilde z_p$ for all $(\tilde z,\tilde u)\in\tilde\Theta$, which implies that $\hat{\hat z}_p=r$. Now let us prove the lemma by contradiction. Assume that $\hat{\hat z}_{p_0}=\hat{\tilde z}_{p_0}=r$ for some index $p_0$ (note that, generally speaking, $\hat{\hat z}\ne\hat{\tilde z}$). Then, by the a priori information in $\tilde\Theta$ (the boundedness and the Lipschitz continuity of the solution), we have $\hat{\tilde z}_p\ge r-MNL_0(h_x+h_y)$ for all $p$. Note that for the integral equation with a positive kernel we have $\min_pa_p=:\underline a>0$, where $a_p$ denote the entries of a row of $\hat A$. This implies that

$$[\hat A\hat{\tilde z}]_\iota=\sum_{p=1}^{MN}a_p\hat{\tilde z}_p\ge\underline a\sum_{p=1}^{MN}\hat{\tilde z}_p\ge\underline a\,MN\big(r-MNL_0(h_x+h_y)\big).$$

By the condition $\|\hat A\hat{\tilde z}\|\le\varepsilon$ we obtain $\underline a\,MN(r-MNL_0(h_x+h_y))\le\varepsilon$, which is equivalent to

$$r\le\varepsilon/(\underline a\,MN)+MNL_0(h_x+h_y).$$

Obviously, this inequality fails for sufficiently small error $\eta$ and grid scales $h_x$, $h_y$. It follows that the assumption $\hat{\hat z}_{p_0}=r$ must be false; hence $\hat{\hat z}_{p_0}<r$, which is equivalent to $\{(\tilde z,\tilde u)\in\tilde\Theta:\ e_{p_0}^T(\hat{\hat z}-\tilde z)<0\}\ne\emptyset$, proving the claim.

4.1. Regularization parameter selection methods

From Lemma 4.2 we can see which constraints the regularization parameter $\alpha$ should satisfy. In this subsection, we introduce some methods of constructing a regularization parameter $\alpha$ that yield a pseudo-optimal regularization algorithm. First, we introduce the definition of the regularization matrix $\kappa$.

Definition 4.5

The diagonal matrix $\kappa=\mathrm{diag}(\kappa_p)$, defined by

$$\kappa_p:=\inf_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}}f_p(\tilde u,\tilde z),\tag{27}$$

will be called the regularization matrix of problem (14).

By Lemma 4.3 we see that $0\le\kappa_p<+\infty$ for every $p$. The following example shows that $\kappa\ne0$ in practice.

Example 1: Consider a matrix equation with the positive coefficient matrix $\hat A=[1,1;1,1]$. Assume that $h=\delta=0$, $r=2$, $L_0h_x=L_0h_y=2$ and $\hat H=I$. Then $\varepsilon=0$ and we have

$$\tilde\Theta:=\big\{(\tilde z,\tilde u)\in\mathbb{R}^2\times\mathbb{R}^2:\ |\tilde z_1|\le2,\ |\tilde z_2|\le2,\ |\tilde z_1-\tilde z_2|\le2,\ \tilde u_1=\tilde u_2=\tilde z_1+\tilde z_2\big\}.$$

The associated problem (15) gives $\hat{\hat z}=(1,1)^T$. Finally, we obtain

$$\kappa_1=\inf_{|\tilde z_1-\tilde z_2|\le2,\ 1<\tilde z_1\le2,\ |\tilde z_2|\le2}\frac{4}{\tilde z_1-1}=4,\qquad\kappa_2=\inf_{|\tilde z_1-\tilde z_2|\le2,\ 1<\tilde z_2\le2,\ |\tilde z_1|\le2}\frac{4}{\tilde z_2-1}=4.$$
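A brute-force numerical check of Example 1 confirms the value $\kappa_1=\kappa_2=4$ (a sketch; the infimum is attained at the boundary point $\tilde z_1=2$):

```python
import numpy as np

# kappa_1 = inf 4/(z1 - 1) over the admissible set with 1 < z1 <= 2.
z1 = np.linspace(1.0 + 1e-9, 2.0, 1_000_000)
print(np.min(4.0 / (z1 - 1.0)))   # -> 4.0 (at z1 = 2)
```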

The following lemma provides a sufficient condition for $\min_{p=\overline{1,MN}}\kappa_p>0$.

Lemma 4.6

If $\hat{\hat z}^0_p>0$ for all $p=\overline{1,MN}$, then for sufficiently small error $\eta$ and grid scales $h_x$, $h_y$ we have $\kappa_p>0$ for all $p=\overline{1,MN}$.

Proof

Define $\varepsilon_0$ by (24). By inequalities (25) and the boundedness of the solution, $\tilde z_p\le r$, we have

$$\inf_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}}\frac{e_p^T\hat A^T(\hat A\hat{\hat z}+\tilde u-\hat A\tilde z)}{-e_p^T\hat H(\hat{\hat z}-\tilde z)}\ \ge\ \inf_{\substack{(\tilde z,\tilde u)\in\tilde\Theta\\ \tilde z:\ e_p^T\hat H(\hat{\hat z}-\tilde z)<0}}\frac{\varepsilon_0\,e_p^T\hat A^T\mathbf{1}_{IJ}}{e_p^T\hat H(r\mathbf{1}_{MN}-\hat{\hat z})}=\frac{\varepsilon_0\,e_p^T\hat A^T\mathbf{1}_{IJ}}{e_p^T\hat H(r\mathbf{1}_{MN}-\hat{\hat z})}>0,$$

which yields the required result by the definition of $\kappa_p$. Here $\mathbf{1}_K$ denotes a $K\times1$ vector whose components are all unities. The last inequality, which holds for a sufficiently small error level and grid scales, can be proved by the method described in the proof of Lemma 4.2.

Using the regularization matrix $\kappa$, we can introduce various regularization parameter selection methods. Here we present only the two simplest ones: an a priori method and an a posteriori method. In what follows, we assume that $\kappa\ne0$.

4.1.1. A priori regularization parameter selection α=α1

The a priori regularization parameter $\alpha_1$ can be defined as

$$\alpha_1:=\frac{\varepsilon^\sigma}{1+\varepsilon^\sigma}\,\kappa,\quad0<\sigma<2,\tag{28}$$

where

$$\kappa=\mathrm{diag}(\kappa_p),\quad\text{or}\quad\kappa=\max_{p=\overline{1,MN}}\kappa_p\cdot I_{MN}.\tag{29}$$

Here $I_{MN}$ is the $MN$-dimensional identity matrix.
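The a priori rule (28) is straightforward to implement once $\varepsilon$ and $\kappa$ are known. A one-function sketch (Python/NumPy; the function name is ours):

```python
import numpy as np

def alpha_a_priori(eps, kappa, sigma=1.0):
    """A priori choice (28): alpha_1 = eps^sigma / (1 + eps^sigma) * kappa,
    0 < sigma < 2. `kappa` is the diagonal of the regularization matrix
    (27), or max_p kappa_p * ones(MN) for the second variant in (29)."""
    return (eps**sigma / (1.0 + eps**sigma)) * np.asarray(kappa, dtype=float)
```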

The following properties of the regularization parameter $\alpha_1$ given by formula (28) are easy to obtain.[Citation5]

Lemma 4.7

(a) $\forall\eta\ge0$: $\kappa(\eta)\ge\alpha_1(\eta)\ge0$;  (b) $\lim_{\eta\to0}\alpha_1(\eta)=0$;  (c) $\lim_{\eta\to0}\varepsilon^2/\max_{p=\overline{1,MN}}\alpha_{1,p}(\eta)=0$, where $\alpha_{1,p}$ is the $p$-th element on the diagonal of the matrix $\alpha_1$.

4.1.2. A posteriori regularization parameter selection α=α2

An a priori parameter choice is not suitable in practice, since a good regularization parameter $\alpha$ requires knowledge of the unknown solution $z$. This knowledge is not needed for an a posteriori parameter choice. Here, the a posteriori regularization parameter $\alpha_2$ can be defined by

$$\alpha_2:=\min\Big\{\alpha^*,\ \max_{p=\overline{1,MN}}\kappa_p\Big\},\tag{30}$$

where the nonnegative number $\alpha^*$ is the regularization parameter chosen by the generalized discrepancy principle,[Citation11,Citation13] i.e. a positive root of the function

$$\rho(\alpha)=\|\hat A\tilde z^\alpha-\tilde u\|^2_{\mathbb{R}^{IJ}}-\big(\delta+h\cdot\|\tilde z^\alpha\|_{\mathbb{R}^{MN}}\big)^2,\tag{31}$$

where $\tilde z^\alpha=\Lambda|_{\alpha=\alpha^*I}\cdot\tilde u$.
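A sketch of the a posteriori rule (30)–(31) for $\hat H=\hat I$: the root $\alpha^*$ of the generalized discrepancy $\rho$ is found with a scalar root-finder (Python/SciPy; the function name and the root bracket are assumptions and would need to be adapted to the data):

```python
import numpy as np
from scipy.optimize import brentq

def alpha_a_posteriori(A, u, h, delta, kappa_max):
    """A posteriori choice (30)-(31): alpha_2 = min(alpha*, max_p kappa_p),
    where alpha* is a positive root of the generalized discrepancy
    rho(alpha) = ||A z_alpha - u||^2 - (delta + h*||z_alpha||)^2,
    with z_alpha = (A^T A + alpha I)^(-1) A^T u (penalty H = I)."""
    MN = A.shape[1]

    def rho(alpha):
        z = np.linalg.solve(A.T @ A + alpha * np.eye(MN), A.T @ u)
        return (np.linalg.norm(A @ z - u)**2
                - (delta + h * np.linalg.norm(z))**2)

    # rho is increasing in alpha; assumes it changes sign on the bracket.
    alpha_star = brentq(rho, 1e-12, 1e6)
    return min(alpha_star, kappa_max)
```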

By the theory of the generalized discrepancy principle for ill-posed problems,[Citation5,Citation6] the following assertions hold.

Lemma 4.8

(a) $0\le\alpha_2\le\max_{p=\overline{1,MN}}\kappa_p$;  (b) $\lim_{\eta\to0}\alpha_2(\eta)=0$.

5. The optimal and regularization properties of the algorithm

Denote by $\mathrm{Hom}(Z,U)$ the class of all continuous operators acting from the space $Z$ into the space $U$.

Definition 5.1

The family $\{R_p\}_{p=1}^{MN}$, $R_p\colon(0,+\infty)\times(0,+\infty)\times\mathrm{Hom}(Z,U)\times(\mathbb{R}^I\otimes\mathbb{R}^J)\to\mathbb{R}$, is called a family of pointwise regularization methods of problem (1) on a given grid $\{(x_m,y_n)\}_{m=1,n=1}^{M,N}$ if for every fixed point $(x_m,y_n)$ of the grid

$$\sup_{(z,u)\in\Theta}|z(x_m,y_n)-R_p(\eta,\alpha,A_h,u)|\ \to\ 0\quad\text{as}\quad I,J\to+\infty,\ \eta\to0,\qquad p=\overline{1,MN},\tag{32}$$

where the set of a priori information $\Theta$ of problem (1) is defined by (4). Denote by $R:=(R_1,R_2,\dots,R_{MN})$ a pointwise regularization method of problem (1) on the given grid $\{(x_m,y_n)\}_{m=1,n=1}^{M,N}$.

  • The least upper bound in (32) will be called the point error of the family of regularization methods $R_p$ of problem (1) at a given point $(x_m,y_n)$ and denoted by $\Delta_p((x_m,y_n),\Theta,R_p)$, $p=\overline{1,MN}$.

  • The value $\Delta(\Theta,R):=\sum_{p=1}^{MN}\Delta_p((x_m,y_n),\Theta,R_p)$ will be called the total error of the pointwise regularization method $R$ of problem (1) on the given grid $\{(x_m,y_n)\}_{m=1,n=1}^{M,N}$.

Remark 5

Compared with the classical regularization operator, the pointwise regularization method has a stronger regularization property. Indeed, using the bilinear interpolation operator $\bar Q_{M,N}$, we can obtain a classical regularization operator for problem (1) as the composite operator $\bar Q_{M,N}R$; i.e.

$$\forall z\in M_r:\quad\|z-\bar Q_{M,N}R(\eta,\alpha,A_h,u)\|_Z\to0\quad\text{as}\quad\eta\to0,\ M,N,I,J\to+\infty.$$

Definition 5.2

A regularization method $\hat R$ of problem (1) is called pseudo-optimal if for any fixed number $\epsilon>0$ and any regularization method $R'\colon(0,+\infty)\times(0,+\infty)\times\mathrm{Hom}(Z,U)\times(\mathbb{R}^I\otimes\mathbb{R}^J)\to\mathbb{R}^M\otimes\mathbb{R}^N$ of problem (1) there exists a pair of numbers $(M(\epsilon),N(\epsilon))$ such that on the fixed grid $\{(x_m,y_n)\}_{m=1,n=1}^{M(\epsilon),N(\epsilon)}$ the following inequality holds:

$$\Delta(\Theta,\hat R)<\Delta(\Theta,R')+\epsilon,\tag{33}$$

where the number $\epsilon$ will be called the error of the finite-dimensional approximation of problem (14).

Define a regularization method $R$ of problem (1) by

$$R(\eta,\alpha,A_h,u):=\mathrm{array}_0(\Lambda\tilde u),\tag{34}$$

where $\tilde u=\mathrm{vec}(u)$, $\Lambda=\Lambda(\alpha,A_h)$ is the Lagrange multiplier matrix of the $MN$ problems (14) defined by (20), and the regularization parameter $\alpha$ is chosen either by the a priori method (28) or by the a posteriori method (30).

Remark 6

The mapping $R$ defined by (34) does not depend linearly on $u$, since the matrix $\Lambda$ depends on $u$.

Before formulating the fundamental theorem of this work, consider the definition of a complete bounded finite-dimensional approximation, which is a crucial point in proving the regularizing property of the proposed method.

Definition 5.3

A family of operators $\{\Pi_{I,J}\colon U\to\mathbb{R}^I\otimes\mathbb{R}^J\}_{I,J=1}^\infty$ is called a complete bounded finite-dimensional approximation of the space $U$ if it satisfies the following two conditions simultaneously.

(1)

There exists a constant $C_\Pi>0$ such that $\forall\varepsilon>0$: $\|\Pi_{I(\varepsilon),J(\varepsilon)}\|_{U\to\mathbb{R}^I\otimes\mathbb{R}^J}\le C_\Pi$.

(2)

$\forall u_1,u_2\in U$ with $u_1\ne u_2$:
$$\liminf_{\varepsilon\to0^+}\|\Pi_{I(\varepsilon),J(\varepsilon)}u_1-\Pi_{I(\varepsilon),J(\varepsilon)}u_2\|_{\mathbb{R}^I\otimes\mathbb{R}^J}>0\quad\text{or}\quad\liminf_{\varepsilon\to0^+}\|\Pi_{I(\varepsilon),J(\varepsilon)}u_1-\Pi_{I(\varepsilon),J(\varepsilon)}u_2\|_{\mathbb{R}^I\otimes\mathbb{R}^J}=\infty.$$

At the end of this section, let us formulate the fundamental theorem, which is the main result of this article.

Theorem 5.4

Suppose that $\min_{p=\overline{1,MN}}\hat{\hat z}^0_p>0$ and $\{\Pi_{I,J}\colon U\to\mathbb{R}^I\otimes\mathbb{R}^J\}_{I,J=1}^\infty$ is a complete bounded finite-dimensional approximation of the space $U$. Then the regularization method $R$ defined by (34) is a pseudo-optimal pointwise regularization method of problem (1). Consequently, the operator $\bar Q_{M,N}R$ is a pseudo-optimal regularization operator of problem (1).

Proof

The (pointwise) regularization property of the proposed method $\bar Q_{M,N}R$ ($R$) can be proved by combining the proof of the regularization property in [Citation37] (where the author used the complete bounded finite-dimensional approximation operators $\{\Pi_{I,J}\}_{I,J=1}^\infty$ to obtain the regularizing property of the proposed method) with the proof for the classical Tikhonov regularization algorithm for ill-posed problems.[Citation7]

Now, let us prove the optimality property of the proposed method $R$. First, note that for any fixed error of the finite-dimensional approximation $\epsilon$ there exists a pair of numbers $(M(\epsilon),N(\epsilon))$ such that the following two inequalities hold [Citation28]:

$$\sup_{(z,u)\in\Theta}|z(x_m,y_n)-e_p^T\Lambda\tilde u|\le\sup_{(z,u)\in\hat\Theta}|z_{m,n}-e_p^T\Lambda\tilde u|+\frac{\epsilon}{2MN},$$
$$\inf_{\lambda\in\mathbb{R}^I\otimes\mathbb{R}^J}\sup_{(z,u)\in\hat\Theta}|z_{m,n}-\langle\lambda,u\rangle|\le\inf_{\lambda\in\mathbb{R}^I\otimes\mathbb{R}^J}\sup_{(z,u)\in\Theta}|z(x_m,y_n)-\langle\lambda,u\rangle|+\frac{\epsilon}{2MN},$$

where $z_{m,n}$ is the element of the matrix $\mathbf{z}$ in the $m$-th row and $n$-th column.

Fix a grid $\{(x_m,y_n)\}_{m=1,n=1}^{M(\epsilon),N(\epsilon)}$. Consider any regularization method (possibly nonlinear) $R'\colon(0,+\infty)\times(0,+\infty)\times\mathrm{Hom}(Z,U)\times(\mathbb{R}^I\otimes\mathbb{R}^J)\to\mathbb{R}^M\otimes\mathbb{R}^N$ of problem (1) on this grid. Define the auxiliary mapping $P\colon\mathbb{R}^I\otimes\mathbb{R}^J\to\mathbb{R}^M\otimes\mathbb{R}^N$ by $P(u)=R'(\eta,\alpha,A_h,u)$ and put $R'_p(u):=e_p^T\mathrm{vec}_0R'(u)$. For every fixed index $p$, by the Lagrange Principle and the definition (34) of the regularization method $R$, we have

$$\begin{aligned}\Delta_p((x_m,y_n),\Theta,R_p)&:=\sup_{(z,u)\in\Theta}|z(x_m,y_n)-R_p(\eta,\alpha,A_h,u)|\overset{(34)}{=}\sup_{(z,u)\in\Theta}|z(x_m,y_n)-e_p^T\Lambda\tilde u|\\&\le\sup_{(z,u)\in\hat\Theta}|z_{m,n}-e_p^T\Lambda\tilde u|+\frac{\epsilon}{2MN}\overset{(*)}{=}\inf_{\lambda\in\mathbb{R}^I\otimes\mathbb{R}^J}\sup_{(z,u)\in\hat\Theta}|z_{m,n}-\langle\lambda,u\rangle|+\frac{\epsilon}{2MN}\\&\le\inf_{\lambda\in\mathbb{R}^I\otimes\mathbb{R}^J}\sup_{(z,u)\in\Theta}|z(x_m,y_n)-\langle\lambda,u\rangle|+\frac{\epsilon}{2MN}+\frac{\epsilon}{2MN}\\&\overset{(**)}{=}\inf_{\varphi\colon\mathbb{R}^I\otimes\mathbb{R}^J\to\mathbb{R}}\sup_{(z,u)\in\Theta}|z(x_m,y_n)-\varphi(u)|+\frac{\epsilon}{MN}\\&\le\sup_{(z,u)\in\Theta}|z(x_m,y_n)-e_p^T\mathrm{vec}_0P(u)|+\frac{\epsilon}{MN}\\&=\sup_{(z,u)\in\Theta}|z(x_m,y_n)-R'_p(\eta,\alpha,A_h,u)|+\frac{\epsilon}{MN}=:\Delta_p((x_m,y_n),\Theta,R'_p)+\frac{\epsilon}{MN},\end{aligned}$$

where the equality $(*)$ holds by the Lagrange Principle and the equality $(**)$ holds by Smolyak's theorem [Citation19,Citation27] on the existence of linear optimal recovery methods.

From the above inequalities we obtain

$$\Delta(\Theta,R):=\sum_{p=1}^{MN}\Delta_p((x_m,y_n),\Theta,R_p)\le\sum_{p=1}^{MN}\Big(\Delta_p((x_m,y_n),\Theta,R'_p)+\frac{\epsilon}{MN}\Big)=:\Delta(\Theta,R')+\epsilon,$$

which implies the desired result.

Figure 1. Left: the truecolor image of the galaxy cluster, whose digital representation is a 1280×1280×3 array. Right: the inexact point spread function $K_h(x,y,s,t)$.

Figure 2. Reconstructions with noise levels $h=\delta=0.01$: (1) the exact blurred image $u$; (2) the noisy blurred image $u_\delta$; (3) reconstruction obtained by our method with the a priori regularization parameter selection $\alpha=\alpha_1$; (4) reconstruction obtained by our method with the a posteriori regularization parameter selection $\alpha=\alpha_2$.

Figure 3. Reconstructions with noise levels $h=\delta=0.1$: (1) the exact blurred image $u$; (2) the noisy blurred image $u_\delta$; (3) reconstruction obtained by our method with the a priori regularization parameter selection $\alpha=\alpha_1$; (4) reconstruction obtained by our method with the a posteriori regularization parameter selection $\alpha=\alpha_2$.

Table 1. The total pseudo-optimal errors of the proposed pseudo-optimal pointwise regularization method for problem (35) and the relative errors of the reconstructions presented in Figures 2 and 3.

6. A solution algorithm of the inverse problem for the two-dimensional integral equation of the first kind with a positive kernel

In this section, we describe the steps of a pseudo-optimal regularized solution algorithm for the optimal recovery problem (7) for the two-dimensional integral equation of the first kind with a positive kernel (1). The algorithm is based on the theory presented above.

(1)

Using the a priori information $r,L_0,u,K_h,L_\mu,R_\mu\ (\mu=x,y,s,t),\delta,h,M,N,I,J$, calculate the predictive error $\varepsilon$ by (3) and the matrix $\hat A$ by the trapezoidal rule.

(2)

For every fixed index $p=\overline{1,MN}$, find the supremum $\hat{\tilde z}_p$ of the associated problem (15). These numbers form the vector $\hat{\hat z}=(\hat{\tilde z}_1,\dots,\hat{\tilde z}_{MN})^T$.

(3)

Calculate the vector $\hat{\hat z}^0$ of suprema of the associated problems with parameter $\varepsilon=0$.

(4)

If $\min_{p=\overline{1,MN}}\hat{\hat z}^0_p>0$, go to step (5). Otherwise, our algorithm does not apply, and one can use the algorithms described in [Citation23,Citation28].

(5)

Calculate the regularization matrix $\kappa$ by formula (27), and then calculate the regularization parameter $\alpha$ either by the a priori method (28), $\alpha=\alpha_1$, or by the a posteriori method (30), $\alpha=\alpha_2$.

(6)

Compute the Lagrange multiplier matrix $\Lambda$ by (20).

(7)

The result is the following: the matrix $\Lambda$ is a pseudo-optimal pointwise regularization method of problem (1) on the given grid $\{(x_m,y_n)\}_{m=1,n=1}^{M,N}$; the continuous operator $\bar Q_{M,N}\Lambda$ is a pseudo-optimal regularization method of problem (1); the vector $R(\alpha,A_h,u):=\mathrm{array}_0(\Lambda\,\mathrm{vec}(u))$ is a pseudo-optimal regularized approximate solution of problem (1) on the given grid; $\Delta_p((x_m,y_n),\Theta,R_p)=\hat{\hat z}_p(=\hat{\tilde z}_p)$, $p=\overline{1,MN}$, is the corresponding pseudo-optimal point error of the family of regularization methods $R_p(:=e_p^TR)$ of problem (1) at the point $(x_m,y_n)$; and $\Delta(\Theta,R)=\sum_p\hat{\hat z}_p$ is the total pseudo-optimal error of the regularization method $R$ of problem (1) on the given grid. The continuous function $\bar Q_{M,N}R$ is a pseudo-optimal regularized approximate solution of problem (1). (A condensed sketch of these steps follows.)
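A condensed sketch of Steps (1)–(7) for a small grid, reusing the helper functions sketched earlier (`solve_associated`, `alpha_a_priori`, `lagrange_matrix`); the computation of the regularization matrix $\kappa$ of (27), which requires a further family of convex programs, is replaced by a placeholder here:

```python
import numpy as np

def pseudo_optimal_solve(A, u_tilde, eps, r, L0, hx, hy, M, N, sigma=1.0):
    """End-to-end sketch of Steps (1)-(7); illustrative only.

    Returns the pseudo-optimal regularized solution on the grid together
    with the total pseudo-optimal error Delta(Theta, R)."""
    MN = M * N

    # Steps (2)-(3): suprema of the associated problems for eps and eps = 0.
    z_hat  = np.array([solve_associated(A, eps, r, L0, hx, hy, M, N, p)[1]
                       for p in range(MN)])
    z_hat0 = np.array([solve_associated(A, 0.0, r, L0, hx, hy, M, N, p)[1]
                       for p in range(MN)])

    # Step (4): applicability check.
    if z_hat0.min() <= 0:
        raise ValueError("min z-hat^0_p <= 0: fall back to [23, 28]")

    # Step (5): kappa of (27) replaced by a crude placeholder, then the
    # a priori rule (28).
    kappa = np.full(MN, 1.0)                    # placeholder for (27)
    alpha = alpha_a_priori(eps, kappa, sigma)

    # Step (6): Lagrange multiplier matrix (20) with H = I.
    Lam = lagrange_matrix(A, alpha)

    # Step (7): pseudo-optimal solution and its total error.
    z = (Lam @ u_tilde).reshape((M, N), order="F")
    return z, z_hat.sum()
```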

7. Numerical experiments

In this section, we present a numerical experiment illustrating the ability of our method to reconstruct the solution of the following two-dimensional image deblurring problem:

$$\int_{-0.5}^{0.5}\int_{-0.5}^{0.5}K(x,y,s,t)\,z(x,y)\,dx\,dy=u(s,t),\quad(s,t)\in[-0.5,0.5]\times[-0.5,0.5],\tag{35}$$

where

$$K(x,y,s,t)=\exp\big\{-16\big[(x-s)^2+(y-t)^2\big]\big\}.$$

In our simulations, the inexact point spread function is given by (see the right panel of Figure 1)

$$K_h(x,y,s,t)=(1-h)K(x,y,s,t)=(1-h)\exp\big\{-16\big[(x-s)^2+(y-t)^2\big]\big\}.$$

The blurred discrete image $u$ with noisy data is given by $u_{i,j}=u(s_i,t_j)+\epsilon_{i,j}$ ($i=\overline{1,I}$, $j=\overline{1,J}$), where the $\epsilon_{i,j}$ describe measurement errors. We generated the exact image $u(s_i,t_j)$ using a forward solver and added independent and identically distributed Gaussian random variables. Denote by $\bar u:=[u(s_i,t_j)]_{I\times J}$ the exact discrete image and by $u_\delta:=\bar\Pi_{I,J}u$ the blurred continuous image, where the bilinear interpolation operator $\bar\Pi_{I,J}$ is defined in the same way as in (9). Define the error level $\delta$ as $\delta=\|u-\bar u\|/\|\bar u\|$. Set $M=N=I=J=256$, $r=2.5$ and $L_0=1$. The model solution is the galaxy cluster Arp 272, a remarkable collision between two spiral galaxies, NGC 6050 and IC 1179, which is part of the Hercules Galaxy Cluster, located in the constellation of Hercules (see the left panel of Figure 1).

We discretize the integral operator in (35) using the composite trapezoidal quadrature rule. The Fast Fourier Transform, carried out with Matlab's built-in functions fft2 and ifft2, is used for the convolution. Figures 2 and 3 show reconstructions from data with different noise levels $h$ and $\delta$, generated using our proposed pseudo-optimal pointwise regularization method.
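Since the kernel in (35) depends only on the shifts $x-s$ and $y-t$, the forward blur is a convolution and can be applied via the FFT. A reduced-size sketch of the data generation (Python/SciPy; `fftconvolve` plays the role of the Matlab `fft2`/`ifft2`-based convolution, a random matrix stands in for the test image, and a rectangle quadrature rule is used instead of the trapezoidal one for brevity):

```python
import numpy as np
from scipy.signal import fftconvolve

M = N = 64                                      # reduced grid (256 in the paper)
x = np.linspace(-0.5, 0.5, M)
hq = x[1] - x[0]                                # quadrature step

z_exact = np.random.rand(M, N)                  # stand-in for the galaxy image

# Sample the PSF K on all shifts x_m - s_i in [-1, 1].
d = np.arange(-(M - 1), M) * hq
DX, DY = np.meshgrid(d, d, indexing="ij")
psf = np.exp(-16.0 * (DX**2 + DY**2))

# Forward blur: u(s_i, t_j) ~ sum_{m,n} K(x_m - s_i, y_n - t_j) z_{m,n} hq^2.
u_exact = fftconvolve(z_exact, psf, mode="valid") * hq**2   # shape (M, N)

# Add i.i.d. Gaussian noise at relative level delta, as in Section 7.
delta = 0.01
noise = np.random.randn(*u_exact.shape)
u_noisy = u_exact + delta * np.linalg.norm(u_exact) / np.linalg.norm(noise) * noise
```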

For all the considered noise levels $h$ and $\delta$, the total pseudo-optimal error $\Delta(\Theta,R)$ of our method is the same and equals 430. Since the exact solution of the model problem is known, we can also calculate the relative errors of the obtained approximate solutions. The total pseudo-optimal error $\Delta(\Theta,R)$ of our method and the relative errors of the reconstructions are displayed in Table 1.

8. Conclusion

The results of the numerical experiments show that the proposed method is an efficient and feasible algorithm for solving the two-dimensional integral equation of the first kind with a positive kernel. Compared with standard regularization methods, the pseudo-optimal pointwise regularization method guarantees a pseudo-optimal error of the method. The proposed regularization method can be applied not only to $n$-dimensional ($n\ge1$) integral equations of the first kind with a positive kernel, but also to equations with any generalized positive operator on a Banach lattice.[Citation35] This work is of practical use in image processing, optics and gravity measurement.

Acknowledgements

We thank the referees for their extremely helpful comments. We are especially grateful to the associate editor for his helpful comments on versions of the manuscript and for his patience and persistence.

Additional information

Funding

The work was partially supported by RFBR [grant number 14-01-00182-a], [grant number 14-01-91151-NSFC-a].

Notes

No potential conflict of interest was reported by the authors.

References

  • Arfken GB, Weber HJ. Mathematical methods for physicists. San Diego (CA): Harcourt; 2001.
  • Polyanin AD, Manzhirov AV. Handbook of integral equations. Boca Raton (FL): CRC Press; 1998.
  • Krasnov M, Kiselev A, Makarenko G. Problems and exercises in integral equations. Moscow: Mir Publishers; 1971.
  • Roggemann MC, Welsh B. Imaging through turbulence. Boca Raton (FL): CRC Press; 1996.
  • Tikhonov AN, Leonov AS, Yagola AG. Nonlinear Ill-posed problems. London: Chapman and Hall; 1998.
  • Tikhonov AN, Goncharsky AV, Stepanov VV, Yagola AG. Numerical methods for the solution of Ill-posed problems. Dordrecht: Kluwer Academic Publishers; 1995.
  • Wang YF, Yagola AG, Yang CC. Optimization and regularization for computational inverse problems and applications. 1st ed. Berlin: Springer-Verlag; 2011.
  • Ivanov VK, Vasin VV, Tanana VP. Theory of linear Ill-posed problems and its applications. Utrecht: VSP; 2002.
  • Beilina L, Klibanov MV. Approximate global convergence and adaptivity for coefficient inverse problems. New York (NY): Springer-Verlag; 2012.
  • Bakushinsky AB, Goncharsky AV. Ill-Posed problems: theory and applications. Dordrecht: Kluwer Academic Publishers; 1994.
  • Yagola AG, Leonov AS, Titarenko VN. Data errors and an error estimation for ill-posed problems. Inverse Prob. Sci. Eng. 2002;10:117–129.
  • Dorofeev KYu, Nikolaeva NN, Titarenko VN, Yagola AG. New approaches to error estimation to ill-posed problems with applications to inverse problems of heat conductivity. J. Inverse Ill-Posed Prob. 2002;10:155–169.
  • Titarenko VN, Yagola AG. Error estimation for ill-posed problems on piecewise convex functions and sourcewise represented sets. J. Inverse Ill-Posed Prob. 2008;14:1–14.
  • Wang YF, Zhang Y, Lukyanenko DV, Yagola AG. A method of restoring the aerosol particle size distribution function on the set of piecewise-convex functions. Numer. Methods Prog. 2011;13:67–73. Russian.
  • Wang YF, Zhang Y, Lukyanenko DV, Yagola AG. Recovering aerosol particle size distribution function on the set of bounded piecewise-convex functions. Inverse Prob. Sci. Eng. 2013;21:339–354.
  • Tikhonov AN, Arsenin VY. Solutions of Ill-posed problems. New York (NY): Wiley; 1977.
  • Micchelli CA, Rivlin TJ. Lectures on optimal recovery. Lecture notes in mathematics. Heidelberg: Springer Verlag; 1985.
  • Micchelli CA, Rivlin TJ. A survey of optimal recovery. In: Optimal estimation in approximation theory. New York (NY): Plenum Press; 1977.
  • Magaril-Ilyaev GG, Osipenko KY, Tikhomirov VM. Optimal recovery and extremum theory. Comput. Methods Funct. Theory. 2002;2:87–112.
  • Bayev AV, Yagola AG. Optimal recovery in problems of solving linear integral equations with a priori information. J. Inverse Ill-Posed Problems. 2007;15:569–586.
  • Arestov VV. Best recovery of operators and related problems. Proc. Steklov Inst. Math. 1989;189:320.
  • Traub JF, Wozniakowski H. A general theory of optimal algorithms. New York (NY): Academic Press; 1980.
  • Zhang Y, Lukyanenko DV, Yagola AG. Using Lagrange principle for solving linear ill-posed problems with a priori information. Numer. Methods Prog. 2013;14:468–482. Russian.
  • Zhang Y, Lukyanenko DV, Yagola AG. An optimal regularization method for convolution equations on the sourcewise represented set. J. Inverse Ill-Posed Prob. 2015;23. doi:10.1515/jiip-2014-0047
  • Jain AK. Fundamentals of digital image processing. New York (NY): Prentice-Hall; 1989.
  • Bertero M, Boccacci P. Introduction to inverse problems in imaging. Bristol: IOP Publishing; 1998.
  • Smolyak SA. On optimal restoration of functions and functionals of them [dissertation]. Moscow: Moscow State University; 1965. Russian.
  • Bayev AV. The Lagrange principle and finite-dimensional approximations in the optimal inverse problem for linear operators. Numer. Methods Prog. 2006;7:323–336. Russian.
  • Davis PJ, Rabinowitz P. Methods of numerical integration. New York (NY): Academic Press; 1975.
  • Stoer Josef, Bulirsch Roland. Introduction to numerical analysis. New York (NY): Springer-Verlag; 1980.
  • Berge C. Topological spaces. Patterson EM, translator; New York (NY): Macmillan; 1963.
  • Hogan WW. Point-to-set maps in mathematical programming. SIAM Rev. 1973;15:591–603.
  • Hogan WW. The continuity of the perturbation function of a convex program. Oper. Res. 1973;21:351–352.
  • Fiacco A. Introduction to sensitivity and stability analysis in nonlinear programming. New York (NY): Academic press; 1983.
  • Schaefer HH. Banach lattices and positive operators. New York (NY): Springer-Verlag; 1974.
  • Vogel CR. Computational methods for inverse problems. Philadelphia: SIAM; 2002.
  • Bayev AV. An optimal regularizing algorithm for the recovery of functionals in linear inverse problems with sourcewise represented solution. Zh. Vychisl. Mat. Mat. Fiz. 2008;48:1933–1941. Russian.
