Numerical reconstruction of unknown boundary data in the Cauchy problem for Laplace’s equation

Pages 1255-1267 | Received 24 Feb 2012, Accepted 24 Oct 2012, Published online: 23 Nov 2012

Abstract

This study explores the application of the quasi-solution and Fourier methods to a mixed initial-boundary value problem for Laplace’s equation in a rectangle. The problem is reduced to a system consisting of mutually dependent direct and adjoint problems. An explicit formula for the quasi-solution is obtained using the Fourier method. Computational experiments are performed for different types of synthetic data, and admissible parameter ranges are established.

1 Introduction

Laplace’s and Poisson’s equations are commonly studied mathematical models used in applications involving both direct and inverse problems. In many practical cases, boundary data cannot be specified on the whole boundary of a given domain. For instance, in geophysical applications, much of the boundary of the object of interest is not accessible for measurement. Hence, recovery of the boundary data is an important and practical problem [1, 2]. All available measurements on the accessible portion of the boundary are used as additional information. This work concerns the application of the Fourier and quasi-solution methods to a mixed initial-boundary value problem for Laplace’s equation in a rectangle. This is a classical inverse problem studied by many authors [3–8]. We consider the same statement as in [5], but we neither minimise the residual functional by a gradient method nor use other iterative methods. Instead, we find a solution to the given problem by using a necessary minimum condition for the residual.

The Fourier method was first applied to the investigation of the Cauchy problem for Laplace’s equation in [7]. An iterative method for solving an initial-boundary value problem for Laplace’s equation was developed in [6], where the initial data were complemented by additional data on one side of the boundary. The quasi-solution method [9] is the most useful method for investigating problems of this type [5–10]. By this method, a given inverse problem is reduced to an optimal control problem that can be solved using the corresponding techniques. In many cases, the gradient of the residual functional is expressed in terms of the solution to an adjoint problem [3, 5, 10–16].

Therefore, this solution is useful for the numerical calculation of the gradient [3, 5, 6, 10–13]. Under certain conditions, the optimal control can be expressed in terms of the solutions to the direct and adjoint problems [11, 16–21]. The necessary optimality conditions lead to a coupled system consisting of the direct and adjoint problems [16–21]. The solution, if it exists, satisfies the necessary optimality conditions. If the residual functional is convex, then the necessary condition is also sufficient for optimality. To the best of our knowledge, the authors of [16, 17, 20] were the first to use such an approach for optimal control problems concerning processes in distributed objects. The solvability of a system consisting of mutually dependent direct and adjoint problems was first considered in [21] for the 1D quasi-linear parabolic equation. Later, this problem was investigated in detail in [22]. We are not aware of any reference to the application of the Fourier method to the numerical solution of such systems.

In most cases, the optimality conditions lead to a nonlinear system; therefore, the Fourier method cannot be applied directly. In many cases, the solution to the adjoint problem is used only to compute the gradient of the functional, and the simultaneous solution of the coupled direct and adjoint problems is not considered [3, 5, 6, 10–13].

As already mentioned, we consider the same statement as in [5]; however, we do not minimise the residual functional by a gradient method and do not use other iterative methods, as is done in [5]. Instead, we find a solution to the given problem by using a necessary minimum condition for the residual. This system expresses the necessary optimality condition, namely that the first variation of the functional vanishes at the extremum point. In [5], the solution to the adjoint problem is used only to find the gradient of the residual. Compared with [5], the main difference of the present work is that we directly solve the coupled system consisting of the direct and adjoint problems and thereby obtain an explicit formula for the solution of the given inverse problem. Furthermore, we do not apply any gradient methods to obtain numerical solutions. In [5], only one numerical example corresponding to the first harmonic is considered; in the present work, we analyse different cases with a large number of harmonics and with a discontinuous sought-for function.

This method makes it possible to carry out a large number of numerical simulations and to find admissible values of the regularisation parameter in different cases. The explicit formula for the solution allows one to study the solution in detail, which is an additional advantage of the presented method. Questions concerning the convexity of the residual functional and sufficient conditions of optimality are outside the scope of this work.

The paper is organised as follows: in Section 2, the statement of the problem is given, and the residual functional is constructed; in Section 3, the explicit formula for the solution to a system consisting of mutually dependent direct and adjoint problems is derived using the Fourier method; and results of computational experiments are presented in Section 4.

2 The statement of the problem

We consider the following problem:

Find the function U(a, y) at the boundary x = a, where U(x, y) is the solution to the initial-boundary value problem for Poisson’s equation
\[
\begin{aligned}
&U_{xx} + U_{yy} = f(x,y), \quad (x,y) \in \Omega = (0,a)\times(0,b),\\
&U(x,0) = q_1(x), \quad U(x,b) = q_2(x),\\
&U(0,y) = p_1(y), \quad U_x(0,y) = p(y).
\end{aligned}
\tag{2.1}
\]
Assume that
\[
f(x,y) \in C(\bar{\Omega}), \quad q_1(x),\, q_2(x) \in C[0,a], \quad p_1(y),\, p(y) \in C[0,b],
\]
and that the following consistency conditions hold true:
\[
q_1(0) = p_1(0), \quad p_1(b) = q_2(0).
\tag{2.2}
\]
Let us rewrite the statement (2.1)–(2.2). The solution U(x, y) to the problem (2.1)–(2.2) can be represented as U(x,y) = V(x,y) + u(x,y), where the functions V(x, y) and u(x, y) are the solutions to the problems formulated below:
\[
\begin{aligned}
&V_{xx} + V_{yy} = f(x,y), \quad (x,y) \in \Omega = (0,a)\times(0,b),\\
&V(x,0) = q_1(x), \quad V(x,b) = q_2(x),\\
&V(0,y) = p_1(y), \quad V(a,y) = (1 - y/b)\,q_1(a) + (y/b)\,q_2(a),
\end{aligned}
\tag{2.3}
\]
and
\[
\begin{aligned}
&u_{xx} + u_{yy} = 0, \quad (x,y) \in \Omega = (0,a)\times(0,b),\\
&u(x,0) = 0, \quad u(x,b) = 0,\\
&u(0,y) = 0, \quad u_x(0,y) = p(y) - V_x(0,y) \equiv g(y).
\end{aligned}
\tag{2.4}
\]
Here the function u(a, y) = r(y) is unknown:
\[
u(a,y) = r(y) \quad \text{(to be determined)}.
\tag{2.5}
\]
Then, the sought-for function is expressed as
\[
U(a,y) = r(y) + V(a,y).
\]
The problem (2.3) is a classical statement of the Dirichlet problem for Poisson’s equation and is solved uniquely by V(x,y) \in C(\bar{\Omega}) \cap C^2(\Omega). The normal derivative V_x(0,y) on the boundary x = 0 exists and is continuous [23]. Therefore, the original problem (2.1)–(2.2) is reduced to the initial-boundary value problem (2.4) for Laplace’s equation. Let us apply the quasi-solution method [9] to the problem (2.4)–(2.5). Namely, the required function r(y) will be found as the solution to the following minimisation problem:
\[
J(r) = \frac{1}{2}\int_0^b \bigl(u_x(0,y;r) - g(y)\bigr)^2\,dy + \frac{\beta}{2}\int_0^b r^2(s)\,ds \;\to\; \min_{r(\cdot)}.
\tag{2.6}
\]
Here, the derivative u_x(0,y;r) is obtained as follows: we define the function r(y), use it as the boundary condition at x = a, solve the Dirichlet problem for the function u(x,y), and find u_x(0,y). Here, β > 0 is a regularisation parameter [24].
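In this reduction, the Dirichlet problem (2.3) has to be solved once to produce V_x(0,y) and hence the datum g(y) = p(y) − V_x(0,y) of problem (2.4). The paper does not prescribe a particular solver for this preliminary step; the following minimal Python/NumPy sketch shows one possible finite-difference realisation, with illustrative placeholder data f, q1, q2, p1, p (these are assumptions for the example, not data from the paper).

```python
import numpy as np

# Minimal sketch (not the paper's method): solve the Dirichlet problem (2.3)
# for V by a five-point Jacobi iteration on a uniform grid and approximate
# V_x(0, y), so that g(y) = p(y) - V_x(0, y) of problem (2.4) can be formed.
# The data f, q1, q2, p1, p below are illustrative placeholders.

a, b = 1.0, 1.0
N = 101                                   # grid points in each direction
x = np.linspace(0.0, a, N)
y = np.linspace(0.0, b, N)
h = x[1] - x[0]

f  = lambda X, Y: np.zeros_like(X)        # right-hand side of Poisson's equation
q1 = lambda X: np.zeros_like(X)           # V(x, 0)
q2 = lambda X: np.zeros_like(X)           # V(x, b)
p1 = lambda Y: np.zeros_like(Y)           # V(0, y)
p  = lambda Y: np.sin(np.pi * Y)          # Cauchy datum U_x(0, y)

X, Y = np.meshgrid(x, y, indexing="ij")
F = f(X, Y)

V = np.zeros((N, N))                      # V[i, j] ~ V(x_i, y_j)
V[:, 0], V[:, -1] = q1(x), q2(x)
V[0, :] = p1(y)
V[-1, :] = (1.0 - y / b) * q1(a) + (y / b) * q2(a)

for _ in range(5000):                     # Jacobi sweeps for V_xx + V_yy = f
    V[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1]
                            + V[1:-1, 2:] + V[1:-1, :-2]
                            - h**2 * F[1:-1, 1:-1])

Vx0 = (V[1, :] - V[0, :]) / h             # one-sided difference for V_x(0, y)
g = p(y) - Vx0                            # datum g(y) of problem (2.4)
```

Any standard Poisson solver could replace the Jacobi loop; only the trace V_x(0, y) enters the inverse problem.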

We consider the following set of admissible functions r(y):
\[
r(y) \in C[0,b], \quad r(0) = 0, \quad r(b) = 0.
\]
Let us introduce the following adjoint problem:
\[
\begin{aligned}
&v_{xx} + v_{yy} = 0, \quad (x,y) \in \Omega = (0,a)\times(0,b),\\
&v(x,0) = 0, \quad v(x,b) = 0,\\
&v(0,y) = u_x(0,y;r) - g(y), \quad v(a,y) = 0.
\end{aligned}
\tag{2.7}
\]
The authors of [5] have shown that the solution to the minimisation problem (2.6) is expressed as
\[
r(y) = \beta^{-1} v_x(a,y).
\tag{2.8}
\]
Using the formula (2.8) in (2.4) as a boundary condition, we obtain the following coupled system of direct and adjoint problems, which must be solved simultaneously:
\[
u_{xx} + u_{yy} = 0, \quad v_{xx} + v_{yy} = 0, \quad (x,y) \in \Omega = (0,a)\times(0,b),
\tag{2.9}
\]
\[
\begin{aligned}
&v(x,0) = 0, \quad v(x,b) = 0, \quad u(x,0) = 0, \quad u(x,b) = 0,\\
&v(a,y) = 0, \quad u(0,y) = 0,\\
&\beta\,u(a,y) - v_x(a,y) = 0, \quad u_x(0,y;r) - v(0,y) = g(y).
\end{aligned}
\tag{2.10}
\]

3 The solution to the problem

The problem (2.9)–(2.10) is a linear one. As the Fourier method is useful for solving linear problems, we shall apply it. First, we follow the standard procedure of separation of variables. Then, we seek the function u(x, y) in the following form:
\[
u(x,y) = \sum_{k=1}^{\infty}\bigl[a_k\cosh(\pi k x/b) + b_k\sinh(\pi k x/b)\bigr]\sin(\pi k y/b).
\tag{3.1}
\]
Analogously, for v(x, y):
\[
v(x,y) = \sum_{k=1}^{\infty} c_k\sinh\bigl(\pi k(x-a)/b\bigr)\sin(\pi k y/b).
\tag{3.2}
\]
The representations (3.1), (3.2) automatically obey the conditions for u(x, y) and v(x, y) at the boundaries y = 0, y = b, and the boundary condition for the function v(a, y). Now, we find the coefficients of the series (3.1) and (3.2) so as to satisfy the conditions for u(0, y), u(a, y), v(0, y). Let the function g(y) be represented by its Fourier series, which exists under the above assumptions:
\[
g(y) = \sum_{k=1}^{\infty} g_k\sin(\pi k y/b).
\tag{3.3}
\]
Next, we substitute (3.1) into the boundary condition at x = 0. Then, we have
\[
u(0,y) = \sum_{k=1}^{\infty} a_k\sin(\pi k y/b) = 0.
\]
This yields
\[
a_k = 0, \quad k = 1, 2, \ldots
\]
Now, by writing (3.1) and (3.2) for x = 0 and x = a and taking derivatives, we obtain the following expressions:
\[
\begin{aligned}
v(0,y) &= -\sum_{k=1}^{\infty} c_k\sinh(\pi k a/b)\sin(\pi k y/b),\\
u_x(x,y) &= \sum_{k=1}^{\infty} b_k\frac{\pi k}{b}\cosh(\pi k x/b)\sin(\pi k y/b), \qquad
u_x(0,y) = \sum_{k=1}^{\infty} b_k\frac{\pi k}{b}\sin(\pi k y/b),\\
v_x(x,y) &= \sum_{k=1}^{\infty} c_k\frac{\pi k}{b}\cosh\bigl(\pi k(x-a)/b\bigr)\sin(\pi k y/b), \qquad
v_x(a,y) = \sum_{k=1}^{\infty} c_k\frac{\pi k}{b}\sin(\pi k y/b).
\end{aligned}
\tag{3.4}
\]
Next, we substitute the above formulas into the boundary conditions:
\[
\begin{aligned}
\beta u(a,y) - v_x(a,y) &= \sum_{k=1}^{\infty}\Bigl[b_k\beta\sinh(\pi k a/b) - c_k\frac{\pi k}{b}\Bigr]\sin(\pi k y/b) = 0,\\
u_x(0,y) - v(0,y) &= \sum_{k=1}^{\infty}\Bigl[b_k\frac{\pi k}{b} + c_k\sinh(\pi k a/b)\Bigr]\sin(\pi k y/b) = \sum_{k=1}^{\infty} g_k\sin(\pi k y/b).
\end{aligned}
\tag{3.5}
\]
Using (3.5), we obtain a linear algebraic system defining b_k and c_k:
\[
b_k\beta\sinh(\pi k a/b) - c_k\pi k/b = 0, \qquad b_k\pi k/b + c_k\sinh(\pi k a/b) = g_k.
\tag{3.6}
\]
The explicit formula for the solution is
\[
b_k = \frac{g_k\,\pi k/b}{\beta\sinh^2(\pi k a/b) + \pi^2 k^2/b^2}, \qquad
c_k = \frac{g_k\,\beta\sinh(\pi k a/b)}{\beta\sinh^2(\pi k a/b) + \pi^2 k^2/b^2}.
\tag{3.7}
\]
Now, we are ready to determine the sought-for function r(y) using the formula (2.8):
\[
r(y) = \beta^{-1} v_x(a,y)
= \beta^{-1}\sum_{k=1}^{\infty} c_k\frac{\pi k}{b}\sin(\pi k y/b)
= \sum_{k=1}^{\infty}\frac{g_k\sinh(\pi k a/b)}{\beta\sinh^2(\pi k a/b) + \pi^2 k^2/b^2}\cdot\frac{\pi k}{b}\sin(\pi k y/b).
\tag{3.8}
\]
However, we have to justify the convergence of the series (3.8). First, we estimate the rate of decrease of its coefficients. Taking into account that for sufficiently large values of k we have
\[
\sinh(\pi k a/b) \approx \exp(\pi k a/b)/2,
\]
we obtain, for large k,
\[
r_k \approx \frac{2\exp(-\pi k a/b)}{\beta}\,\frac{\pi k}{b}\,g_k.
\tag{3.9}
\]
The estimate above guarantees the convergence of the series (3.8) for each β > 0 because g(y) belongs to C[0,b] \subset L_2[0,b]. As a by-product of the formula (3.8), we obtain an estimate for r(y) in terms of g(y) as follows:
\[
\|r\| = \sqrt{\frac{b}{2}\sum_{k=1}^{\infty} r_k^2}
\le \frac{2\pi}{\beta b}\sqrt{\sum_{k=1}^{\infty}\frac{k^2 g_k^2}{\sinh^2(\pi k a/b)}}
\le \frac{2\pi}{\beta b}\sqrt{\sum_{k=1}^{\infty}\frac{k^2}{\sinh^2(\pi k a/b)}}\;\|g\|.
\tag{3.10}
\]
Here, we have used Hölder’s inequality. Because the last sum in the expression (3.10) is dominated by the integral
\[
\int_1^{\infty} x^2\sinh^{-2}(\pi x a/b)\,dx,
\]
we obtain the following estimate in the L_2-norm:
\[
\|r\| \le C(a,b)\,\frac{\|g\|}{\beta}.
\tag{3.11}
\]
Note that if the set of functions g(y) is restricted to a finite number of harmonics in (3.3), then the formula (3.8) covers the case β = 0 as well. It follows from [7] that, in this case, the system (2.9)–(2.10) has the only solution v \equiv 0, and the initial-boundary value problem for the function u(x, y) can be solved independently using the Fourier method as noted above. Then, the solution (3.8) is defined for β = 0 and is given by the formula
\[
r(y) = b\sum_{k=1}^{L}\frac{g_k\sinh(\pi k a/b)}{\pi k}\sin(\pi k y/b).
\]
The value of the functional is defined by a finite number of terms in (3.8) as well. The functional in this case vanishes; therefore, in the above-mentioned finite-dimensional case, the formula (3.8) gives the exact solution.
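For a numerical evaluation of (3.7)–(3.8), it is convenient to work directly with the Fourier coefficients g_k. The following short Python/NumPy sketch (illustrative names, not the author’s code) computes the coefficients of r(y) and sums the truncated series; the fraction in (3.8) is rewritten with sinh divided out so that very large k do not cause overflow.

```python
import numpy as np

# Sketch of the explicit quasi-solution (3.7)-(3.8): given the sine
# coefficients g_k of g(y), evaluate r(y) on a grid (illustrative code).

def recover_r(gk, beta, a=1.0, b=1.0, y=None):
    if y is None:
        y = np.linspace(0.0, b, 201)
    k = np.arange(1, len(gk) + 1)
    s = np.sinh(np.pi * k * a / b)
    # (3.8): r_k = g_k * s * (pi k / b) / (beta*s^2 + (pi k / b)^2),
    # written with s divided out to stay finite when sinh is very large.
    rk = gk * (np.pi * k / b) / (beta * s + (np.pi * k / b) ** 2 / s)
    return y, np.sin(np.pi * np.outer(y, k) / b) @ rk

# Example: a single harmonic g(y) = sin(pi*y/b).
y, r = recover_r(np.array([1.0]), beta=1e-6)
```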

We now find the expression for the residual functional corresponding to the obtained r(y). Using the last line of (2.10), we substitute (3.4) and (3.8) into (2.6) and integrate:
\[
J(r) = \frac{1}{2}\int_0^b v^2(0,y)\,dy + \frac{\beta}{2}\int_0^b r^2(s)\,ds
= \frac{b}{4}\sum_{k=1}^{\infty} c_k^2\sinh^2(\pi k a/b) + \frac{b}{4\beta}\sum_{k=1}^{\infty} c_k^2\frac{\pi^2 k^2}{b^2}
= \frac{b}{4}\sum_{k=1}^{\infty} c_k^2\Bigl(\sinh^2(\pi k a/b) + \frac{\pi^2 k^2}{\beta b^2}\Bigr).
\]
Finally, by using the expressions (3.7) for c_k, we obtain
\[
J(r) = \frac{b}{4}\sum_{k=1}^{\infty}\frac{\beta g_k^2\sinh^2(\pi k a/b)}{\beta\sinh^2(\pi k a/b) + \pi^2 k^2/b^2}.
\]
If β is positive, we can rewrite the formula above in the following form:
\[
J(r) = \frac{b}{4}\sum_{k=1}^{\infty}\frac{g_k^2}{1 + \pi^2 k^2/\bigl(\beta b^2\sinh^2(\pi k a/b)\bigr)}.
\tag{3.12}
\]
As the terms on the right-hand side of (3.12) are dominated by \frac{b}{4} g_k^2, this series converges for β > 0 on the given set of functions g(y).
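The value of the functional at the quasi-solution can likewise be evaluated directly from the Fourier data. A minimal sketch under the same assumptions (β > 0, coefficients g_k given; illustrative function name):

```python
import numpy as np

# Sketch: value of the residual functional at the quasi-solution via (3.12).
def residual_value(gk, beta, a=1.0, b=1.0):
    k = np.arange(1, len(gk) + 1)
    s2 = np.sinh(np.pi * k * a / b) ** 2
    # J(r) = (b/4) * sum g_k^2 / (1 + pi^2 k^2 / (beta * b^2 * s2))
    return (b / 4.0) * np.sum(gk ** 2 / (1.0 + (np.pi * k / b) ** 2 / (beta * s2)))
```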

4 Synthesis of measured data

To solve an inverse problem, we need a synthetic function g(y). First, we define a function r(y) and then solve the direct problem. Then, by using this solution, we calculate its derivative ux(0,y) and use it as additional information in the inverse problem. We solve the direct problem numerically using the Fourier method with a finite number of harmonics.

Let the function r(y) be represented by its Fourier series:
\[
r(y) = \sum_{k=1}^{\infty} r_k\sin(\pi k y/b).
\tag{4.1}
\]
The solution to the direct problem is found using the Fourier method and is given by the formula
\[
u(x,y) = \sum_{k=1}^{\infty} r_k\frac{\sinh(\pi k x/a)}{\sinh(\pi k)}\sin(\pi k y/b).
\]
Then its first derivative is
\[
u_x(x,y) = \sum_{k=1}^{\infty}\frac{\pi k}{a}\, r_k\frac{\cosh(\pi k x/a)}{\sinh(\pi k)}\sin(\pi k y/b).
\]
Therefore, the synthetic function g(y) can be written in the form
\[
g(y) = u_x(0,y) = \sum_{k=1}^{L}\frac{\pi k}{a}\frac{r_k}{\sinh(\pi k)}\sin(\pi k y/b) \equiv \sum_{k=1}^{L} g_k\sin(\pi k y/b).
\tag{4.2}
\]
Thus, we calculate the Fourier coefficients of the function r(y), find the function g(y), then forget the function r(y), and recover it as if it were unknown.
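A small sketch of this synthesis step (Python/NumPy; the quadrature rule and function names are illustrative assumptions, with a = b = 1 as in the experiments below): compute the coefficients r_k for a chosen r(y) and convert them to g_k by (4.2).

```python
import numpy as np

# Sketch of the data synthesis: sine coefficients of r(y) and the
# corresponding g_k from (4.2) (illustrative implementation).

def sine_coefficients(r_func, L, b=1.0, n=2001):
    """Approximate r_k = (2/b) * int_0^b r(y) sin(pi k y / b) dy (trapezoid rule)."""
    y = np.linspace(0.0, b, n)
    k = np.arange(1, L + 1)
    basis = np.sin(np.pi * np.outer(k, y) / b)          # shape (L, n)
    return (2.0 / b) * np.trapz(basis * r_func(y), y, axis=1)

def synthesize_g(r_func, L, a=1.0, b=1.0):
    rk = sine_coefficients(r_func, L, b)
    k = np.arange(1, L + 1)
    gk = np.pi * k * rk / (a * np.sinh(np.pi * k))      # formula (4.2)
    return rk, gk

rk, gk = synthesize_g(lambda y: np.sin(10 * np.pi * y), L=40)
```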

4.1 Numerical method

Let us describe the numerical algorithm. In our numerical simulations we take the values a = 1 and b = 1 only, and consider the unknown function at the grid points y_i = i/200, i = 0, 1, ..., 200. The main steps of the numerical method are:

1. Data synthesis:

   a) Define the function r(y) and a sufficiently large number L of harmonics. In our simulations, we have tested L = 25, 40, 50, 100.

   b) Calculate the coefficients r_k, k = 1, ..., L, of the series (4.1) numerically.

   c) Find g_k by (4.2): g_k = \pi k\, r_k\,(a\sinh(\pi k))^{-1}.

2. Choose the number M \le L of harmonics for the function r(y) and calculate r(y_i) using (3.8), limited to a finite number M of harmonics:
\[
r(y_i) = \sum_{k=1}^{M}\frac{g_k\sinh(\pi k a/b)}{\beta\sinh^2(\pi k a/b) + \pi^2 k^2/b^2}\cdot\frac{\pi k}{b}\sin(\pi k y_i/b).
\]

In our numerical simulations, we have used different values of L. It turns out that, for the given cases, the difference between the synthetic functions g(y) obtained with L = 40 and with L > 40 can be neglected; therefore, we use L = 40. A compact end-to-end sketch of these steps is given below.
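The sketch below (Python/NumPy, illustrative rather than the author’s code, with a = b = 1 and the grid y_i = i/200 assumed above) synthesises the data for r(y) = sin(10πy), recovers it from the first M harmonics by (3.8), and reports the relative error in the C-norm.

```python
import numpy as np

# End-to-end sketch of the algorithm above (a = b = 1, y_i = i/200).

a = b = 1.0
y = np.arange(201) / 200.0

def recover(gk, M, beta):
    """Formula (3.8) truncated to M harmonics, rearranged to avoid overflow."""
    k = np.arange(1, M + 1)
    s = np.sinh(np.pi * k * a / b)
    rk = gk[:M] * (np.pi * k / b) / (beta * s + (np.pi * k / b) ** 2 / s)
    return np.sin(np.pi * np.outer(y, k) / b) @ rk

# Data synthesis for the single harmonic r(y) = sin(10*pi*y), i.e. r_10 = 1.
L = 40
rk = np.zeros(L)
rk[9] = 1.0
k = np.arange(1, L + 1)
gk = np.pi * k * rk / (a * np.sinh(np.pi * k))          # formula (4.2)

r_exact = np.sin(10 * np.pi * y)
r_num = recover(gk, M=25, beta=1e-30)
delta = np.max(np.abs(r_exact - r_num)) / np.max(np.abs(r_exact))
print(f"relative error in the C-norm: {delta:.3e}")
```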

Due to the linearity of the problem (2.9)–(2.10), and according to (3.8), any linear combination
\[
g(y) = \sum_{j=1}^{J}\alpha_j G_j(y)
\]
corresponds to the linear combination
\[
r(y) = \sum_{j=1}^{J}\alpha_j R_j(y).
\]
Therefore, in our investigation, we can consider the case of a single harmonic such as
\[
r(y) = \sin(\pi k y/b).
\]
By (4.2), the k-th harmonic is present in the function g(y) with the coefficient
\[
\frac{\pi k}{\sinh(\pi k)},
\]
which decreases exponentially with respect to k. This yields a loss of quality in the result when the frequency k increases. Analysing the formula (3.8), we see that the coefficient corresponding to the frequency number k is equal to
\[
\frac{g_k\,\pi k}{\beta\sinh(\pi k)} = \frac{\pi^2 k^2\, r_k}{\beta a\sinh^2(\pi k)},
\]
and it decreases like
\[
\frac{\pi^2 k^2\exp(-2k\pi)}{\beta}.
\]
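This attenuation can be made explicit numerically; the following illustrative check (assuming a = b = 1) prints the factor πk/sinh(πk) for a few frequencies.

```python
import numpy as np

# Attenuation of the k-th harmonic of r(y) in the data g(y): pi*k / sinh(pi*k).
for k in (1, 5, 10, 15, 21):
    print(k, np.pi * k / np.sinh(np.pi * k))
```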

Figure 1. The recovery of the single harmonic r(y) = sin(10πy) for different β.

Therefore, to take these harmonics into account, we need to use a small value of β for high k: the higher the k we consider, the smaller the parameter β needs to be. For instance, for k = 10 we have obtained an admissible result only for β < 10^{-30}. The numerical results for different values of β and k = 10 are compared in Figure 1.
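This behaviour can be checked directly from (3.8) and (4.2): with r_k = 1 and a = b = 1, the recovered amplitude of sin(10πy) equals (πk)²/(β sinh²(πk) + (πk)²). A small illustrative check (the chosen β values are examples, not the paper’s table):

```python
import numpy as np

# Illustrative check: recovered amplitude of sin(10*pi*y) versus beta
# (exact amplitude is 1; a = b = 1, r_10 = 1).
k = 10
s2 = np.sinh(np.pi * k) ** 2
for beta in (1e-10, 1e-20, 1e-25, 1e-30, 1e-40):
    amplitude = (np.pi * k) ** 2 / (beta * s2 + (np.pi * k) ** 2)
    print(f"beta = {beta:.0e}:  amplitude = {amplitude:.6f}")
```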

Table 1 contains the corresponding values of the parameter β, the frequencies, the residual errors, and the relative errors in the case r(y) = sin(πky). Here, the relative error is defined as
\[
\delta = \|r_{\mathrm{exact}} - r_{\mathrm{recovered}}\|_C \,/\, \|r_{\mathrm{exact}}\|_C.
\]
When 16 ≤ k ≤ 20, we need to take β close to machine precision. However, when we take k > 20, the method does not work. For example, the result for k = 21 reveals only qualitative agreement with the exact solution (see Figure 2).

Table 1. Harmonic numbers k, corresponding admissible values of β, and relative errors δ.

Figure 2. The recovery of the harmonic r(y) = sin(21πy) for β = 10^{-53}.

Table 2. Relative errors δ for different values of the parameter β in the cases of a discontinuous (c1) and a smooth (c2) function r(y).

Figure 3. The influence of the regularisation parameter β and the number of harmonics M on the solution in the case of a discontinuous r(y).

Figure 4. The influence of the regularisation parameter β on the quality of the solution for r(y) = 1 - 4(y - 0.5)^2 - sin(6πy).

Further, more complicated cases of the function r(y) are considered: we take both well-behaved functions r(y) and a discontinuous one. The relative errors for these two cases are compared in Table 2. It turns out that, due to round-off errors, small values of β yield numerical instability for M > 20; the use of β < 10^{-15} is not worthwhile (Figure , left). In Figure , for the sake of clarity, the graphs corresponding to different parameters are shifted vertically.

Our numerical simulations show that, in the case of strictly positive β, we need to limit the number of harmonics to no more than 15–18. Only in this case is the required function recovered with satisfactory quality.

The case β = 0 is considered as well. We recover the solution using the formula (3.7). Acceptable results are obtained for M ≤ 13 only.

The results in the case of a discontinuous r(y) with β = 0 and M = 11 are similar to those with β = 10^{-30} and M = 25. The corresponding graphs are depicted in Figure 3. However, when the function r(y) is a linear combination of no more than M harmonics, the quality of the recovery is high enough.

Numerical experiments show that the use of a small β ≈ 10^{-15} is preferable to β = 0. From a practical point of view, if, by analysing the measured data g(y) and other reasonable additional information, we estimate that the contribution of the high harmonics with k > 15 can be neglected, then we can choose the parameter β ≈ 10^{-15} (Figure 4).

5 Conclusions

In our opinion, the use of the Fourier method is an efficient way to solve the given problem. We can choose either to use it directly or to apply it within the quasi-solution approach. Numerical simulations show that the use of a small β > 0 leads to better results than applying the Fourier method directly with β = 0. In addition, the case β > 0 lets us consider a greater number of harmonics. We have determined admissible values of the main parameters, such as the harmonic number and the regularisation parameter. It turns out that the most useful values of β lie above 10^{-15}–10^{-16}, and the most useful range for M is 15–18.

Acknowledgments

The author is grateful to Professors M. Otelbaev, T.S. Kalmenov, B.E. Kanguzhin, and M.A. Sadybekov and to the other participants of the Fundamental Mathematics Seminar in Almaty, Kazakhstan, for their attention to this work and for valuable discussions. The author is also grateful to the anonymous reviewers for their valuable comments that helped to improve the manuscript. This work has been supported by the Ministry of Education and Science of the Republic of Kazakhstan, grant number 1631-29.09.2012.

References

  [1] O.M. Alifanov, Inverse Heat Transfer Problems, Springer, Berlin, 1994.
  [2] E.J. Kassab, E. Divo, and J.S. Kapat, Multi-dimensional heat flux reconstruction using narrow-band thermochromic liquid crystal thermography, Inverse Probl. Sci. Eng. 9(7) (2001), pp. 537–559.
  [3] L. Bourgeois and J. Darde, A duality-based method of quasi-reversibility to solve the Cauchy problem in the presence of noisy data, Inverse Probl. 26 (2010), p. 095016.
  [4] Hui Cao, M.V. Klibanov, and S.V. Pereverzev, A Carleman estimate and the balancing principle in the quasi-reversibility method for solving the Cauchy problem for the Laplace equation, Inverse Probl. 25 (2009), p. 035005.
  [5] S. Kabanikhin, M.A. Bektemesov, A.T. Ayapbergenova, and D.V. Nechaev, Optimization methods of solving continuation problems, Vychisl. Tekhnol. (Comput. Tech. SB RAS) 9 (2004), pp. 50–60.
  [6] V.A. Kozlov, A.F. Fomin, and V.G. Maz'ja, An iterative method for solving the Cauchy problem for elliptic equations, USSR Comput. Math. Math. Phys. 31 (1991), pp. 45–52.
  [7] M.M. Lavrent'ev, On the Cauchy problem for the Laplace equation, Izv. Akad. Nauk SSSR, Ser. Mat. 20 (1956), pp. 819–842.
  [8] L.E. Payne, Bounds in the Cauchy problem for the Laplace equation, Arch. Ration. Mech. Anal. 5 (1960), pp. 35–45.
  [9] V.K. Ivanov, On ill-posed problems, Mat. Sbornik 61(2) (1963), pp. 211–223 (in Russian).
  [10] N. Zabaras and J. Liu, An analysis of two-dimensional linear inverse heat transfer problems using an integral method, Numer. Heat Transf. 13 (1988), pp. 527–533.
  [11] F.P. Vasil'ev, Methods for Solving Extremal Problems, Nauka, Moscow, 1981.
  [12] A. Hasanov, P. Duchateau, and B. Pektas, An adjoint problem approach and coarse-fine mesh method for identification of the diffusion coefficient in a linear parabolic equation, J. Inverse Ill-Posed Probl. 14(5) (2006), pp. 435–463.
  [13] L. Beilina, M.V. Klibanov, and M.Yu. Kokurin, Adaptivity with relaxation for ill-posed problems and global convergence for a coefficient inverse problem, J. Math. Sci. 167(3) (2009), pp. 279–325.
  [14] L. Beilina and M.V. Klibanov, Approximate Global Convergence and Adaptivity for Coefficient Inverse Problems, Springer, New York, 2012.
  [15] A.I. Egorov, On optimal control of processes in distributed objects, J. Appl. Math. Mech. 27(4) (1963), pp. 1045–1058.
  [16] A.I. Egorov, Optimal Control of Thermal and Diffusion Processes, Nauka, Moscow, 1978 (in Russian).
  [17] L.V. Petuhov and V.F. Troitskiy, Variational optimization problems, Appl. Math. Mech. 34(2) (1972), pp. 578–588 (in Russian).
  [18] V.I. Plotnikov, Necessary and sufficient optimality conditions and uniqueness conditions for optimizing functions for control systems of general type, Math. USSR Izv. 6(3) (1972), pp. 649–676 (in Russian).
  [19] L.S. Pontryagin, Selected Works, Vol. 4: The Mathematical Theory of Optimal Processes, CRC Press, New York, 1987.
  [20] O.G. Provorova, On a question of control process described by a quasi-linear parabolic equation, Upravlyaemye Sistemy, Nauka, Siberian Branch, Novosibirsk, 1973 (in Russian).
  [21] B.D. Tajibaev, On optimality conditions in one control problem, Upravlyaemye Sistemy, Nauka, Siberian Branch, Novosibirsk, 1988 (in Russian).
  [22] V.S. Belonosov, Interior estimates for solutions to quasiparabolic systems, Siber. Math. J. 37(1) (1996), pp. 17–32.
  [23] O.A. Ladyzenskaja, V.A. Solonnikov, and N.N. Ural'ceva, Linear and Quasi-linear Equations of Parabolic Type, Translations of Mathematical Monographs, Vol. 23, AMS, Providence, RI, 1968.
  [24] A. Tikhonov and V. Arsenin, Solution of Ill-Posed Problems, John Wiley, New York, 1977.
