
Stable explicit stepwise marching scheme in ill-posed time-reversed 2D Burgers' equation

Pages 1672-1688 | Received 20 Apr 2018, Accepted 01 Sep 2018, Published online: 21 Sep 2018

ABSTRACT

This paper constructs an unconditionally stable explicit difference scheme, marching backward in time, that can solve a limited, but important class of time-reversed 2D Burgers' initial value problems. Stability is achieved by applying a compensating smoothing operator at each time step to quench the instability. This leads to a distortion away from the true solution. However, in many interesting cases, the cumulative error is sufficiently small to allow for useful results. Effective smoothing operators based on (−Δ)^p, with real p > 2, can be efficiently synthesized using FFT algorithms, and this may be feasible even in non-rectangular regions. Similar stabilizing techniques were successfully applied in other ill-posed evolution equations. The analysis of numerical stability is restricted to a related linear problem. However, extensive numerical experiments indicate that such linear stability results remain valid when the explicit scheme is applied to a significant class of time-reversed nonlinear 2D Burgers' initial value problems. As illustrative examples, the paper uses fictitiously blurred 256×256 pixel images, obtained by using sharp images as initial values in well-posed, forward 2D Burgers' equations. Such images are associated with highly irregular underlying intensity data that can seriously challenge ill-posed reconstruction procedures. The stabilized explicit scheme, applied to the time-reversed 2D Burgers' equation, is then used to deblur these images. Examples involving simpler data are also studied. Successful recovery from severely distorted data is shown to be possible, even at high Reynolds numbers.


1. Introduction

Following the 1951 seminal paper by Cole [1], a large literature on the 2D Burgers' equation has developed, spawned by significant applications in science and engineering. Numerous references may be found in recent papers [2–7]. As is often stressed, the 2D Burgers' coupled system is a useful simplification of the 2D incompressible Navier-Stokes equations, providing valuable insight into the behaviour of complex flows, together with the expected behaviour of possible numerical methods for computing such flows. Accordingly, numerical methods for the well-posed forward Burgers' initial value problem have been actively investigated.

On the other hand, very little seems known about possible numerical computation of the ill-posed time-reversed 2D Burgers' equation. The backward problem is of considerable interest, as it may enable computation of initial conditions that can produce desired flow patterns, as well as reconstructing the genesis of undesired flows. Such inverse design problems are actively being studied for the 1D Burgers' equation [8–10]. The inverse Burgers' problem also plays an important role in studies of data assimilation for nonlinear geophysical fluid dynamics [11–14]. Error estimates of logarithmic convexity type have been obtained for backward recovery in the 1D problem [15, 16], but are not yet known in the 2D Burgers' problem. For the Navier-Stokes equations, backward error estimates are given in [17, 18]. Further information about ill-posed continuation in partial differential equations may be found in [19–22].

The present self-contained paper constructs an unconditionally stable explicit difference scheme, marching backward in time, that can solve an important but limited class of time-reversed 2D Burgers' initial value problems. Stability is achieved by applying a compensating smoothing operator at each time step to quench the instability. Eventually, this leads to a distortion away from the true solution. However, in many interesting cases, the cumulative effect of these errors is sufficiently small to allow for useful results. Effective smoothing operators based on (−Δ)^p, with real p > 2, can be efficiently synthesized using FFT algorithms. Similar stabilizing techniques were successfully applied in other ill-posed evolution equations [23–26]. As was the case in these papers, the analysis of numerical stability given in Sections 3 and 4 below is restricted to a related linear problem. However, extensive numerical experiments indicate that these linear stability results remain valid when the explicit scheme is applied to a significant class of time-reversed nonlinear Burgers' initial value problems.

Following [1], we define the Reynolds number RE as follows:

(1) RE = (√A/ν) U_max,

where A is the area of the flow domain, ν is the kinematic viscosity, and U_max is the maximum value of the initial velocity. In many numerical experiments on the well-posed forward 2D Burgers' equation, the objective is to demonstrate accurate calculation of an exact solution, known analytically together with its specific initial and time-dependent boundary values. Typically, with 100 ≤ RE ≤ 1000, computations are carried out up to some fixed time T_max ≥ 1. In the present time-reversed context, where only approximate values are generally available at some positive time T_max, the objective is to demonstrate useful backward reconstruction from noisy data. As is well known, there is a necessary uncertainty in ill-posed backward recovery from imprecise data at T_max. For the 1D Burgers' equation, the error estimates at t = 0 given in [15, 16] contain a factor K = exp{RE × T_max}. For the Navier-Stokes equations, the corresponding uncertainty may be significantly larger [17, 18]. Taken together, these results indicate that successful backward recovery in the 2D Burgers' equation, with 100 ≤ RE ≤ 1000, may not be feasible unless T_max ≪ 1.

In Section 5 below, three instructive numerical experiments are discussed. In these experiments, the solutions have the value zero on the domain boundary. Interesting initial values are propagated forward up to a time T_max ≪ 1, by numerically solving the well-posed forward 2D Burgers' equation. Despite the small value of T_max, considerable distortion of the initial values occurs. These limited precision numerical values are then used as input data at T_max for the backward computation with the stabilized explicit scheme. In the first experiment, relatively simple data are used, involving two Gaussians on the unit square, with T_max = 2.5×10^{−3} and RE = 50,000. The second experiment involves images on the unit square, defined by highly non-smooth intensity data. Here, T_max = 2.5×10^{−4} and RE = 3400. The last experiment, with T_max = 2.5×10^{−4} and RE = 2500, involves non-smooth image data in an elliptical region.

2. 2D Burgers' equation

Let Ω be the unit square in R² with boundary ∂Ω. Let ⟨ , ⟩ and ‖ ‖₂, respectively, denote the scalar product and norm on L²(Ω). With ν > 0 the kinematic viscosity, we consider the following 2D Burgers' system for (x,y) ∈ Ω,

(2) u_t = L₁u ≡ νΔu − uu_x − vu_y, 0 < t ≤ T_max,
    v_t = L₂v ≡ νΔv − uv_x − vv_y, 0 < t ≤ T_max,

together with periodic boundary conditions on ∂Ω, and the initial values

(3) u(x,y,0) = u₀(x,y), v(x,y,0) = v₀(x,y), (x,y) ∈ Ω.

The well-posed forward initial value problem in Equation (2) becomes ill-posed if the time direction is reversed, and one wishes to recover u(x,y,0), v(x,y,0), from given approximate values for u(x,y,T_max), v(x,y,T_max). We contemplate such time-reversed computations by allowing for possible negative time steps Δt in the explicit time-marching finite difference scheme described below. With a given positive integer N, let |Δt| = T_max/(N+1) be the time step magnitude, and let ũ^n(x,y) ≡ ũ(x,y,nΔt), n = 0,1,…,N+1, denote the intended approximation to u(x,y,nΔt), and likewise for ṽ^n(x,y). It is helpful to consider Fourier series expansions for ũ^n(x,y), ṽ^n(x,y), on the unit square Ω,

(4) ũ^n(x,y) = Σ_{j,k=−∞}^{∞} ũ^n_{j,k} exp{2πi(jx+ky)},

with Fourier coefficients {ũ^n_{j,k}} given by

(5) ũ^n_{j,k} = ∫_Ω ũ^n(x,y) exp{−2πi(jx+ky)} dx dy,

and similarly for ṽ^n(x,y). With given fixed ω > 0 and p > 1, define λ_{j,k}, σ_{j,k}, as follows:

(6) λ_{j,k} = 4π²ν(j² + k²), σ_{j,k} = exp{−2ω|Δt| λ_{j,k}^p}.

For any f(x,y) ∈ L²(Ω), let {f_{j,k}} be its Fourier coefficients as in Equation (5). Using Equation (6), define the linear operators P and S as follows:

(7) Pf = Σ_{j,k=−∞}^{∞} λ_{j,k}^p f_{j,k} exp{2πi(jx+ky)}, f ∈ L²(Ω),
    Sf = Σ_{j,k=−∞}^{∞} σ_{j,k} f_{j,k} exp{2πi(jx+ky)}, f ∈ L²(Ω).

As in [23–26], the operator S is used as a stabilizing smoothing operator at each time step, in the following explicit time-marching finite difference scheme for the system in Equation (2), in which only the time variable is discretized, while the space variables remain continuous,

(8) ũ^{n+1} = Sũ^n + Δt S L₁ũ^n, ṽ^{n+1} = Sṽ^n + Δt S L₂ṽ^n, n = 0,1,…,N.

The analyses presented in Sections 3 and 4 below are relevant to the above semi-discrete problem. In Section 5, where actual numerical computations are discussed, the space variables are also discretized, and FFT algorithms are used to synthesize the smoothing operator S.
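On a uniform M×M grid covering the unit square, the multipliers in Equations (6) and (7) can be applied with standard FFT routines. The following is a minimal sketch, assuming numpy; the function names and parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def smoothing_multiplier(M, nu, omega, p, dt):
    """Fourier multiplier sigma_{j,k} = exp(-2*omega*|dt|*lambda_{j,k}^p),
    with lambda_{j,k} = 4*pi^2*nu*(j^2 + k^2), as in Equation (6),
    on an M x M periodic grid covering the unit square."""
    freqs = np.fft.fftfreq(M, d=1.0 / M)               # integer wavenumbers j, k
    j, k = np.meshgrid(freqs, freqs, indexing="ij")
    lam = 4.0 * np.pi**2 * nu * (j**2 + k**2)
    return np.exp(-2.0 * omega * abs(dt) * lam**p)

def apply_S(f, sigma):
    """Apply the smoothing operator S of Equation (7) to real grid data f."""
    return np.real(np.fft.ifft2(sigma * np.fft.fft2(f)))

# Example: smooth a random grid function with illustrative parameter values.
sigma = smoothing_multiplier(M=256, nu=0.075, omega=3.0e-9, p=3.25, dt=2.5e-9)
f_smooth = apply_S(np.random.rand(256, 256), sigma)
```

With the multiplier computed once, each application of S in Equation (8) costs two FFTs per velocity component.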

3. Fourier stability analysis in linearized problem

As in [23–26], useful insight into the behaviour of the nonlinear scheme in Equation (8) can be gained by analysing a related linear problem with constant coefficients. With positive constants a, b, consider the initial value problem on the unit square Ω,

(9) u_t = Lu ≡ νΔu − au_x − bu_y, 0 < t ≤ T_max, u(x,y,0) = u₀(x,y),

together with periodic boundary conditions on ∂Ω. Unlike the case in Equation (8), the stabilized marching scheme

(10) ũ^{n+1} = Sũ^n + Δt S Lũ^n, n = 0,1,…,N,

with the linear operator L, is susceptible to Fourier analysis. If Lũ^n = f^n(x,y), then the Fourier coefficients {f^n_{j,k}} satisfy f^n_{j,k} = g_{j,k} ũ^n_{j,k}, where, with λ_{j,k} as in Equation (6),

(11) g_{j,k} = −{λ_{j,k} + 2πi(aj + bk)}.

Let R be the linear operator R = S + Δt S L. Then,

(12) ũ^{n+1} = Rũ^n = Σ_{j,k=−∞}^{∞} ũ^n_{j,k} {1 + Δt g_{j,k}} σ_{j,k} exp{2πi(jx+ky)},
    ‖ũ^{n+1}‖₂² = ‖Rũ^n‖₂² ≤ Σ_{j,k=−∞}^{∞} |ũ^n_{j,k}|² {1 + |Δt||g_{j,k}|}² σ_{j,k}²,

on using Parseval's formula.

Lemma 3.1

Let λ_{j,k}, σ_{j,k}, be as in Equation (6), and let g_{j,k} be as in Equation (11). Choose a positive integer J such that, with λ_J ≡ 4π²νJ, we have

(13) max_{(j²+k²) ≤ J} {|g_{j,k}|} ≤ 2λ_J, |g_{j,k}| ≤ 2λ_{j,k} for (j²+k²) > J.

With p > 1, choose ω ≥ (λ_J)^{1−p} in Equation (6). Then,

(14) σ_{j,k} {1 + |Δt||g_{j,k}|} ≤ 1 + 2|Δt|λ_J.

Hence, from Equation (12), ‖R‖₂ ≤ 1 + 2|Δt|λ_J, and

(15) ‖ũ^n‖₂ = ‖R^n u^0‖₂ ≤ exp{2n|Δt|λ_J} ‖u^0‖₂, n = 1,2,…,N+1.

Therefore, with this choice of (ω,p), the explicit linear marching scheme in Equation (10) is stable.

Proof.

We first show how to find a positive integer J such that Equation (13) is valid. We have |g_{j,k}|² = λ_{j,k}² + 4π²(aj + bk)² ≤ λ_{j,k}² + (2c²/ν)λ_{j,k}, where 0 < c = max(a,b). Choose a positive integer J such that λ_J = 4π²νJ > (2c²/ν). Then, for all (j,k), |g_{j,k}|² ≤ λ_{j,k}² + λ_J λ_{j,k}, which implies Equation (13). Next, the inequality in Equation (14) is valid whenever (j²+k²) ≤ J, since σ_{j,k} ≤ 1. For (j²+k²) > J, we have λ_J < λ_{j,k} and |g_{j,k}| ≤ 2λ_{j,k}. Hence

(16) σ_{j,k} = exp{−2ω|Δt| λ_{j,k}^p} ≤ exp{−2ω|Δt| λ_{j,k} λ_J^{p−1}} ≤ exp{−2|Δt| λ_{j,k}},

since ωλ_J^{p−1} ≥ 1. Also, exp{−2|Δt|λ_{j,k}} ≤ (1 + 2|Δt|λ_{j,k})^{−1}, since 1 + x ≤ e^x for real x. Hence, with |g_{j,k}| ≤ 2λ_{j,k} for (j²+k²) > J, we find σ_{j,k}(1 + |Δt||g_{j,k}|) ≤ 1. Next, using Equation (14) in Equation (12) leads to ‖ũ^n‖₂ ≤ (1 + 2|Δt|λ_J)‖ũ^{n−1}‖₂, which implies Equation (15). QED.
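The choices of J, λ_J and ω in the lemma are easy to verify numerically. The short check below, a sketch assuming numpy and illustrative values of a, b, ν, Δt and p, evaluates the amplification factor σ_{j,k}(1 + |Δt||g_{j,k}|) over a block of Fourier modes and confirms the bound in Equation (14).

```python
import numpy as np

# Illustrative parameters for the linearized problem (9); not values from the paper.
a, b, nu = 1.0, 1.0, 0.1
dt, p = 1.0e-4, 3.0

c = max(a, b)
J = int(np.ceil(c**2 / (2.0 * np.pi**2 * nu**2))) + 1   # ensures lambda_J > 2c^2/nu
lam_J = 4.0 * np.pi**2 * nu * J
omega = lam_J**(1.0 - p)                                 # omega >= lambda_J^(1-p)

j, k = np.meshgrid(np.arange(-64, 65), np.arange(-64, 65), indexing="ij")
lam = 4.0 * np.pi**2 * nu * (j**2 + k**2)
g_abs = np.sqrt(lam**2 + 4.0 * np.pi**2 * (a * j + b * k)**2)   # |g_{j,k}|, Eq. (11)
sigma = np.exp(-2.0 * omega * abs(dt) * lam**p)                  # Eq. (6)

amplification = sigma * (1.0 + abs(dt) * g_abs)
print(amplification.max(), "<=", 1.0 + 2.0 * abs(dt) * lam_J)    # Eq. (14) holds
```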

For functions v(x,y,t) on Ω×[0,T_max], define the norm |||v|||_{2,∞} as follows:

(17) |||v|||_{2,∞} ≡ sup_{0 ≤ t ≤ T_max} {‖v(·,t)‖₂}.

Lemma 3.2

Let u^n(x,y) ≡ u(x,y,nΔt) be the exact solution of Equation (9). Let ω, p, λ_{j,k}, σ_{j,k}, be as in Equation (6). Let P and S be as in Equation (7), and let L be the linear operator in Equation (9). Then, u^{n+1} = u^n + ΔtLu^n + τ^n, where τ^n is the truncation error. With the norm definition in Equation (17), and 0 ≤ n ≤ N,

(18) ‖τ^n‖₂ ≤ (1/2)(Δt)² |||L²u|||_{2,∞},
    ‖u^n − Su^n‖₂ ≤ 2ω|Δt| |||Pu|||_{2,∞},
    |Δt| ‖Lu^n − SLu^n‖₂ ≤ 2ω(Δt)² |||PLu|||_{2,∞}.

Proof.

The inequality for the truncation error τ^n in Equation (18) follows from a truncated Taylor series expansion. Using the inequality 1 − e^{−x} ≤ x for all real x, together with Parseval's formula, we have

(19) ‖u^n − Su^n‖₂² = Σ_{j,k=−∞}^{∞} |u^n_{j,k}|² (1 − σ_{j,k})² ≤ (2ω|Δt|)² |||Pu|||²_{2,∞}.

This proves the second inequality in Equation (18). The last inequality is a corollary of the second. QED.

In Lemma 3.1, the finite difference approximation ũ^n(x,y) ≡ ũ(x,y,nΔt) satisfies Equation (10), whereas the exact solution u^n(x,y) ≡ u(x,y,nΔt) of Equation (9) satisfies u^{n+1} = u^n + ΔtLu^n + τ^n, where τ^n is the truncation error. We need to estimate the error w^n(x,y) = u^n(x,y) − ũ^n(x,y), n = 0,1,…,N+1.

Theorem 3.1

With Δt > 0, let u^n(x,y) be the unique solution of Equation (9) at t = nΔt. Let ũ^n(x,y) be the corresponding solution of the forward explicit scheme in Equation (10), and let p, λ_J, ω, be as in Lemma 3.1. If w^n(x,y) = u^n(x,y) − ũ^n(x,y) denotes the error at t = nΔt, n = 1,2,…,N+1, we have

(20) ‖w^n‖₂ ≤ e^{2tλ_J} ‖w^0‖₂ + ω{(e^{2tλ_J} − 1)/λ_J} |||Pu|||_{2,∞} + {(e^{2tλ_J} − 1)/(2λ_J)} {2ωΔt |||PLu|||_{2,∞} + (Δt/2) |||L²u|||_{2,∞}}.

Proof.

With

(21) h^n = τ^n + (u^n − Su^n) + Δt(Lu^n − SLu^n),

let R be the linear operator defined in Equation (12). Then, u^{n+1} = Ru^n + h^n, while ũ^{n+1} = Rũ^n. Therefore

(22) w^{n+1} = Rw^n + h^n = R^{n+1}w^0 + Δt Σ_{j=0}^{n} R^{n−j} {h^j/(Δt)}.

Let |||h|||_{2,∞} ≡ max_{0 ≤ n ≤ N+1} {‖h^n‖₂}. Using Lemma 3.1, and letting t = (n+1)Δt,

(23) ‖w^{n+1}‖₂ ≤ e^{2tλ_J} ‖w^0‖₂ + {|||h|||_{2,∞}/Δt} Δt Σ_{j=0}^{n} ‖R^{n−j}‖₂
    ≤ e^{2tλ_J} ‖w^0‖₂ + {|||h|||_{2,∞}/Δt} ∫₀^t e^{2λ_J(t−u)} du
    = e^{2tλ_J} ‖w^0‖₂ + {|||h|||_{2,∞}/Δt} (e^{2tλ_J} − 1)/(2λ_J).

We now proceed to estimate {|||h|||_{2,∞}/Δt}. From Equation (21) and Lemma 3.2, we find for 0 ≤ n ≤ N+1,

(24) ‖h^n/Δt‖₂ ≤ 2ω |||Pu|||_{2,∞} + 2ωΔt |||PLu|||_{2,∞} + (Δt/2) |||L²u|||_{2,∞}.

Therefore, Equation (20) follows from Equation (23). QED.

4. The stabilization penalties in the forward and backward linearized problem in Equation (9)

The stabilizing smoothing operator S in the explicit scheme in Equation (10) leads to unconditional stability, but at the cost of introducing a small error at each time step. We now assess the cumulative effect of that error.

In the forward problem in Theorem 3.1, we may assume the given initial data u₀(x,y) to be known with sufficiently high accuracy that one may set ‖w^0‖₂ = 0 in Equation (20). Choosing ω = (λ_J)^{1−p} in Lemma 3.1, and putting t = nΔt ≤ T_max, Equation (20) reduces to

(25) ‖w^n‖₂ ≤ (λ_J)^{−p} (e^{2tλ_J} − 1) |||Pu|||_{2,∞} + O(Δt), n = 1,2,…,N+1.

Therefore, when using the explicit scheme in Equation (10), there remains the non-vanishing residual error (λ_J)^{−p}(e^{2tλ_J} − 1)|||Pu|||_{2,∞} as Δt → 0. This is the stabilization penalty, which results from smoothing at each time step, and grows monotonically as t approaches T_max. Recall that λ_J must be chosen large enough to satisfy Equation (13) in Lemma 3.1. Clearly, if T_max = (N+1)|Δt| is large, the accumulated distortion may become unacceptably large as t approaches T_max, and the stabilized explicit scheme is not useful in that case. On the other hand, if T_max is small, it may be possible to choose p > 2 and a sufficiently large λ_J, yet with λ_J T_max small enough that (λ_J)^{−p}(e^{2λ_J T_max} − 1) is quite small. In that case, the stabilization penalty remains acceptable on 0 ≤ t ≤ T_max. As an example, with T_max = 5.0×10^{−4}, p = 2.75, and λ_J = 10⁴, we find (λ_J)^{−p}(e^{2λ_J T_max} − 1) < 2.21×10^{−7}. For this important but limited class of problems, the absence of restrictive Courant conditions on the time step Δt in the explicit scheme in Equation (10) provides a significant advantage in well-posed forward computations of two-dimensional problems on fine meshes.
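The size of the stabilization penalty is easy to evaluate for any candidate (T_max, p, λ_J). A quick check of the example just quoted, assuming numpy:

```python
import numpy as np

# Stabilization penalty coefficient from Equation (25), evaluated at t = Tmax
# for the example quoted above: Tmax = 5.0e-4, p = 2.75, lambda_J = 1.0e4.
Tmax, p, lam_J = 5.0e-4, 2.75, 1.0e4
penalty = lam_J**(-p) * (np.exp(2.0 * lam_J * Tmax) - 1.0)
print(penalty)    # about 2.20e-07, below the 2.21e-07 bound quoted in the text
```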

However, there is an additional penalty in the ill-posed problem of marching backward from t = T_max, in that solutions exist only for a restricted class of data satisfying certain smoothness and other constraints. Such data are seldom known with sufficient precision. We shall assume that the given data ũ^0(x,y) at t = T_max differ from such unknown exact data by a small error γ(x,y):

(26) ũ^0(x,y) = u(x,y,T_max) + γ(x,y), ‖γ‖₂ ≤ δ.

Theorem 4.1

With Δt < 0, let u^n(x,y) be the unique solution of the forward well-posed problem in Equation (9) at s = T_max − n|Δt|. Let ũ^n(x,y) be the corresponding solution of the backward explicit scheme in Equation (10), with initial data ũ^0(x,y) = u(x,y,T_max) + γ(x,y) as in Equation (26). Let p, λ_J, ω, be as in Lemma 3.1. If w^n(x,y) ≡ u^n(x,y) − ũ^n(x,y) denotes the error at s = T_max − n|Δt|, n = 0,1,2,…,N+1, we have, with δ as in Equation (26),

(27) ‖w^n‖₂ ≤ δ e^{2n|Δt|λ_J} + ω{(e^{2n|Δt|λ_J} − 1)/λ_J} |||Pu|||_{2,∞} + {(e^{2n|Δt|λ_J} − 1)/(2λ_J)} {2ω|Δt| |||PLu|||_{2,∞} + (|Δt|/2) |||L²u|||_{2,∞}}.

Proof.

Let h^n = τ^n + (u^n − Su^n) + Δt(Lu^n − SLu^n). Let R be the linear operator in Equation (12). Then, u^{n+1} = Ru^n + h^n, while ũ^{n+1} = Rũ^n. Therefore

(28) w^{n+1} = Rw^n + h^n = R^{n+1}w^0 + |Δt| Σ_{j=0}^{n} R^{n−j} {h^j/(|Δt|)}.

Let |||h|||_{2,∞} ≡ max_{0 ≤ n ≤ N+1} {‖h^n‖₂}. Using Lemma 3.1, and letting r = (n+1)|Δt|,

(29) ‖w^{n+1}‖₂ ≤ δ e^{2rλ_J} + {|||h|||_{2,∞}/|Δt|} |Δt| Σ_{j=0}^{n} ‖R^{n−j}‖₂
    ≤ δ e^{2rλ_J} + {|||h|||_{2,∞}/|Δt|} ∫₀^r e^{2λ_J(r−u)} du
    = δ e^{2rλ_J} + {|||h|||_{2,∞}/|Δt|} (e^{2rλ_J} − 1)/(2λ_J).

As in the preceding theorem, we may now use Lemma 3.2 to estimate {|||h|||_{2,∞}/|Δt|} and obtain Equation (27) from Equation (29). QED.

It is instructive to compare the results in the well-posed case in Equation (25) with the ill-posed results implied by Equation (27). For this purpose, we must re-evaluate Equation (27) at the same t values that are used in Equation (25). With Δt > 0, t = kΔt, and u^k(x,y) = u(x,y,kΔt), let ũ^k(x,y) now denote the previously computed backward solution evaluated at t = kΔt. With T_max = (N+1)Δt, let w^k(x,y) = u^k(x,y) − ũ^k(x,y), k = 0,1,2,…,N+1, denote the error at t = kΔt. Again choosing ω = (λ_J)^{1−p}, we get from Equation (27),

(30) ‖w^k‖₂ ≤ (λ_J)^{−p} {exp[2λ_J(T_max − t)] − 1} |||Pu|||_{2,∞} + δ exp{2λ_J(T_max − t)} + O(Δt), 0 ≤ t ≤ T_max.

Here, the stabilization penalty is augmented by an additional term, resulting from amplification of the errors γ(x,y) in the given data at t = T_max, as indicated in Equation (26). Both of the first two terms on the right in Equation (30) grow monotonically as t decreases to 0, reflecting backward in time marching from t = T_max.

Let the exact solution u(x,y,0) at t = 0 satisfy a prescribed L² bound,

(31) ‖u^0‖₂ ≤ M.

Again, with large T_max, and λ_J large enough to satisfy Equation (13) in Lemma 3.1, the non-vanishing residuals in Equation (30) lead to large errors, and the backward explicit scheme is not useful in that case. However, if T_max is small enough that

(32) 2λ_J T_max ≤ log(M/δ),

with (δ,M) as in Equations (26) and (31), we find, with β(t) = t/T_max,

(33) ‖w^k‖₂ ≤ (λ_J)^{−p} {exp[2λ_J(T_max − t)] − 1} |||Pu|||_{2,∞} + M^{1−β(t)} δ^{β(t)} + O(Δt), 0 ≤ t ≤ T_max.

The second term on the right in Equation (33) represents the fundamental uncertainty in ill-posed backward continuation from noisy data, for solutions satisfying the prescribed bounds (δ,M) in Equations (26) and (31). That uncertainty is known to be best possible in the case of autonomous selfadjoint problems. Therefore, in a limited but potentially significant class of problems, the stabilized backward explicit scheme for the linearized problem in Equation (9) can produce results differing from what is best possible only by a small stabilization penalty as Δt → 0.

For example, with parameter values such as T_max = 10^{−3}, M = 10², δ = 10^{−3}, we have M/δ = 10⁵ = exp{2λ_J T_max}, and λ_J ≈ 5756. Hence, with p = 3.0, we find (λ_J)^{−p} < 1.91×10^{−11}. We would then obtain from Equation (33),

(34) ‖w^k‖₂ ≤ M^{1−β(t)} δ^{β(t)} + (1.91×10^{−6}) |||Pu|||_{2,∞} + O(Δt), 0 ≤ t ≤ T_max.
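The numbers in this example are easily reproduced. The sketch below, assuming numpy, computes λ_J from Equation (32) and compares the two residual terms in Equation (33) at several times.

```python
import numpy as np

# Backward continuation example of Equation (34): Tmax = 1e-3, M = 1e2, delta = 1e-3.
M, delta, Tmax, p = 1.0e2, 1.0e-3, 1.0e-3, 3.0
lam_J = np.log(M / delta) / (2.0 * Tmax)             # approximately 5756

for t in (0.0, 0.5 * Tmax, Tmax):
    beta = t / Tmax
    uncertainty = M**(1.0 - beta) * delta**beta      # best-possible term in Eq. (33)
    penalty = lam_J**(-p) * (np.exp(2.0 * lam_J * (Tmax - t)) - 1.0)
    print(t, uncertainty, penalty)                   # penalty stays below ~1.9e-6
```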

Remark 1

In most practical applications of ill-posed backward problems, the values of M and δ in Equation (Equation34) are seldom known accurately. In many cases, interactive adjustment of the parameter pair (ω,p), in the definition of the smoothing operator S in Equation (Equation7), based on prior knowledge about the exact solution, is crucial in obtaining useful reconstructions. This process is similar to the manual tuning of an FM station, or the manual focusing of binoculars, and likewise requires user recognition of a ‘correct’ solution. There may be several possible good solutions, differing slightly from one another. Typical values of (ω,p) lie in the range 1010ω105, 2.5p3.5. A useful strategy, which avoids oversmoothing, is to begin with smaller values of ω and p, possibly leading to instability. Gradually increasing these quantities will eventually locate useful pairs (ω,p). An important advantage of the unconditional stability in the compensated explicit schemes, is the ability to conduct rapid trial computations, using relatively large values of |Δt|. Having located potentially useful pairs (ω,p), fine tuning these parameters, using smaller values of |Δt|, can then be considered.
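For concreteness, the coarse-to-fine strategy of Remark 1 might be organized as in the sketch below. This is purely illustrative: backward_march stands for the stabilized backward scheme of Equation (8), and judge_reconstruction stands for the user's visual assessment of a trial result; neither function is specified in the paper.

```python
# Illustrative coarse-to-fine search over (omega, p), mimicking the interactive
# tuning described in Remark 1. backward_march and judge_reconstruction are
# placeholders supplied by the user; they are not part of the paper.
import itertools

def tune(data, backward_march, judge_reconstruction, coarse_dt, fine_dt):
    candidates = []
    for omega, p in itertools.product((1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5),
                                      (2.5, 2.75, 3.0, 3.25, 3.5)):
        trial = backward_march(data, omega=omega, p=p, dt=coarse_dt)   # rapid, large |dt|
        if judge_reconstruction(trial):                                # stable and plausible?
            candidates.append((omega, p))
    # Fine-tune the surviving pairs with a smaller |dt|.
    return [(omega, p, backward_march(data, omega=omega, p=p, dt=fine_dt))
            for omega, p in candidates]
```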

Remark 2

As previously noted in [24], for the class of problems where it is useful, the present methodology offers significant advantages over the Quasi-Reversibility Method, or QR method, developed in [20]. Applied to the irreversible evolution equation u_t = Lu, t > 0, the QR method alters that equation by adding the higher order spatial differential operator −εL*Lu, so that the new evolution equation is well-posed backward in time. However, explicit schemes for u_t = Lu − εL*Lu, T_max ≥ t ≥ 0, are impractical on fine meshes, as they require a stringent Courant stability condition εΔt = O({Δx}^{4m}) if L is an elliptic differential operator of order 2m. On the other hand, for nonlinear multidimensional problems on fine meshes, implicit schemes require computationally intensive iterative solution of large nonlinear systems of difference equations at every time step. The present methodology does not alter the original evolution equation. Rather, it is based on a pre-designed stabilized explicit backward marching scheme, which is used directly on nonlinear problems by simply lagging the nonlinearity at the previous time step. Computational examples involving the explicit leapfrog scheme on 1024×1024 images are discussed in [25]. Most importantly, a wide choice of compensating smoothing operators S in Equation (7), involving (−Δ)^p with possibly non-integer values of p, can be accessed within the same single computational procedure. This substantially increases the likelihood of achieving useful backward reconstructions.

5. Behaviour in the nonlinear stabilized explicit scheme in Equation (8)

In the nonlinear system in Equation (2), the ‘coefficients’ of the first-order derivative terms are u(x,y,t) and v(x,y,t), rather than the positive constants a and b in the linearized problem in Equation (9). However, if the solution of the nonlinear problem satisfies |u(x,y,t)| ≤ a, |v(x,y,t)| ≤ b, for (x,y) ∈ Ω, 0 ≤ t ≤ T_max, and suitable positive a, b, the stability analyses given in Sections 3 and 4 may be applicable. Theorems 3.1 and 4.1, and Equations (25) and (33), may then provide helpful insight into the behaviour of the nonlinear explicit scheme in Equation (8). In particular, as discussed in Remark 1, we may expect to obtain useful results by interactive adjustment of the parameter pair (ω,p) in Equation (6), in backward in time computations with the nonlinear scheme in Equation (8). As will be shown below, numerical experiments with examples similar to that discussed in Equation (34) appear to confirm such expectations.

5.1. Two Gaussians experiment at RE=50,000

With Ω the unit square in R² with boundary ∂Ω, T_max = 2.5×10^{−3}, and the kinematic viscosity ν = 0.001, consider the following initial value problem

(35) u_t = L₁u ≡ νΔu − uu_x − vu_y, 0 < t ≤ T_max,
    v_t = L₂v ≡ νΔv − uv_x − vv_y, 0 < t ≤ T_max,

together with homogeneous boundary conditions on ∂Ω, and the initial values

(36) u(x,y,0) = u₀(x,y), v(x,y,0) = v₀(x,y), (x,y) ∈ Ω,

where

(37) u₀(x,y) = 50.0 exp{−150.0[(x−0.35)² + (y−0.35)²]} + 25.0 exp{−150.0[(x−0.55)² + (y−0.55)²]},
    v₀(x,y) = 50.0 exp{−150.0[(x−0.55)² + (y−0.55)²]} + 25.0 exp{−150.0[(x−0.35)² + (y−0.35)²]}.

Plots of the initial values u₀, v₀ are shown in the first column of Figure 1. From Equation (1), with A = 1.0, ν = 0.001, U_max = 50.0, we have RE = 50,000 in this experiment. We shall apply the previously discussed nonlinear explicit scheme

(38) ũ^{n+1} = Sũ^n + Δt S L₁ũ^n, ṽ^{n+1} = Sṽ^n + Δt S L₂ṽ^n, n = 0,1,…,N.

With |Δt| = 1.0×10^{−8}, (N+1) = 250,000, T_max = (N+1)|Δt| = 2.5×10^{−3}, and S chosen as the identity operator in Equation (38), we first consider the forward problem. We use centred finite differencing for the spatial derivatives in L₁, L₂, on a uniform grid with Δx = Δy = 1.0/512. The results of this stable forward computation are shown in the middle column of Figure 1. Note that the vertical scales in the middle column differ from those in the other two columns. Indeed, in the middle column, the computed maximum values for ũ(x,y,T_max), ṽ(x,y,T_max), on the 512×512 spatial grid, are, respectively, 144.54 and 209.25.
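For readers who wish to experiment, the sketch below shows what one time step of the scheme in Equation (38) might look like, assuming numpy: lagged nonlinearity, centred differences for L₁ and L₂, and an FFT-synthesized S. For brevity the differencing shown is periodic (np.roll), whereas the experiments above impose homogeneous boundary values; with ω = 0, S reduces to the identity, as in the forward computation.

```python
import numpy as np

def burgers_step(u, v, dt, nu, omega=0.0, p=3.0):
    """One step of the explicit scheme (38) on an M x M grid over the unit
    square: lagged nonlinearity, centred differences for L1, L2, and FFT
    smoothing S. Sketch only; the paper's own code is not available."""
    M = u.shape[0]
    h = 1.0 / M

    def ddx(w):  # centred x-derivative
        return (np.roll(w, -1, axis=0) - np.roll(w, 1, axis=0)) / (2.0 * h)

    def ddy(w):  # centred y-derivative
        return (np.roll(w, -1, axis=1) - np.roll(w, 1, axis=1)) / (2.0 * h)

    def lap(w):  # five-point Laplacian
        return (np.roll(w, -1, 0) + np.roll(w, 1, 0) +
                np.roll(w, -1, 1) + np.roll(w, 1, 1) - 4.0 * w) / h**2

    L1u = nu * lap(u) - u * ddx(u) - v * ddy(u)
    L2v = nu * lap(v) - u * ddx(v) - v * ddy(v)

    freqs = np.fft.fftfreq(M, d=h)                       # integer wavenumbers
    j, k = np.meshgrid(freqs, freqs, indexing="ij")
    sigma = np.exp(-2.0 * omega * abs(dt) *
                   (4.0 * np.pi**2 * nu * (j**2 + k**2))**p)

    S = lambda w: np.real(np.fft.ifft2(sigma * np.fft.fft2(w)))
    return S(u + dt * L1u), S(v + dt * L2v)
```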

Figure 1. Using precomputed input data at time T = 2.5×10^{−3}, shown in the middle column, the nonlinear explicit scheme in Equation (8), run backward in time, seeks to recover the true initial data shown in the leftmost column. The actually recovered data are shown in the rightmost column. Note that the maximum values in the middle column are much larger than in the leftmost column, and the maximum values in the rightmost column are slightly smaller than in the leftmost column.

Next, we consider the backward problem. Here, we use the previously computed values at time T_max as input data, and apply the stabilized explicit scheme in Equation (38). With a uniform grid on the unit square Ω, FFT algorithms are the natural tool for synthesizing the smoothing operator S, which is based on (−Δ)^p. After a few interactive parameter trials, the values ω = 1.0×10^{−9} and p = 3.0 were arrived at. Because of the S-induced unconditional stability, it was possible to use a value of |Δt| five times larger in the backward computation, namely |Δt| = 5.0×10^{−8}, (N+1) = 50,000. The resulting recovered initial values are plotted in the right column of Figure 1. The maximum values in the recovered data are 46.7, as compared to the true values of 50.0.
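A hypothetical driver loop for the backward computation just described, reusing the burgers_step sketch above; u_T and v_T denote arrays holding the precomputed data at T_max.

```python
# Backward deblurring with the parameters quoted above: omega = 1.0e-9, p = 3.0,
# dt = -5.0e-8 (negative: marching backward), and (N+1) = 50,000 steps to t = 0.
u, v = u_T.copy(), v_T.copy()
for n in range(50_000):
    u, v = burgers_step(u, v, dt=-5.0e-8, nu=0.001, omega=1.0e-9, p=3.0)
```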

5.2. Image experiment at RE=3400

Our next experiment, illustrated in Figures 2 and 3, involves 256×256 pixel grey-scale images defined on the unit square Ω. As is well known, many natural images are generated by highly non-smooth intensity data. Use of such data in ill-posed time-reversed evolution equations presents significant challenges to any reconstruction algorithm. At the same time, the use of images is particularly instructive, as it enables visualizing the distortion produced by the forward evolution, together with the subsequent attempt at undoing that distortion by marching backward in time.

Figure 2. Using precomputed input data at time T = 2.5×10^{−4}, shown in the middle column, the nonlinear explicit scheme in Equation (8), run backward in time, seeks to recover the true images shown in the leftmost column. The actually recovered images are shown in the rightmost column. Notice the severe wavy distortion of the sea wall in the blurred Sydney Opera House image, shown at the bottom of the middle column, and its successful reconstruction in the bottom rightmost image.

Figure 3. Backward recovery of the underlying intensity data that generate the images in the reconstruction experiment shown in Figure 2.

Here, the 2D Burgers' system in Equation (35) is used, with T_max = 2.5×10^{−4}, the kinematic viscosity ν = 0.075, and homogeneous boundary conditions on ∂Ω. The initial values are the intensity data, shown in the leftmost column of Figure 3, that define the images shown in the leftmost column of Figure 2. These intensities range from 0 to 255. Accordingly, with A = 1.0, ν = 0.075, U_max = 255.0, we have RE = 3400 in this experiment.

With |Δt| = 8.33×10^{−10}, (N+1) = 300,000, T_max = (N+1)|Δt|, and S chosen as the identity operator in Equation (38), stable computation of the forward problem on a uniform grid with Δx = Δy = 1.0/256 produced the blurred images shown in the middle column of Figure 2.

In the absence of the leftmost and rightmost columns in Figure 2, the images in the middle column of that figure, when magnified, are almost unrecognizable, due to the severe blurring caused by the forward evolution. In particular, the sea wall surrounding the Sydney Opera House develops a striking wavy pattern in the bottom middle image, reflecting some form of turbulence associated with Burgers' equation. Such severe distortion is noteworthy, as T_max and RE are each about one order of magnitude smaller than was the case in the previous Gaussian data experiment.

Next, using the intensity data in the middle column of Figure 3 as input at T_max = 2.5×10^{−4}, we apply the stabilized explicit scheme in Equation (38) to march backward in time. With ω = 3.0×10^{−9}, p = 3.25, in the FFT-synthesized smoothing operator S, and a value of |Δt| three times larger than in the forward problem, we obtain the images shown in the rightmost column of Figure 2. These are the images defined by the recovered intensity data shown in the rightmost column of Figure 3. Evidently, credible reconstruction has been achieved, although the recovered images and data differ qualitatively and quantitatively from the exact values. However, such discrepancies are to be expected in ill-posed continuation from noisy data.

5.3. Image experiment in non-rectangular region at RE=2500

In rectangular regions Ψ, the Fast Fourier Transform is an efficient tool for synthesizing (−νΔ)^p for positive non-integer p. This was used to advantage in the previous two experiments. However, as was shown in [25, 26], and will be shown again below, FFT Laplacian smoothing may be feasible for the 2D Burgers' equation in non-rectangular regions Ω, with zero Dirichlet data on an assumed smooth boundary ∂Ω. Enclosing Ω in a rectangle Ψ, a uniform grid is imposed on Ψ, fine enough to approximate ∂Ω sufficiently well. The discrete boundary Ω_d, consisting of the grid points closest to ∂Ω, is then used in place of ∂Ω. We use centred finite differencing for the spatial derivatives in L₁, L₂ in Equation (35). At each time step n in Equation (38), after applying the operators (I + ΔtL₁), (I + ΔtL₂) to ũ^n, ṽ^n, respectively, on Ω ⊂ Ψ, these discrete functions are extended to all of Ψ by defining them to be zero on the grid points in Ψ∖Ω. FFT algorithms are then applied on Ψ to synthesize S in Equation (38) and produce ũ^{n+1}, ṽ^{n+1}. Retaining only the values of these discrete functions on Ω, the process is repeated at the next time step.
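A minimal sketch of this extend-by-zero strategy, assuming numpy; the mask argument marks the grid points of Ω inside the enclosing square Ψ, and, as in the earlier sketch, the differencing shown is periodic for brevity.

```python
import numpy as np

def masked_smoothing_step(u, v, mask, dt, nu, omega, p):
    """Sketch of the non-rectangular strategy described above: advance u, v with
    lagged-nonlinearity centred differences, zero-extend to the enclosing square,
    apply the FFT-synthesized smoother S on the square, and retain only the
    values inside the region (mask == True)."""
    M = u.shape[0]
    h = 1.0 / M

    dx = lambda w: (np.roll(w, -1, 0) - np.roll(w, 1, 0)) / (2 * h)
    dy = lambda w: (np.roll(w, -1, 1) - np.roll(w, 1, 1)) / (2 * h)
    lap = lambda w: (np.roll(w, -1, 0) + np.roll(w, 1, 0) +
                     np.roll(w, -1, 1) + np.roll(w, 1, 1) - 4 * w) / h**2

    u1 = u + dt * (nu * lap(u) - u * dx(u) - v * dy(u))         # (I + dt*L1) u
    v1 = v + dt * (nu * lap(v) - u * dx(v) - v * dy(v))         # (I + dt*L2) v
    u1, v1 = np.where(mask, u1, 0.0), np.where(mask, v1, 0.0)   # zero-extend to Psi

    freqs = np.fft.fftfreq(M, d=h)
    j, k = np.meshgrid(freqs, freqs, indexing="ij")
    sigma = np.exp(-2.0 * omega * abs(dt) *
                   (4 * np.pi**2 * nu * (j**2 + k**2))**p)
    S = lambda w: np.real(np.fft.ifft2(sigma * np.fft.fft2(w)))

    return np.where(mask, S(u1), 0.0), np.where(mask, S(v1), 0.0)
```

The mask for the elliptical region used below can be built by evaluating the ellipse inequality at every grid point of Ψ.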

The last experiment, illustrated in Figures 4 and 5, involves recognizable face images defined on an elliptical domain Ω, with area A = 0.544, enclosed in the unit square Ψ. As before, the kinematic viscosity ν = 0.075, and U_max = 255.0. With √A ≈ 0.738, we now have RE ≈ 2500. A 256×256 grid was placed on Ψ. As in the previous experiment, with S the identity operator, |Δt| = 4.17×10^{−10}, (N+1) = 600,000, T_max = (N+1)|Δt| = 2.5×10^{−4}, stable forward computation in Equation (38), using the intensity data shown in the first column of Figure 5, produced the blurred images in the middle column of Figure 4, and the intensity data in the middle column of Figure 5. Although not as severe as in Figure 2, quite noticeable blurring is evident in Figure 4.

Figure 4. Using precomputed input data at time T = 2.5×10^{−4}, shown in the middle column, the nonlinear explicit scheme in Equation (8), run backward in time, seeks to recover the true images shown in the leftmost column. The actually recovered images are shown in the rightmost column.

Figure 5. Backward recovery of the underlying intensity data that generate the images in the reconstruction experiment shown in Figure 4.

Next, using the intensity data in the middle column of Figure 5 as input at T_max = 2.5×10^{−4}, we apply the stabilized explicit scheme in Equation (38) to march backward in time, using the FFT strategy outlined in the first paragraph of this section. With ω = 4.0×10^{−10}, p = 3.5, in the smoothing operator S, and a value of |Δt| ten times larger than in the forward problem, we obtain the images shown in the rightmost column of Figure 4. These are the images defined by the recovered intensity data shown in the rightmost column of Figure 5. Again, quite good reconstructions are obtained, despite inevitable discrepancies between the exact and recovered data and images in Figures 4 and 5.

6. Concluding remarks

To the author's knowledge, successful backward in time computations in the 2D Burgers' equation have not previously appeared in the literature. The results obtained here offer a glimpse of what might be feasible, although the stabilized explicit scheme in Equation (8) will generally be useful only in a limited class of problems. The successful first experiment in Section 5, at RE = 50,000, is noteworthy even though relatively simple data were involved. The second experiment, at a much smaller Reynolds number, was instructive in highlighting the severe distortions that can occur with highly complex non-smooth data, and the remarkable backward recovery that yet remains possible. The last experiment is important in illustrating the possible use of FFT-synthesized smoothing operators in non-rectangular regions.

Theoretical error estimates for backward reconstruction from noisy data, such as those given in [15–19], necessarily reflect worst-case error accumulation scenarios, and may be too pessimistic in individual situations. As in [23–26], the use of 8-bit grey-scale images provides challenging test examples, as well as an instructive way of exploring the feasibility of backward recovery with various types of complicated non-smooth data.

Disclosure statement

No potential conflict of interest was reported by the author.

References

  • Cole JD. On a quasi-linear parabolic equation occurring in aerodynamics. Quart Appl Math. 1951;3:225–236. doi: 10.1090/qam/42889
  • Abazari R, Borhanifar A. Numerical study of the solution of the Burgers and coupled Burgers equations by a differential transformation method. Comput Math Appl. 2010;59:2711–2722. doi: 10.1016/j.camwa.2010.01.039
  • Zhu H, Shu H, Ding M. Numerical solutions of two-dimensional Burgers' equations by discrete Adomian decomposition method. Comput Math Appl. 2010;60:840–848. doi: 10.1016/j.camwa.2010.05.031
  • Srivastava VK, Tamsir M, Bhardwaj U, et al. Crank-Nicolson scheme for numerical solutions of two-dimensional coupled Burgers' equations. Int J Sci Eng Res. 2011;2:1–7.
  • Khan M. A novel technique for two dimensional Burgers equation. Alexandria Eng J. 2014;53:485–490. doi: 10.1016/j.aej.2014.01.004
  • Wang Y, Navon IM, Wang X, et al. 2D Burgers equation with large Reynolds number using POD/DEIM and calibration. Int J Numer Meth Fluids. 2016;82:909–931. doi: 10.1002/fld.4249
  • Zhanlav T, Chuluunbaatar O, Ulziibayar V. Higher-order numerical solution of two-dimensional coupled Burgers' equations. Am J Comput Math. 2016;6:120–129. doi: 10.4236/ajcm.2016.62013
  • Ou K, Jameson A. Unsteady adjoint method for the optimal control of advection and Burgers' equation using high order spectral difference method. 49th AIAA Aerospace Science Meeting, 2011 January 4–7; Orlando, FL.
  • Allahverdi N, Pozo A, Zuazua E. Numerical aspects of large-time optimal control of Burgers' equation. ESAIM Math Model Numer Anal. 2016;50:1371–1401. doi: 10.1051/m2an/2015076
  • Gosse L, Zuazua E. Filtered gradient algorithms for inverse design problems of one-dimensional Burgers' equation. In: Gosse L, Natalini R, editors. Innovative algorithms and analysis. SINDAM Series. Springer; 2017. p. 197–227. doi: 10.1007/978-3-319-49262-9_7
  • Lundvall J, Kozlov V, Weinerfelt P. Iterative methods for data assimilation for Burgers' equation. J Inv Ill-Posed Problems. 2006;14:505–535. doi: 10.1515/156939406778247589
  • Auroux D, Blum J. A nudging-based data assimilation method for oceanographic problems: the back and forth nudging (BFN) algorithm. Proc Geophys. 2008;15:305–319. doi: 10.5194/npg-15-305-2008
  • Auroux D, Nodet M. The back and forth nudging algorithm for data assimilation problems: theoretical results on transport equations. ESAIM:COCV. 2012;18:318–342. doi: 10.1051/cocv/2011004
  • Auroux D, Bansart P, Blum J. An evolution of the back and forth nudging for geophysical data assimilation: application to Burgers' equation and comparison. Inverse Probl Sci Eng. 2013;21:399–419. doi: 10.1080/17415977.2012.712528
  • Carasso A. Computing small solutions of Burgers' equation backwards in time. J Math Anal App. 1977;59:169–209. doi: 10.1016/0022-247X(77)90100-7
  • Hào DN, Nguyen VD, Nguyen VT. Stability estimates for Burgers-type equations backward in time. J Inverse Ill Posed Probl. 2015;23:41–49. doi: 10.1515/jiip-2013-0050
  • Knops RJ, Payne LE. On the stability of solutions of the Navier-Stokes equations backward in time. Arch Rat Mech Anal. 1968;29:331–335. doi: 10.1007/BF00283897
  • Payne LE, Straughan B. Comparison of viscous flow backward in time with small data. Int J Nonlinear Mech. 1989;24:209-214. doi: 10.1016/0020-7462(89)90039-5
  • Knops RJ. Logarithmic convexity and other techniques applied to problems in continuum mechanics. In: Knops RJ, editor. Symposium on non-well-posed problems and logarithmic convexity. Vol. 316, Lecture notes in mathematics. New York (NY): Springer-Verlag; 1973. p. 31–54.
  • Lattès R, Lions JL. Méthode de Quasi-Réversibilité et Applications [The method of quasi-reversibility and applications]. Paris: Dunod; 1967.
  • Ames KA, Straughan B. Non-standard and improperly posed problems. New York (NY): Academic Press; 1997.
  • Carasso AS. Reconstructing the past from imprecise knowledge of the present: effective non-uniqueness in solving parabolic equations backward in time. Math Methods Appl Sci. 2012;36:249–261. doi: 10.1002/mma.2582
  • Carasso AS. Stable explicit time-marching in well-posed or ill-posed nonlinear parabolic equations. Inverse Probl Sci Eng. 2016;24:1364–1384. doi: 10.1080/17415977.2015.1110150
  • Carasso AS. Stable explicit marching scheme in ill-posed time-reversed viscous wave equations. Inverse Probl Sci Eng. 2016;24:1454–1474. doi: 10.1080/17415977.2015.1124429
  • Carasso AS. Stabilized Richardson leapfrog scheme in explicit stepwise computation of forward or backward nonlinear parabolic equations. Inverse Probl Sci Eng. 2017;25:1–24. doi: 10.1080/17415977.2017.1281270
  • Carasso AS. Stabilized backward in time explicit marching schemes in the numerical computation of ill-posed time-reversed hyperbolic/parabolic systems. Inverse Probl Sci Eng. 2018;1:1–32. doi: 10.1080/17415977.2018.1446952
