Stochastics
An International Journal of Probability and Stochastic Processes
Volume 94, 2022 - Issue 8
Research Article

Girsanov theorem for multifractional Brownian processes

Pages 1137-1165 | Received 12 Dec 2019, Accepted 04 Jan 2022, Published online: 28 Jan 2022

ABSTRACT

In this article, we will present a new perspective on the variable-order fractional calculus, which allows for differentiation and integration to a variable order. The concept of multifractional calculus has been a scarcely studied topic within the field of functional analysis in the past 20 years. We develop a multifractional differential operator which acts as the inverse of the multifractional integral operator. This is done by solving the Abel integral equation generalized to a multifractional order. With this new multifractional differential operator, we prove a Girsanov theorem for multifractional Brownian motions of Riemann–Liouville type. As an application, we show how the Girsanov theorem can be applied to prove the existence of a unique strong solution to stochastic differential equations where the drift coefficient is merely of linear growth and the driving noise is given by a non-stationary multifractional Brownian motion with a Hurst parameter which is a function of time. The Hurst functions we study take values in a bounded subset of $(0,\frac{1}{2})$. The application of multifractional calculus to SDEs is based on a generalization of the work of D. Nualart and Y. Ouknine [Regularization of differential equations by fractional noise, Stoch. Process. Appl. 102(1) (2002), pp. 103–116].

MSC 2010:

1. Introduction

Fractional calculus is a well-studied subject in the field of functional analysis and has been adopted in stochastic analysis as a tool for analysing stochastic processes which exhibit some memory of their own past. A typical such process is the fractional Brownian motion (fBm), denoted $\tilde{B}^H:[0,T]\to\mathbb{R}$ for some $H\in(0,1)$. The process is characterized by its covariance function
$$R_H(s,t)=E\big[\tilde{B}^H_s\tilde{B}^H_t\big]=\tfrac{1}{2}\big(t^{2H}+s^{2H}-|t-s|^{2H}\big),$$
and the fBm is often represented as a type of fractional integral with respect to a Brownian motion, in the sense that
$$\tilde{B}^H_t=\int_0^t K_H(t,s)\,dB_s,$$
where $K_H$ is a singular and square integrable Volterra kernel and $t\mapsto B_t$ is a Brownian motion. In fact, $K_H(t,s)\sim(t-s)^{H-\frac12}$ when $|t-s|>0$ is sufficiently small. Since $H\in(0,1)$, the kernel is square integrable, and we interpret the integral as a Wiener integral. This type of process has been studied in connection with various physical phenomena, such as weather and geology, but also in the modelling of internet traffic and finance. For a more detailed description of probabilistic and analytic properties of the fractional Brownian motion, see e.g. [17, Chp. 5].
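To make the covariance description concrete, the following is a minimal numerical sketch (not from the paper) that samples fBm on a grid directly from the covariance $R_H$ via a Cholesky factorization; the function name `fbm_cholesky`, the grid and the jitter term are illustrative choices, and NumPy is assumed.

```python
import numpy as np

# Minimal sketch: sample fractional Brownian motion on a grid from its
# covariance R_H(s,t) = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2 via Cholesky.
def fbm_cholesky(H=0.7, T=1.0, n=200, rng=None):
    rng = rng or np.random.default_rng(0)
    t = np.linspace(T / n, T, n)                      # exclude t = 0 since B_0 = 0
    s, u = np.meshgrid(t, t, indexing="ij")
    R = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(R + 1e-12 * np.eye(n))     # small jitter for stability
    return t, L @ rng.standard_normal(n)              # one Gaussian path with cov R

t, path = fbm_cholesky()
```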

In the 1990s, a generalization of the fractional Brownian motion was proposed by letting the Hurst parameter $H$ of the process $B^H$ depend on time (e.g. [19]). The process was named multifractional Brownian motion (mBm for short), referring to the fact that the fractional parameter $H$ is a function of time taking values between 0 and 1. There has since been a series of articles on this type of process, relating to local time, local and global Hölder continuity, and other applications (see [3,4,6,13,14] to name a few). However, to the best of our knowledge, there have not been any articles on solving stochastic differential equations driven by this type of process. The mBm is a non-stationary stochastic process, which makes it more intricate to handle in differential equations and requires suitable tools from fractional analysis in the sense of multifractional calculus (or fractional calculus of variable order). This type of fractional calculus was originally proposed by Samko and Ross [21] in the beginning of the 1990s and generalizes the classical Riemann–Liouville fractional integral by considering
$$I_{0+}^{\alpha}f(x)=\frac{1}{\Gamma(\alpha(x))}\int_0^x(x-t)^{\alpha(x)-1}f(t)\,dt,$$
where $f\in L^1(\mathbb{R})$ and $\alpha:\mathbb{R}\to[c,d]\subset(0,1)$. In the same way, the authors also proposed to generalize the fractional derivative, i.e.
$$D_{0+}^{\alpha}f(x)=\frac{1}{\Gamma(1-\alpha(x))}\frac{d}{dx}\int_0^x\frac{f(t)}{(x-t)^{\alpha(x)}}\,dt.$$
However, by generalizing the fractional derivative in this way, the authors found that the derivative is no longer the inverse of the integral operator; rather, one finds that
$$D_{0+}^{\alpha}I_{0+}^{\alpha}=I+K,$$
where $I$ is the identity and, under some conditions, $K$ is a compact operator. Some of the subsequent research focused on understanding the properties of the operator $K$, but fractional calculus of variable order has so far been scarcely treated in the literature.
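The variable-order integral above is easy to approximate numerically, since the exponent only depends on the evaluation point $x$. The sketch below (our own, not the paper's; midpoint quadrature is an illustrative choice) evaluates $I_{0+}^{\alpha(\cdot)}f(x)$ and checks it against the exact identity $I_{0+}^{\alpha(\cdot)}1(x)=x^{\alpha(x)}/\Gamma(\alpha(x)+1)$.

```python
import numpy as np
from math import gamma

# Minimal sketch of the variable-order Riemann-Liouville integral
#   (I_{0+}^{alpha} f)(x) = 1/Gamma(alpha(x)) * int_0^x (x-t)^{alpha(x)-1} f(t) dt,
# using midpoint quadrature (the weak singularity at t = x is integrable).
def multifractional_integral(f, alpha, x, n=2000):
    t = (np.arange(n) + 0.5) * (x / n)                # midpoints avoid t = x
    a = alpha(x)
    return np.sum((x - t) ** (a - 1.0) * f(t)) * (x / n) / gamma(a)

# Check against I^{alpha} 1 (x) = x^{alpha(x)} / Gamma(alpha(x) + 1).
alpha = lambda x: 0.5 + 0.3 * np.sin(x)               # illustrative regularity function
val = multifractional_integral(lambda t: np.ones_like(t), alpha, x=1.0)
print(val, 1.0 ** alpha(1.0) / gamma(alpha(1.0) + 1.0))
```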

The Girsanov theorem is a fundamental tool in stochastic analysis and has various areas of application. One of the most prominent applications of this theorem in recent years has been the construction of weak solutions to SDEs; see e.g. [18] in the case when the SDE is driven by an fBm. To be able to develop a Girsanov theorem for SDEs driven by multifractional noise, we need an inverse operator associated with the multifractional integral. In general, inverse operators related to various integral operators have been extensively studied (see e.g. [12] and the references therein); however, the particular inverse operator associated with the multifractional Riemann–Liouville integral has, to the best of our knowledge, not been identified or studied.

In this article, we construct the multifractional derivative along a function $\alpha:[0,T]\to[a,b]\subset(0,1)$ as the inverse of the multifractional integral along $\alpha$, and show that this multifractional differential operator is well defined on a certain class of functions. The multifractional differential operator is then applied to the construction of a Girsanov theorem for the Riemann–Liouville multifractional Brownian motion.

We mention that the restriction of the range of the function $\alpha$ to $[a,b]\subset(0,1)$ is motivated by the application to the analysis of the Riemann–Liouville multifractional Brownian motion given on the form
$$B^h_t:=c(h_t)\int_0^t(t-s)^{h_t-\frac12}\,dB_s,\quad t\in[0,T], \tag{1}$$
where $\{B_t\}_{t\in[0,T]}$ is a Brownian motion, $c:\mathbb{R}_+\to\mathbb{R}_+$ is a positive bounded function, and the integral is interpreted in the Wiener sense. The range of the function $h$ is always assumed to be contained in $(0,1)$, so that the process locally mimics the behaviour of the Riemann–Liouville fractional Brownian motion. We mention, however, that it might be possible to extend the definition of this operator to sufficiently smooth functions $\alpha:[0,T]\to[a,b]\subset\mathbb{R}_+$. The first step in this direction would be to construct the operator for $\alpha$ taking values in all of $[0,1]$, and then extend to any interval $[k,k+1]$ for $k\in\mathbb{N}$, before considering the general case of $[a,b]\subset\mathbb{R}_+$ by taking into account the times at which $t\mapsto\alpha(t)$ crosses the positive integers. Since our focus here is on the Girsanov theorem for multifractional Brownian motion of Riemann–Liouville type, we leave this extension for future investigations.

As an application of the Girsanov theorem, we show existence of unique strong solutions to certain SDEs based on the techniques developed in [18]. We point out that the multifractional differential operator constructed here coincides with the Riemann–Liouville fractional derivative in the case when $\alpha(t)=c$ for some constant $c\in(0,1)$. The multifractional derivative can therefore be seen as an extension of the Riemann–Liouville derivative to the variable-order case. Just as the multifractional Brownian motion is an extension of the fractional Brownian motion, the corresponding Girsanov theorem for the multifractional Brownian motion is an extension of the classical Girsanov theorem for fractional Brownian motion, and the two agree when the regularity function of the multifractional Brownian motion is constant.

The SDE we will investigate is given by
$$X^x_t=x+\int_0^t b(s,X^x_s)\,ds+B^h_t;\qquad X^x_0=x\in\mathbb{R},\ t\in[0,T], \tag{2}$$
where $B^h$ is a Riemann–Liouville multifractional Brownian motion with regularity function $h:[0,T]\to[a,b]\subset(0,\frac12)$. Note that in the case when $h$ is constant with values between 0 and $\frac12$, the above equation is similar to the SDE driven by a fractional Brownian motion studied in [18]; the difference lies in the fact that we use here a Riemann–Liouville type of fractional process, while in [18] a classical fractional Brownian motion is used.

To the best of our knowledge, there is no other article which investigates SDEs driven by any explicit form of multifractional noise, let alone the possible regularizing effect such noise may have on SDEs. Actually, it turns out that we only need $t\mapsto\int_0^t b(s,X^x_s)\,ds$ to be locally $\beta$-Hölder continuous for some well-chosen $\beta$ in order to construct weak solutions, but we will need $b$ to be of linear growth to obtain strong solutions via the comparison theorem.

1.1. Notation and preliminaries

We will make use of a space of Hölder continuous functions $f:[0,T]\to V$, for some Banach space $V$ (we will mostly use $V=\mathbb{R}^d$ in this article), with finite Hölder norm
$$\|f\|_{\beta}=|f(0)|+\sup_{s\neq t\in[0,T]}\frac{|f(t)-f(s)|}{|t-s|^{\beta}}<\infty.$$
We denote this space by $C^{\beta}([0,T];V)$. In addition, for $a\in[0,T]$ let us define $C^{\beta}_a([0,T];V)$ to be the subspace of $C^{\beta}([0,T];V)$ consisting of those $f$ with $f(a)=0$. In particular, we will be interested in $C^{\beta}_0([0,T];V)$. We use the standard notation for $L^p(V;\mu)$ spaces, where $V$ is a Banach space and $\mu$ is a measure on the Borel sets of $V$ (the measure in use will usually be clear from the context), i.e.
$$\|f\|_{L^p(V)}:=\left(\int_V|f|^p\,d\mu\right)^{\frac1p}.$$
Furthermore, let $\Delta^{(m)}[a,b]$ denote the $m$-simplex, that is,
$$\Delta^{(m)}[a,b]=\{(s_1,\dots,s_m)\,|\,a\leq s_1<\dots<s_m\leq b\}.$$

We will use the notion of variable exponent spaces, which have in recent years become increasingly popular in potential analysis; see for example [8,11] for two books on the subject. In this paper, we are particularly concerned with the variable exponent Hölder space, as we are looking to differentiate functions to a variable order. We will therefore follow the definition of such spaces introduced in [21] and give some preliminary properties.

Definition 1.1

Let $\alpha$ be a $C^1$ function with values in $[a,b]\subset(0,1)$. We define the space of locally Hölder continuous functions $f:[0,T]\to\mathbb{R}$ with regularity function $\alpha$ by the norm
$$\|f\|_{\alpha(\cdot);[0,T]}:=|f(0)|+\sup_{x\neq y\in[0,T]}\frac{|f(x)-f(y)|}{|x-y|^{\max(\alpha(x),\alpha(y))}}<\infty.$$
We denote this space by $C^{\alpha(\cdot)}([0,T];\mathbb{R})$. Moreover, denote by $C^{\alpha(\cdot)}_0([0,T];\mathbb{R})$ the space of locally Hölder continuous functions which start at 0, i.e. for $f\in C^{\alpha(\cdot)}_0([0,T];\mathbb{R})$ we have $f(0)=0$.

Remark 1.1

Notice that in $C^{\alpha(\cdot)}_0([0,T];\mathbb{R})$, all functions $f$ satisfy $|f(x)|\leq C\,x^{\alpha(x)}$. Indeed, just write
$$|f(x)|\leq|f(x)-f(0)|+|f(0)|\leq\|f\|_{\alpha(\cdot);[0,T]}\,x^{\alpha(x)},$$
where we have used that $\max(\alpha(x),\alpha(0))\geq\alpha(x)$. Further, we have the following equivalence:
$$\sup_{|h|\leq1;\,x+h\in[0,T]}\frac{|f(x+h)-f(x)|}{|h|^{\alpha(x)}}\;\asymp\;\sup_{x\neq y\in[0,T]}\frac{|f(x)-f(y)|}{|x-y|^{\max(\alpha(x),\alpha(y))}}.$$

To illustrate what we mean by the local regularity property, we may divide the interval $[0,T]$ into $n\geq2$ intervals by defining $T_0=0$ and $T_k=\frac{kT}{n}$, so that
$$[0,T]=\bigcup_{k=0}^{n-1}[T_k,T_{k+1}].$$
Furthermore, we may define two sequences $\{\beta_k\}$ and $\{\hat\beta_k\}$ of inf and sup values of $\alpha$ restricted to each interval $[T_k,T_{k+1}]$, i.e.
$$\beta_k=\inf_{t\in[T_k,T_{k+1}]}\alpha(t),\qquad\hat\beta_k=\sup_{t\in[T_k,T_{k+1}]}\alpha(t).$$
Then we have the inclusions
$$C^{\beta_k}([T_k,T_{k+1}])\supset C^{\alpha(\cdot)}([T_k,T_{k+1}])\supset C^{\hat\beta_k}([T_k,T_{k+1}]).$$
By letting $n$ be a large number, we have by the continuity of $\alpha$ that $\hat\beta_k-\beta_k=\varepsilon$ becomes small. Therefore, the space $C^{\alpha(\cdot)}([T_k,T_{k+1}];\mathbb{R})$ is similar to the spaces $C^{\beta_k}([T_k,T_{k+1}];\mathbb{R})$ and $C^{\hat\beta_k}([T_k,T_{k+1}];\mathbb{R})$ when $T_{k+1}-T_k$ is small, but on large intervals the spaces may differ depending on the chosen $\alpha$. We will often write $f_*:=\inf_{t\in[0,T]}f(t)$ and $f^*:=\sup_{t\in[0,T]}f(t)$.
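For intuition, the variable-order Hölder constant of a sampled function can be estimated on a grid. The sketch below is our own illustration (the grid-based supremum is only a lower approximation of the true norm, and `holder_norm_variable` is an assumed name, not from the paper).

```python
import numpy as np

# Minimal sketch: empirical variable-order Hoelder constant of a sampled function,
#   |f(0)| + max_{x != y} |f(x)-f(y)| / |x-y|^{max(alpha(x), alpha(y))},
# where the maximum runs over grid points only.
def holder_norm_variable(t, f, alpha):
    a = alpha(t)
    dt = np.abs(t[:, None] - t[None, :])
    df = np.abs(f[:, None] - f[None, :])
    ex = np.maximum(a[:, None], a[None, :])
    mask = dt > 0
    return np.abs(f[0]) + np.max(df[mask] / dt[mask] ** ex[mask])

t = np.linspace(0.0, 1.0, 300)
alpha = lambda s: 0.3 + 0.2 * s                       # illustrative regularity function
print(holder_norm_variable(t, np.sqrt(t), alpha))     # finite: sqrt is 1/2-Hoelder, alpha <= 1/2
```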

2. Multifractional calculus

In this section, we will give meaning to the multifractional calculus. In classical fractional analysis it is common to refer to this concept as fractional calculus of variable order, since the idea is to let the order of integration (or differentiation) depend on time (or possibly space, but for our application time will be sufficient); see for example [20,21] and the references therein. We will use the term multifractional calculus for the concept of fractional calculus of variable order, as this is consistent with the notion of multifractional stochastic processes.

Definition 2.1

(Multifractional Riemann–Liouville integral) For $0<c<d$, assume $f\in L^1([c,d])$ and that $\alpha:[c,d]\to[a,b]\subset(0,\infty)$ is a differentiable function. We define the left multifractional Riemann–Liouville integral operator $I_{c+}^{\alpha}$ by
$$(I_{c+}^{\alpha}f)(x)=\frac{1}{\Gamma(\alpha(x))}\int_c^x(x-y)^{\alpha(x)-1}f(y)\,dy,$$
and we define the space $I_{c+}^{\alpha}L^p([0,T])$ as the image of $L^p([0,T])$ under the operator $I_{c+}^{\alpha}$.

By the definition of the space $I_{c+}^{\alpha}L^p([0,T])$, we have that $g(c)=0$ for all $g\in I_{c+}^{\alpha}L^p([0,T])$. Indeed, since $g=I_{c+}^{\alpha}f$, we must have $(I_{c+}^{\alpha}f)(c)=0$ regardless of the function $\alpha$. This property will become very important later, and we will give a more thorough discussion of the properties of the multifractional integral and derivative at the end of this section.

We want to define the multifractional derivative of a function $g\in I_{c+}^{\alpha}L^p([0,T])$ as the inverse operation of $I_{c+}^{\alpha}$: if $g\in I_{c+}^{\alpha}L^p([0,T])$, then there exists a unique $f\in L^p$ which satisfies $g=I_{c+}^{\alpha}f$, and we define the fractional derivative $D_{c+}^{\alpha}g=f$. In contrast to the methodology used in [21], we believe that in order to construct a coherent fractional calculus, one must choose to generalize either the derivative or the integral operator, and then obtain the other operator through the definition of the first. Therefore, we will use a method similar to that used to solve Abel's integral equation, which is often invoked to motivate the definition of the fractional derivative in the case of a constant regularity function. However, by generalizing the integral equation, the calculations become a little more complicated.

We will start by giving a formal motivation for how we obtain the derivative operator corresponding to the multifractional integral. First, for an element $g\in I_{a+}^{\alpha}L^p([0,T])$, we know there exists an $f\in L^p([0,T])$ such that
$$g(t)=\frac{1}{\Gamma(\alpha(t))}\int_a^t\frac{f(s)}{(t-s)^{1-\alpha(t)}}\,ds.$$
By elementary manipulations of the equation above, and using Fubini's theorem, we obtain the equation
$$\int_a^x\frac{\Gamma(\alpha(t))g(t)}{(x-t)^{\alpha(t)}}\,dt=\int_a^x f(s)\int_0^1\tau^{\alpha(s+\tau(x-s))-1}(1-\tau)^{-\alpha(s+\tau(x-s))}\,d\tau\,ds.$$
If $\alpha$ is constant, the inner integral is simply the Beta function. However, the fact that $\alpha$ is a function complicates the expression. In the end, we are interested in solving the above equation for $f(x)$; we may therefore try to differentiate both sides with respect to $x$. We find that
$$\frac{d}{dx}\left(\int_a^x\frac{\Gamma(\alpha(t))g(t)}{(x-t)^{\alpha(t)}}\,dt\right)=f(x)\,B(\alpha(x),1-\alpha(x))+\int_a^x f(s)\int_0^1\frac{d}{dx}\Big(\tau^{\alpha(s+\tau(x-s))-1}(1-\tau)^{-\alpha(s+\tau(x-s))}\Big)\,d\tau\,ds, \tag{3}$$
where we recall that $B(x,y)$ denotes the Beta function. Re-ordering the terms, we obtain
$$B(\alpha(x),1-\alpha(x))\,f(x)=\frac{d}{dx}\left(\int_a^x\frac{\Gamma(\alpha(t))g(t)}{(x-t)^{\alpha(t)}}\,dt\right)-\int_a^x f(s)\int_0^1\frac{d}{dx}\Big(\tau^{\alpha(s+\tau(x-s))-1}(1-\tau)^{-\alpha(s+\tau(x-s))}\Big)\,d\tau\,ds.$$
In this sense, the multifractional derivative $D_{a+}^{\alpha}$ of a function $g\in I_{a+}^{\alpha}L^p([0,T])$ is given by the $f$ which solves the linear Volterra integral equation above. Dividing by the Beta function and writing the above equation more compactly, we have
$$f(x)=G_a(g)(x)+\int_a^x f(s)F(s,x)\,ds, \tag{4}$$
where $G_a$ and $F$ are given by
$$G_a(g)(x):=\frac{1}{B(\alpha(x),1-\alpha(x))}\frac{d}{dx}\left(\int_a^x\frac{\Gamma(\alpha(t))g(t)}{(x-t)^{\alpha(t)}}\,dt\right),\qquad F(s,x):=\frac{-1}{B(\alpha(x),1-\alpha(x))}\int_0^1\frac{d}{dx}\Big(\tau^{\alpha(s+\tau(x-s))-1}(1-\tau)^{-\alpha(s+\tau(x-s))}\Big)\,d\tau.$$
We will use the rest of this section to prove that the integral equation obtained above is indeed well defined, and that the multifractional derivative can be defined as the solution to (4). In the above motivation we regarded the multifractional derivative as the inverse of the multifractional integral with lower limit of integration at a point $a$. However, we will mostly be interested in the inverse of the integral starting at $a=0$, and therefore the construction of the multifractional derivative will focus on this particular case, although it is straightforward to generalize the operator to any $a\in[0,T]$. We will give a suitable definition of the multifractional derivative in the following steps:

  1. The function $F:\Delta^{(2)}([0,T])\to\mathbb{R}$ in Equation (4) is well defined and bounded on $\Delta^{(2)}([0,T])$ as long as $\alpha$ is $C^1$.

  2. We show that if $H\in L^p([0,T])$, then the linear Volterra integral equation $$f(x)=H(x)+\int_0^x f(s)F(s,x)\,ds$$ has a unique solution $f$ in $L^p([0,T])$.

  3. The functional $G_0$ given by $$G_0(g)(x)=\frac{1}{B(\alpha(x),1-\alpha(x))}\frac{d}{dx}\left(\int_0^x\frac{\Gamma(\alpha(t))g(t)}{(x-t)^{\alpha(t)}}\,dt\right)$$ is such that the mapping $x\mapsto G_0(g)(x)$ is in $L^p([0,T])$ for all functions $g\in C^{\alpha(\cdot)+\epsilon}([0,T];\mathbb{R})$, where $p>1$ and $\epsilon>0$ is small and depends on $\alpha$.

  4. Then Equation (4), as motivated above, has a unique solution in $L^p([0,T])$ for all $g\in C^{\alpha(\cdot)+\epsilon}([0,T];\mathbb{R})$, and we define the multifractional derivative $D_{0+}^{\alpha}g=f$ as the solution to Equation (4).

First, we will look at the function $F:\Delta^{(2)}([0,T])\to\mathbb{R}$ in Equation (4). Notice that the derivative of $\tau^{\alpha(s+\tau(x-s))-1}(1-\tau)^{-\alpha(s+\tau(x-s))}$ with respect to $x$ is explicitly given by
$$\frac{d}{dx}\Big(\tau^{\alpha(s+\tau(x-s))-1}(1-\tau)^{-\alpha(s+\tau(x-s))}\Big)=\ln\!\Big(\frac{\tau}{1-\tau}\Big)\,\tau^{\alpha(s+\tau(x-s))}(1-\tau)^{-\alpha(s+\tau(x-s))}\,\alpha'(s+\tau(x-s)), \tag{5}$$
and we arrive at a lemma which gives estimates on this derivative.

Lemma 2.1

Let $\alpha\in C^1([0,T];(0,1))$. Define the function
$$U(s,x;\tau):=\alpha'(s+\tau(x-s))\,\big(\ln(\tau)-\ln(1-\tau)\big)\left(\frac{\tau}{1-\tau}\right)^{\alpha(s+\tau(x-s))},$$
so that, by (5), $F(s,x)=\frac{-1}{B(\alpha(x),1-\alpha(x))}\int_0^1 U(s,x;\tau)\,d\tau$. Then the following bound holds:
$$\|F\|_{\infty;\Delta^{(2)}([0,T])}:=\sup_{(s,x)\in\Delta^{(2)}([0,T])}|F(s,x)|\leq C(\alpha)\,\|\alpha\|_{C^1([0,T])}<\infty,$$
where $\|\alpha\|_{C^1([0,T])}:=\sup_{t\in[0,T]}|\alpha(t)|+\sup_{t\in[0,T]}|\alpha'(t)|$.

Proof.

Since $\alpha$ takes values in $[a,b]\subset(0,1)$, the factor $1/B(\alpha(x),1-\alpha(x))$ is bounded, so it suffices to bound $\int_0^1|U(s,x;\tau)|\,d\tau$. We begin by observing that
$$\left|\int_0^1\big(\ln(\tau)-\ln(1-\tau)\big)\left(\frac{\tau}{1-\tau}\right)^{\alpha(s+\tau(x-s))}\alpha'(s+\tau(x-s))\,d\tau\right|\leq\|\alpha\|_{C^1([0,T])}\int_0^1\left|\big(\ln(\tau)-\ln(1-\tau)\big)\left(\frac{\tau}{1-\tau}\right)^{\alpha(s+\tau(x-s))}\right|d\tau<\infty.$$
The fact that the last integral is finite follows by simple calculations, using that $\alpha(t)\in[a,b]\subset(0,1)$ for all $t\in[0,T]$. Indeed, we can write
$$\int_0^1|\ln(\tau)|\left(\frac{\tau}{1-\tau}\right)^{\alpha(s+\tau(x-s))}d\tau+\int_0^1|\ln(1-\tau)|\left(\frac{\tau}{1-\tau}\right)^{\alpha(s+\tau(x-s))}d\tau$$
$$=\int_0^{\frac12}|\ln(\tau)|\left(\frac{\tau}{1-\tau}\right)^{\alpha(s+\tau(x-s))}d\tau+\int_0^{\frac12}|\ln(1-\tau)|\left(\frac{\tau}{1-\tau}\right)^{\alpha(s+\tau(x-s))}d\tau+\int_{\frac12}^1|\ln(\tau)|\left(\frac{\tau}{1-\tau}\right)^{\alpha(s+\tau(x-s))}d\tau+\int_{\frac12}^1|\ln(1-\tau)|\left(\frac{\tau}{1-\tau}\right)^{\alpha(s+\tau(x-s))}d\tau=:I_1+\dots+I_4.$$
Now it is readily checked by elementary integration techniques that the following bounds hold:
$$I_1\leq2^{\alpha^*}\int_0^{\frac12}|\ln(\tau)|\,\tau^{\alpha_*}\,d\tau<\infty,\qquad I_2\leq2^{\alpha^*}|\ln(\tfrac12)|\int_0^{\frac12}\tau^{\alpha_*}\,d\tau<\infty,$$
$$I_3\leq|\ln(\tfrac12)|\int_{\frac12}^1(1-\tau)^{-\alpha^*}\,d\tau<\infty,\qquad I_4\leq\int_{\frac12}^1|\ln(1-\tau)|\,(1-\tau)^{-\alpha^*}\,d\tau<\infty,$$
where $\alpha_*:=\inf_{t\in[0,T]}\alpha(t)$ and $\alpha^*:=\sup_{t\in[0,T]}\alpha(t)$.
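For concreteness, the kernel $F$ can be evaluated numerically. The following is a minimal sketch under our own quadrature choices (midpoint rule; the names `F_kernel` and `beta_fn` are illustrative, not the paper's), using the reconstruction of $F$ in (4) and Lemma 2.1.

```python
import numpy as np
from math import gamma

def beta_fn(a, b):
    # Beta function via Gamma, B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return gamma(a) * gamma(b) / gamma(a + b)

# Sketch of F(s,x) = -1/B(alpha(x), 1-alpha(x)) * int_0^1 alpha'(s + tau(x-s))
#                    (ln tau - ln(1-tau)) (tau/(1-tau))^{alpha(s + tau(x-s))} d tau.
def F_kernel(alpha, dalpha, s, x, n=4000):
    tau = (np.arange(n) + 0.5) / n                    # midpoints avoid 0 and 1
    u = s + tau * (x - s)
    integrand = dalpha(u) * (np.log(tau) - np.log1p(-tau)) \
                * (tau / (1.0 - tau)) ** alpha(u)
    return -np.sum(integrand) / n / beta_fn(alpha(x), 1.0 - alpha(x))

alpha  = lambda t: 0.4 + 0.2 * np.sin(t)              # illustrative alpha in (0,1)
dalpha = lambda t: 0.2 * np.cos(t)
print(F_kernel(alpha, dalpha, s=0.1, x=0.8))          # equals 0 whenever alpha' == 0
```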

Remark 2.1

Note that in the case where $\alpha$ is constant, i.e. $\alpha(t)=c\in(0,1)$ for all $t\in[0,T]$, we have $F(s,x)=0$. Indeed, then $\alpha'(t)=0$ for all $t\in[0,T]$, and it follows from (5) that $F=0$.

We now show that the solution to the integral equation
$$f(x)=H(x)+\int_0^x f(s)F(s,x)\,ds$$
is well defined as an element of $L^p([0,T])$ when $H\in L^p([0,T])$ and $F$ is given as above.

Lemma 2.2

Let $H\in L^p([0,T])$ and let $F$ be given as in Lemma 2.1. Then there exists a unique solution $f\in L^p([0,T])$ to the equation
$$f(x)=H(x)+\int_0^x f(s)F(s,x)\,ds.$$

Proof.

For simplicity, write $L^p=L^p([0,T])$. We consider the usual Picard iteration and define
$$f_0(x)=H(x),\qquad f_n(x)=H(x)+\int_0^x f_{n-1}(s)F(s,x)\,ds.$$
Then, using Lemma 2.1 and the Hölder inequality, we see that
$$\|f_{n+1}-f_n\|_{L^p([0,T])}^p=\left\|\int_0^{\cdot}(f_n(s)-f_{n-1}(s))F(s,\cdot)\,ds\right\|_{L^p}^p\leq\|F\|_{\infty;\Delta^{(2)}([0,T])}^p\int_0^T\left[\left(\int_0^x1\,ds\right)^{\frac{p-1}{p}}\|f_n-f_{n-1}\|_{L^p([0,x])}\right]^p dx\leq\|F\|_{\infty;\Delta^{(2)}([0,T])}^p\int_0^T x^{p-1}\|f_n-f_{n-1}\|_{L^p([0,x])}^p\,dx.$$
By iteration, we obtain
$$\|f_{n+1}-f_n\|_{L^p}^p\leq\|F\|_{\infty;\Delta^{(2)}([0,T])}^{np}\,\|H\|_{L^p}^p\int_0^T x_1^{p-1}\int_0^{x_1}\cdots\int_0^{x_{n-1}}x_n^{p-1}\,dx_n\cdots dx_1=\|F\|_{\infty;\Delta^{(2)}([0,T])}^{np}\,\frac{1}{p^n\,n!}\,T^{np}\,\|H\|_{L^p}^p,$$
and more generally we see that, for $m>n$,
$$\|f_m-f_n\|_{L^p}\leq\sum_{i=n}^{m-1}\|f_{i+1}-f_i\|_{L^p}\leq\|H\|_{L^p}\sum_{i=n}^{m-1}\|F\|_{\infty;\Delta^{(2)}([0,T])}^{i}\,\frac{T^{i}}{p^{\frac ip}(i!)^{\frac1p}}.$$
Therefore, $\{f_n\}_{n\in\mathbb{N}}$ is Cauchy in $L^p$, i.e. $\|f_m-f_n\|_{L^p}\to0$ as $n,m\to\infty$, and we denote by $f^*$ its limit. Furthermore, we can choose a subsequence $\{f_{n_k}\}$ converging almost everywhere and in $L^p$ to $f^*$. By applying the dominated convergence theorem, $f^*$ solves the original linear Volterra integral equation,
$$f^*(x)=H(x)+\int_0^x f^*(s)F(s,x)\,ds\quad\text{in }L^p([0,T]),$$
and we set $f:=f^*$. The uniqueness of the solution is immediate from the linearity of the equation. Indeed, if $f_1$ and $f_2$ both solve the above equation, then their difference satisfies
$$|f_1(x)-f_2(x)|=\left|\int_0^x(f_1(s)-f_2(s))F(s,x)\,ds\right|\leq\|F\|_{\infty;\Delta^{(2)}([0,T])}\int_0^x|f_1(s)-f_2(s)|\,ds,$$
and the conclusion follows from Grönwall's inequality.
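The Picard argument translates directly into a numerical scheme. The sketch below (our own; grid, trapezoid quadrature and stopping rule are illustrative choices, and `F` is assumed to accept a vector first argument) solves the linear Volterra equation by fixed-point iteration and checks the constant-kernel case, whose exact solution is $f(x)=e^{kx}$ when $H\equiv1$ and $F\equiv k$.

```python
import numpy as np

# Minimal sketch: Picard iteration for f(x) = H(x) + int_0^x f(s) F(s,x) ds
# on a uniform grid, mirroring the fixed-point argument of Lemma 2.2.
def solve_volterra(H_vals, F, x, n_iter=50, tol=1e-10):
    f = H_vals.copy()
    for _ in range(n_iter):
        f_new = np.array([
            H_vals[i] + np.trapz(f[: i + 1] * F(x[: i + 1], x[i]), x[: i + 1])
            for i in range(len(x))
        ])
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f

x = np.linspace(0.0, 1.0, 401)
f = solve_volterra(np.ones_like(x), lambda s, xi: 0.1 * np.ones_like(s), x)
print(np.max(np.abs(f - np.exp(0.1 * x))))            # constant-kernel check: f = exp(0.1 x)
```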

The next lemma shows that the functional $G_0$ in Equation (4) is an element of $L^p$ when acting on a certain class of functions.

Lemma 2.3

Let $g\in C^{\alpha(\cdot)+\epsilon}([0,T];\mathbb{R})$ with $\alpha\in C^1([0,T],[a,b])$ for $[a,b]\subset(0,1)$ and $\epsilon<1-\alpha^*$. Furthermore, assume that for some $p>1$ the regularity function $\alpha$ satisfies the inequality
$$\alpha^*p<1.$$
Then the functional $G_0$ evaluated at $g$, defined by
$$G_0(g)(x)=\frac{1}{B(\alpha(x),1-\alpha(x))}\frac{d}{dx}\left(\int_0^x\frac{\Gamma(\alpha(t))g(t)}{(x-t)^{\alpha(t)}}\,dt\right),$$
is an element of $L^p([0,T])$.

Proof.

Let us first look at $G_0(g)(x)$, ignoring the factor involving the Beta function, as this factor is already well behaved: there exists a $\beta>0$ such that
$$0<\beta\leq B(\alpha(x),1-\alpha(x))\quad\text{for all }x\in[0,T],$$
since $\alpha(x)\in[a,b]$ for all $x\in[0,T]$. Therefore, we need to prove that
$$\tilde G_0(g)(x)=\frac{d}{dx}\left(\int_0^x\frac{\Gamma(\alpha(t))g(t)}{(x-t)^{\alpha(t)}}\,dt\right)=:\frac{d}{dx}\left(\int_0^x K(x,t)g(t)\,dt\right)$$
is an element of $L^p([0,T])$. In particular, we will show that the integral above is in fact continuously differentiable on $(0,T]$, and then show that the derivative is an element of $L^p([0,T])$. Expanding the integral by adding and subtracting $g(x)$, we see that
$$\int_0^x K(x,t)g(t)\,dt=g(x)\int_0^x K(x,t)\,dt-\int_0^x K(x,t)\,(g(x)-g(t))\,dt.$$
Then one can show that
$$\frac{d}{dx}\left(\int_0^x K(x,t)g(t)\,dt\right)=g(x)\frac{d}{dx}\int_0^x K(x,t)\,dt-\int_0^x\frac{\partial}{\partial x}K(x,t)\,(g(x)-g(t))\,dt.$$
See the Appendix for a proof of this relation, which is based on the definition of the derivative. In this way, we can make sense of the derivative by proving that the above two terms are elements of $L^p([0,T])$ when $g\in C^{\alpha(\cdot)+\epsilon}([0,T];\mathbb{R})$.

Writing the second term explicitly, a straightforward differentiation gives
$$\int_0^x\frac{\partial}{\partial x}K(x,t)\,(g(x)-g(t))\,dt=-\int_0^x\frac{\alpha(t)\Gamma(\alpha(t))\,(g(x)-g(t))}{(x-t)^{\alpha(t)+1}}\,dt.$$
Notice that $\frac{\partial}{\partial x}K(x,t)$ is singular, but since $g\in C^{\alpha(\cdot)+\epsilon}([0,T];\mathbb{R})$, we get the estimate
$$\left|\int_0^x\frac{\partial}{\partial x}K(x,t)\,(g(x)-g(t))\,dt\right|\leq\|g\|_{C^{\alpha(\cdot)+\epsilon}([0,T])}\int_0^x\alpha(t)\,|\Gamma(\alpha(t))|\,|x-t|^{\max(\alpha(x),\alpha(t))-\alpha(t)+\epsilon-1}\,dt=:\|g\|_{C^{\alpha(\cdot)+\epsilon}([0,T])}\,P_1(x,\epsilon).$$
Since the Gamma function is bounded on any $[a,b]\subset(0,1)$, $\alpha$ is $C^1$, and
$$\max(\alpha(x),\alpha(t))-\alpha(t)=\max(\alpha(x)-\alpha(t),0)\geq0,$$
the singularity is integrable; hence the above is well defined for all $x\in[0,T]$. The function $P_1$ depends on $x$ and $\epsilon$, but also on $\alpha$, and we see that
$$|P_1(x,\epsilon)|\leq C(\alpha)\,\frac{x^{\epsilon}}{\epsilon}.$$
Therefore, $P_1\in L^p([0,T])$ for all $p$, as long as $\epsilon>0$.

We are left to prove that the first term is an element of $L^p$, i.e. we need to show that
$$x\mapsto g(x)\frac{d}{dx}\int_0^x\frac{\Gamma(\alpha(t))}{(x-t)^{\alpha(t)}}\,dt\in L^p([0,T]).$$
Write $M(x)=\int_0^x\frac{\Gamma(\alpha(t))}{(x-t)^{\alpha(t)}}\,dt$ and set $u=x-t$; then by a change of variables $M(x)=\int_0^x\Gamma(\alpha(x-u))\,u^{-\alpha(x-u)}\,du$, and by the Leibniz integral rule we have, for all $x\in(0,T]$,
$$\frac{d}{dx}M(x)=\Gamma(\alpha(0))\,x^{-\alpha(0)}+\int_0^x\frac{d}{dx}\Big(\Gamma(\alpha(x-u))\,u^{-\alpha(x-u)}\Big)\,du,$$
and we write $g(x)\frac{d}{dx}M(x)=g(x)M_1(x)+g(x)M_2(x)$. The derivative inside the integral in $M_2$ can be calculated explicitly as follows:
$$\frac{d}{dx}\Big(\Gamma(\alpha(x-u))\,u^{-\alpha(x-u)}\Big)=u^{-\alpha(x-u)}\,\alpha'(x-u)\,\Gamma(\alpha(x-u))\Big(\ln(1/u)+\psi_0(\alpha(x-u))\Big),$$
where $\psi_0$ is the digamma function. Note that since $\alpha(t)\in[a,b]\subset(0,1)$, the quantity
$$C(\Gamma,\psi_0;[a,b])=\sup_{r\in[a,b]}\max\big(|\Gamma(r)|,|\Gamma(r)\psi_0(r)|\big)<\infty.$$
Using the fact that $\alpha\in C^1([0,T])$, it follows that
$$\left|\frac{d}{dx}\Big(\Gamma(\alpha(x-u))\,u^{-\alpha(x-u)}\Big)\right|\leq\|\alpha'\|_{\infty}\,C(\Gamma,\psi_0;[a,b])\,u^{-\alpha^*}\big(1+|\ln(1/u)|\big).$$
Since $\alpha\in(0,1)$, using the bound found above, elementary computations yield that there exists a positive constant $C=C(T,[a,b])>0$ such that
$$\sup_{x\in[0,T]}|M_2(x)|\leq C. \tag{6}$$
Furthermore, observe that there exists a constant $C>0$ such that
$$|g(x)M_1(x)|\leq g^*\,\Gamma(\alpha(0))\,x^{-\alpha(0)}\leq C\,x^{-\alpha^*},$$
which is in $L^p([0,T])$ if $\alpha^*p<1$. Therefore, it follows that $x\mapsto g(x)\frac{d}{dx}M(x)\in L^p([0,T])$ given that $\alpha^*p<1$.

Combining the results above, we obtain that the functional $G_0$ in our representation is well defined as an operator from $C^{\alpha(\cdot)+\epsilon}([0,T];\mathbb{R})$ into $L^p([0,T])$. Furthermore, it follows from the above computations that there exists a function $x\mapsto C_{T,\alpha,\epsilon}(x)\in L^p([0,T])$ such that
$$|G_0(g)(x)|\leq C_{T,\alpha,\epsilon}(x)\,\|g\|_{\alpha(\cdot)+\epsilon;[0,T]}. \tag{7}$$
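The decomposition used in this proof also gives a practical way of evaluating $G_0$. The following is a minimal numerical sketch (our own, not the paper's; midpoint quadrature is crude near the singular endpoint, SciPy's `gamma`/`digamma` are used, and the names `G0`, `beta_fn` are illustrative), with a sanity check against the classical Riemann–Liouville derivative in the constant-order case.

```python
import numpy as np
from scipy.special import gamma, digamma

def beta_fn(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

# Sketch of G_0(g)(x) = [ g(x) d/dx int_0^x K(x,t) dt
#                         - int_0^x d/dx K(x,t) (g(x)-g(t)) dt ] / B(alpha(x), 1-alpha(x)),
# with K(x,t) = Gamma(alpha(t)) (x-t)^{-alpha(t)}.
def G0(g, alpha, dalpha, x, n=4000):
    h = x / n
    u = (np.arange(n) + 0.5) * h                      # substitution u = x - t for M'(x)
    au = alpha(x - u)
    dM = gamma(alpha(0.0)) * x ** (-alpha(0.0)) \
         + np.sum(u ** (-au) * dalpha(x - u) * gamma(au)
                  * (np.log(1.0 / u) + digamma(au))) * h
    t = (np.arange(n) + 0.5) * h                      # second term, d/dx K = -alpha Gamma (x-t)^{-alpha-1}
    at = alpha(t)
    second = np.sum(at * gamma(at) * (x - t) ** (-at - 1.0) * (g(x) - g(t))) * h
    return (g(x) * dM + second) / beta_fn(alpha(x), 1.0 - alpha(x))

alpha  = lambda s: 0.4 + 0.0 * np.asarray(s)          # constant order for the check
dalpha = lambda s: 0.0 * np.asarray(s)
g      = lambda s: np.asarray(s, dtype=float)         # g(t) = t
x = 0.8
print(G0(g, alpha, dalpha, x), x ** 0.6 / gamma(1.6)) # RL derivative of t of order 0.4
```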

Remark 2.2

Note that the functional $G_0(g)$ coincides with the classical Riemann–Liouville derivative when $\alpha(t)=c\in(0,1)$ is constant for all $t\in[0,T]$.

With the previous results at hand, we are now ready to define the multifractional derivative as the unique solution to the linear Volterra type integral equation investigated above. This is then the inverse operator of the generalized Riemann–Liouville integral.

Corollary 2.1

Under the assumptions of Lemma 2.3, define the multifractional derivative $D_{0+}^{\alpha}g$ of a function $g\in C^{\alpha(\cdot)+\epsilon}([0,T];\mathbb{R})$ to be the solution of the integral equation
$$f(x)=G_0(g)(x)+\int_0^x f(s)F(s,x)\,ds,$$
where $G_0$ (depending on $g$) and $F$ are given as in Lemmas 2.3 and 2.1, respectively.

Remark 2.3

The derivative operator defined above is indeed the inverse of the multifractional integral. To see this, assume for a moment that for a function $g\in C^{\alpha(\cdot)+\epsilon}$ we have the derivative
$$D_{0+}^{\alpha}g(x)=f(x)\quad\text{for some }f\in L^p([0,T]).$$
Then we can look at Equation (4), which we now know is well posed, and go backwards through the derivation at the beginning of Section 2 to find that
$$g(t)=\frac{1}{\Gamma(\alpha(t))}\int_0^t(t-s)^{\alpha(t)-1}f(s)\,ds=(I_{0+}^{\alpha}f)(t)=\big(I_{0+}^{\alpha}(D_{0+}^{\alpha}g)\big)(t), \tag{8}$$
and hence $g\in I_{0+}^{\alpha}L^p([0,T])$.

Remark 2.4

From Remarks 2.1 and 2.2 it now follows directly that the multifractional derivative $D_{0+}^{\alpha}g$ coincides with the classical Riemann–Liouville derivative when $\alpha(t)=c\in(0,1)$ is constant for all $t\in[0,T]$. Indeed, in this case $F=0$ and $G_0(g)$ coincides with the Riemann–Liouville fractional derivative, so $D_{0+}^{\alpha}g$ as defined above coincides with the Riemann–Liouville derivative.

By simple iteration of the linear Volterra equation in Corollary 2.1, we can obtain an explicit representation of this multifractional derivative.

Theorem 2.1

The multifractional derivative can be represented in $L^p([0,T])$ as the infinite series of integrals
$$D_{0+}^{\alpha}g(x)=G_0(g)(x)+\sum_{m=1}^{\infty}\int_{\Delta^{(m)}(0,x)}G_0(g)(s_{m+1})\,F(s_{m+1},s_m)\times\cdots\times F(s_1,x)\,ds_{m+1}\cdots ds_1,$$
for any $g\in C^{\alpha(\cdot)+\epsilon}([0,T];\mathbb{R})$ with $\alpha\in C^1([0,T],[a,b])$ for $[a,b]\subset(0,1)$ and $\epsilon<1-\alpha^*$. Furthermore, assume that for some $p>1$ the regularity function $\alpha$ satisfies the inequality
$$\alpha^*p<1.$$
Then we have the estimate
$$|D_{0+}^{\alpha}g(x)|\leq C(T,\epsilon,\alpha;x)\,\|g\|_{\alpha(\cdot)+\epsilon;[0,T]},$$
where $x\mapsto C(T,\epsilon,\alpha;x)$ is an $L^p$ function for any $p$ satisfying the inequality, and hence $D_{0+}^{\alpha}g\in L^p([0,T])$.

Proof.

The representation is an immediate consequence of Corollary 2.1. We know from Lemmas 2.1 and 2.3, specifically (7), that
$$|G_0(g)(x)|\leq C(T,\epsilon,\alpha;x)\,\|g\|_{\alpha(\cdot)+\epsilon;[0,T]},\qquad x\mapsto C(T,\epsilon,\alpha;x)\in L^p([0,T]),$$
for any $p\geq1$ satisfying the inequality $\alpha^*p<1$. We therefore obtain the bounds
$$\left|\int_{\Delta^{(m)}(0,x)}G_0(g)(s_{m+1})\,F(s_{m+1},s_m)\times\cdots\times F(s_1,x)\,ds_{m+1}\cdots ds_1\right|\leq\|F\|_{\infty;\Delta^{(2)}([0,T])}^m\int_{\Delta^{(m)}(0,x)}|G_0(g)(s_{m+1})|\,ds_{m+1}\cdots ds_1\leq C\,\|F\|_{\infty;\Delta^{(2)}([0,T])}^m\,\|g\|_{\alpha(\cdot)+\epsilon}\int_{\Delta^{(m)}(0,x)}C(T,\epsilon,\alpha;s_{m+1})\,ds_{m+1}\cdots ds_1.$$
We can then use Cauchy's formula for repeated integration,
$$\int_{\Delta^{(m)}(0,x)}f(s_{m+1})\,ds_{m+1}\cdots ds_1=\frac{1}{m!}\int_0^x(x-t)^m f(t)\,dt.$$
Using this, we get the estimate on the representation
$$|D_{0+}^{\alpha}g(x)|\leq C(T,\epsilon,\alpha;x)\,\|g\|_{\alpha(\cdot)+\epsilon;[0,T]}+\|g\|_{\alpha(\cdot)+\epsilon}\sum_{m=1}^{\infty}\int_0^x\frac{\|F\|_{\infty;\Delta^{(2)}([0,T])}^m(x-t)^m}{m!}\,C(T,\epsilon,\alpha;t)\,dt\leq C(T,\epsilon,\alpha;x)\,\|g\|_{\alpha(\cdot)+\epsilon}\exp\Big(\|F\|_{\infty;\Delta^{(2)}([0,T])}\,x\Big),$$
which completes the proof.
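Cauchy's repeated-integration formula used above is easy to verify numerically. The sketch below is our own quick check for $m=2$ and $f(t)=\cos(t)$ (illustrative choices), approximating the simplex integral by a Riemann sum.

```python
import numpy as np

# Check of int_{0<s3<s2<s1<x} f(s3) ds3 ds2 ds1 = (1/2!) int_0^x (x-t)^2 f(t) dt
# for f = cos, via midpoint Riemann sums over the simplex.
n, x = 120, 1.3
h = x / n
t = (np.arange(n) + 0.5) * h
S1, S2, S3 = np.meshgrid(t, t, t, indexing="ij")
lhs = np.sum(np.cos(S3) * ((S3 < S2) & (S2 < S1))) * h ** 3
rhs = 0.5 * np.sum((x - t) ** 2 * np.cos(t)) * h
print(lhs, rhs)                                       # the two should nearly agree
```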

Remark 2.5

With Theorem 2.1, we see that the multifractional differential operator is well defined on any local Hölder space of order $\alpha(\cdot)+\epsilon$ for some small $\epsilon>0$, as long as $p$ satisfies $\alpha^*p<1$. Therefore, we have the relation
$$C^{\alpha(\cdot)+\epsilon}([0,T];\mathbb{R})\subset I_{0+}^{\alpha}L^p([0,T]).$$

2.1. Remark on multifractional calculus

Most of the articles we have found on multifractional calculus, or fractional calculus of variable order, have been related to applications in physics. In an article by Lorenzo and Hartley [16], the authors suggest many applications of multifractional calculus in physics. Particular examples arise when a physical phenomenon is modelled by a fractional differential equation, i.e.
$$D_{0+}^{\alpha}y(t)=f(t,y(t)),\qquad y(0)=y_0\in\mathbb{R},$$
but the fractional-order parameter $\alpha$ depends on a variable which is itself time dependent. An example could be that $\alpha$ is estimated on the basis of temperature, and temperature changes in time. They therefore suggest that by using multifractional differential operators one can overcome this problem, since a different differential order $\alpha$ can be assigned to different times. The multifractional calculus enables us to construct more accurate differential equations for processes whose local time regularity depends on time. Of course, this can also be generalized to multifractional differential operators in space, where the spatial variables of a system have time-dependent local regularity.

Although the multifractional calculus seems like a suitable tool for the differential equations described above, the very concept of multifractional, or even fractional, calculus can be difficult to grasp and to use in practice. When considering fractional calculus, we usually consider an operator behaving well (as an inverse, etc.) with respect to a fractional integral. This makes the operator dependent on the initial point of the integral, and it is not linear with respect to this initial point, in the sense that $I_{a+}^{\alpha}I_{b+}^{\alpha}\neq I_{(a+b)+}^{\alpha}$. Furthermore, we cannot fractionally differentiate a constant (other than 0), as it is not contained in $I_{a+}^{\alpha}L^p$. There is, therefore, a strong dependence of the derivative operator on the choice of the integral (as we have seen, it is in fact completely determined by the chosen integral). The intuition of the framework is therefore far from that of the regular calculus of Newton and Leibniz, and a more rigorous understanding of the properties of multifractional calculus is needed in order to consider differential equations and partial differential equations involving such operators.

3. Girsanov theorem and existence of strong solutions to SDEs driven by multifractional Brownian motion

In this section, we will apply the multifractional calculus developed in Section 2 to analyse differential equations driven by multifractional noise. By multifractional noise (or multifractional Brownian motion), we will from here on mean the Riemann–Liouville multifractional Brownian motion. We therefore begin with the following definition.

Definition 3.1

Let $\{B_t\}_{t\in[0,T]}$ be a one-dimensional Brownian motion on a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in[0,T]},P)$, and let $h:[0,T]\to[a,b]\subset(0,1)$ be a $C^1$ function. We define the Riemann–Liouville multifractional Brownian motion (RLmBm) $\{B^h_t\}_{t\in[0,T]}$ adapted to the filtration $\{\mathcal{F}_t\}_{t\in[0,T]}$ by
$$B^h_t=\frac{1}{\Gamma(h_t+\frac12)}\int_0^t(t-s)^{h_t-\frac12}\,dB_s,\quad t\geq0,$$
where $\Gamma$ is the Gamma function. The function $h$ is called the regularity function of $B^h$.
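A minimal simulation sketch for this process is given below (our own; a left-point discretization of the Wiener integral is an illustrative scheme and is rough near the kernel singularity at $s=t$; the name `rl_mbm` and the choice of $h$ are not from the paper).

```python
import numpy as np
from math import gamma

# Sketch: Riemann-Liouville multifractional Brownian motion on a grid,
#   B^h_t = 1/Gamma(h_t + 1/2) int_0^t (t-s)^{h_t - 1/2} dB_s,
# via a left-point discretization of the Wiener integral.
def rl_mbm(h, T=1.0, n=2000, rng=None):
    rng = rng or np.random.default_rng(1)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dB = rng.normal(0.0, np.sqrt(dt), n)              # Brownian increments
    B = np.zeros(n + 1)
    for i in range(1, n + 1):
        hi = h(t[i])
        B[i] = np.sum((t[i] - t[:i]) ** (hi - 0.5) * dB[:i]) / gamma(hi + 0.5)
    return t, B

t, path = rl_mbm(h=lambda s: 0.2 + 0.2 * s)           # h takes values in (0, 1/2)
```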

The multifractional Brownian motion was first proposed in the 1990s by Peltier and Lévy Véhel in [19] and independently by Benassi, Jaffard and Roux in [2]. The process is non-stationary, and on very small time scales it behaves like a fractional Brownian motion. However, by letting the Hurst parameter of the fractional Brownian motion be a function of time, the Hölder regularity of the process depends on time, and it therefore makes more sense to talk about local rather than global regularity. The process was initially proposed as a generalization of the fBm representation given by Mandelbrot and Van Ness; that is, the mBm was defined for $t\in[0,T]$ by
$$\tilde B^h_t=c(h_t)\left[\int_{-\infty}^0\Big((t-s)^{h_t-\frac12}-(-s)^{h_t-\frac12}\Big)\,dB_s+\int_0^t(t-s)^{h_t-\frac12}\,dB_s\right]=:\tilde B^{(1),h}_t+\tilde B^{(2),h}_t,$$
where $\{B_t\}_{t\in(-\infty,T]}$ is a real-valued two-sided Brownian motion and $h:[0,T]\to(0,1)$ is a continuous function. Notice in the above representation that $\tilde B^{(1),h}_t$ is always measurable with respect to the filtration $\tilde{\mathcal{F}}_0$ (generated by the Brownian motion), as the stochastic integral only 'contributes' from $-\infty$ to 0. Therefore, we can think of $\tilde B^{(2),h}_t$ as the only part which contributes to the stochasticity of $\tilde B^h_t$ when $t>0$. The reason why one also considers the process $\tilde B^{(1),h}_t$ when analysing regular fractional Brownian motions (in the case $h(t)\equiv H$) is to ensure stationarity of the increments. However, when we consider the generalization $\tilde B^h_t$ above with non-constant $h$, we do not obtain stationarity even with such a representation. We are therefore inclined to choose $\tilde B^{(2),h}_t$ as the multifractional noise considered in this article, due to its simple structure. In the literature this multifractional process is often called the Riemann–Liouville multifractional Brownian motion, inspired by the original definition of fractional Brownian motion given by Lévy in the 1940s. The Riemann–Liouville multifractional Brownian motion was first analysed by S. C. Lim in [15] and is well suited to the multifractional calculus constructed above for the analysis of differential equations driven by this process.

For a longer discussion of the properties of the multifractional Brownian motion, we refer to [1,4–6,13–15].

We will begin by proving a Girsanov theorem for multifractional Brownian motion, and as an application of this theorem we construct weak solutions to SDEs driven by an additive RLmBm.

3.1. Girsanov's theorem

As before, let $B^h_t$ denote the multifractional Brownian motion with its natural filtration $\{\mathcal{F}_t\}_{t\in[0,T]}$. We would like to show, through a Girsanov theorem, that a perturbation of $B^h$ by a specific type of function gives an mBm under a new probability measure. Similar to the derivation found in [18] in the case of fBm, we consider the perturbation
$$\tilde B^h_t=B^h_t+\int_0^t u_s\,ds=\frac{1}{\Gamma(h_t+\frac12)}\int_0^t(t-s)^{h_t-\frac12}\,dB_s+\int_0^t u_s\,ds=\frac{1}{\Gamma(h_t+\frac12)}\int_0^t(t-s)^{h_t-\frac12}\,d\tilde B_s, \tag{9}$$
where the process $\tilde B$ is given by
$$\tilde B_t=B_t+\int_0^t D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}u_s\,ds\Big)(r)\,dr.$$
The multifractional derivative $D_{0+}^{h+\frac12}\big(\int_0^{\cdot}u_s\,ds\big)$ in the last equation is well defined as an element of $L^2([0,T])$ as long as $\int_0^{\cdot}u_s\,ds\in I_{0+}^{h+\frac12}L^2([0,T])$.

Theorem 3.1 (Girsanov theorem for RLmBm)

Let $\{B^h_t\}_{t\in[0,T]}$ be a Riemann–Liouville multifractional Brownian motion with regularity function $h:[0,T]\to[a,b]\subset(0,\frac12)$ such that $h\in C^1$. Assume that for some $\epsilon>0$ with $\frac12-h(t)>\epsilon$ for all $t\in[0,T]$, the following two hypotheses hold:

  1. $\int_0^{\cdot}u_s\,ds$ is adapted, and $D_{0+}^{h+\frac12}\big(\int_0^{\cdot}u_s\,ds\big)\in L^2([0,T])$ a.s.

  2. $E[Z_T]=1$ for $$Z_T:=\exp\left[-\int_0^T D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}u_s\,ds\Big)(r)\,dB_r-\frac12\int_0^T\Big(D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}u_s\,ds\Big)(r)\Big)^2\,dr\right].$$

Then the stochastic process
$$\tilde B^h_t=B^h_t+\int_0^t u_s\,ds$$
is an RLmBm under the measure $\tilde P$ defined by $\frac{d\tilde P}{dP}=Z_T$.

Proof.

The proof follows from the derivations in (9): applying the classical Girsanov theorem to $\tilde B_t=B_t+\int_0^t D_{0+}^{h+\frac12}\big(\int_0^{\cdot}u_s\,ds\big)(r)\,dr$, and using that $C^{h(\cdot)+\frac12+\epsilon}([0,T];\mathbb{R})\subset I_{0+}^{h+\frac12}L^2([0,T])$, shows that $\tilde B$ is a Brownian motion under $\tilde P$.

Remark 3.1

The reason we only consider $h:[0,T]\to[a,b]\subset(0,\frac12)$ is that the multifractional derivative is only constructed for $\alpha:[0,T]\to[c,d]\subset(0,1)$; since we need $\alpha=h+\frac12$ in order to handle the RLmBm, we must restrict to $0<h(t)<\frac12$ for all $t\in[0,T]$. We are currently working on a way to define the multifractional derivative for general functions $\alpha$, i.e. such that we can differentiate to a variable order whose largest value is above 1 and smallest value is below 1.

3.2. Existence of weak solutions

As an application of the Girsanov theorem proven above, we now consider SDEs driven by an additive RLmBm. To this end, we follow the strategy outlined by Nualart and Ouknine [18] in the case of regular fractional Brownian motion (i.e. the multifractional Brownian motion with constant regularity function) and extend their results to the multifractional case by invoking the Girsanov theorem for RLmBm. We begin by proving weak existence of a solution. By this we mean that there exist two stochastic processes $X$ and $B^h$ on a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in[0,T]},P)$ such that $B^h$ is an $\mathcal{F}_t$-RLmBm according to Definition 3.1, and $X$ and $B^h$ satisfy the equation
$$X^x_t=x+\int_0^t b(s,X^x_s)\,ds+B^h_t,\qquad X^x_0=x\in\mathbb{R},\ t\in[0,T]. \tag{10}$$

Theorem 3.2

Let $B^h$ be an RLmBm on a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in[0,T]},P)$, given as in Definition 3.1. Suppose $b$ is integrable and of linear growth, i.e. there exists a constant $\tilde C>0$ such that $|b(t,x)|\leq\tilde C(1+|x|)$ for all $(t,x)\in[0,T]\times\mathbb{R}$. Then there exists a weak solution $\{(X^x_t,B^h_t)\}_{t\in[0,T]}$ to Equation (10).

Proof.

Set $u_s=b(s,B^h_s+x)$, $\tilde B^h_t=B^h_t+\int_0^t u_s\,ds$, and $v_t=D_{0+}^{h+\frac12}\big(\int_0^{\cdot}u_s\,ds\big)(t)$. We need to check that the process $\int_0^{\cdot}u_s\,ds$ satisfies conditions (i) and (ii) in Theorem 3.1. First we show (i), i.e. that $\int_0^{\cdot}u_s\,ds$ is adapted and $v\in L^2([0,T])$. Adaptedness follows directly since $D_{0+}^{h+\frac12}$ is a deterministic operator and $b(s,B^h_s+x)$ is adapted. By Theorem 2.1, we know that
$$|v_t|\leq\Big\|\int_0^{\cdot}u_s\,ds\Big\|_{h(\cdot)+\frac12+\epsilon;[0,T]}\,C(T,\epsilon,h+\tfrac12;t),$$
where $t\mapsto C(T,\epsilon,h+\frac12;t)\in L^2([0,T])$ since $2h^*<1$, for some small $\epsilon>0$. Moreover, by the linear growth of $b$ we have the estimate
$$\Big\|\int_0^{\cdot}u_s\,ds\Big\|_{h(\cdot)+\frac12+\epsilon;[0,T]}\leq\tilde C\,\big(1+\|B^h\|_{\infty}+|x|\big)\,T^{(\frac12-h^*-\epsilon)}. \tag{11}$$
By Fernique's theorem [9] (or e.g. [7, Thm. 2.7]), it follows that $\|B^h\|_{\infty}$ has finite exponential moments of all orders, and therefore
$$t\mapsto v_t\in L^2([0,T])\quad P\text{-a.s.} \tag{12}$$
Next we consider condition (ii). Since we have already proven that $v$ is adapted, it is sufficient to prove that Novikov's condition is satisfied [10, Cor. 5.13], i.e. that there exists a $\lambda>0$ such that
$$\sup_{0\leq s\leq T}E\big[\exp(\lambda v_s^2)\big]<\infty.$$
By Equation (11) and Fernique's theorem, it is readily seen that the above condition is satisfied. Existence of a weak solution then follows from the Girsanov Theorem 3.1.

3.3. Uniqueness in law and path-wise uniqueness

We will prove uniqueness in law and path-wise uniqueness in the same manner as in [18], Section 3.3. The technique is very similar, although we have different bounds on our differential equations, corresponding to the multifractional nature of our equation.

Theorem 3.3

Let $(X,B^h)$ be a weak solution to Equation (10) defined on the corresponding filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in[0,T]},P)$. Then the weak solution is unique in law and, moreover, is unique almost surely.

Proof.

Start again with
$$\tilde u_t=D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}b(s,X_s)\,ds\Big)(t),$$
and let a new probability measure $\tilde P$ be defined by the Radon–Nikodym derivative
$$\frac{d\tilde P}{dP}=\exp\left(-\int_0^T\tilde u_r\,dB_r-\frac12\int_0^T\tilde u_r^2\,dr\right),$$
where $B$ denotes a standard Brownian motion. Although this Radon–Nikodym derivative is similar to the one before, notice that we now have the solution of the differential equation inside the function $b$. We will show that this new process $\tilde u$ still satisfies conditions (i) and (ii) of the Girsanov Theorem 3.1. Since we know that
$$|X_t|\leq|x|+\Big|\int_0^t b(s,X_s)\,ds\Big|+|B^h_t|\leq|x|+\int_0^t\tilde C(1+|X_s|)\,ds+|B^h_t|,$$
we have by Grönwall's inequality that
$$\|X\|_{\infty;[0,T]}\leq\tilde C\,\big(|x|+\|B^h\|_{\infty}+T\big)\exp(\tilde C T)\quad P\text{-a.s.}$$
Moreover, by the estimates on the multifractional derivative from Theorem 2.1, we know that for any $\epsilon>0$ with $h^*+\epsilon<\frac12$ we have
$$|\tilde u_t|\leq\Big\|\int_0^{\cdot}b(s,X_s)\,ds\Big\|_{h(\cdot)+\frac12+\epsilon;[0,T]}\,C(T,\epsilon,h+\tfrac12;t),$$
and since
$$\Big\|\int_0^{\cdot}b(s,X_s)\,ds\Big\|_{h(\cdot)+\frac12+\epsilon;[0,T]}\leq C\,\big(1+\|X\|_{\infty;[0,T]}\big)\,T^{(\frac12-h^*-\epsilon)},$$
we obtain the estimate
$$|\tilde u_t|\leq C\,\big(1+\|X\|_{\infty;[0,T]}\big)\,C(T,\epsilon,h+\tfrac12;t),$$
where $t\mapsto C(T,\epsilon,h+\frac12;t)\in L^2([0,T])$. Combining the above estimates, and again using Fernique's theorem [7, Thm. 2.7], we see that Novikov's condition is satisfied and $E\big[\frac{d\tilde P}{dP}\big]=1$. The classical Girsanov theorem then states that the process
$$\tilde B_t=\int_0^t\tilde u_r\,dr+B_t$$
is an $\{\mathcal{F}_t\}$-Brownian motion under $\tilde P$, and we can then write
$$X_t=x+\int_0^t K_h(t,s)\,d\tilde B_s,$$
where $K_h(t,s)=\frac{1}{\Gamma(h_t+\frac12)}(t-s)^{h_t-\frac12}$. Correspondingly, we define $\tilde B^h_t:=\int_0^t K_h(t,s)\,d\tilde B_s$. Now $X^x_t-x$ is an $\{\mathcal{F}_t\}$-multifractional Brownian motion with respect to $\tilde P$. Therefore, we must have that $X^x_t$ and $\tilde B^h_t$ have the same distribution under the probability $P$. We can show this by considering a measurable functional $\varphi$ on $C([0,T])$:
$$E_P[\varphi(X^x)]=E_{\tilde P}\left[\varphi(X^x)\exp\left(\int_0^T D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}b(s,X_s)\,ds\Big)(r)\,dB_r+\frac12\int_0^T\Big(D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}b(s,X_s)\,ds\Big)(r)\Big)^2\,dr\right)\right]$$
$$=E_{\tilde P}\left[\varphi(X^x)\exp\left(\int_0^T D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}b(s,X_s)\,ds\Big)(r)\,d\tilde B_r-\frac12\int_0^T\Big(D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}b(s,X_s)\,ds\Big)(r)\Big)^2\,dr\right)\right]$$
$$=E_P\left[\varphi(B^h)\exp\left(\int_0^T D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}b(s,x+B^h_s)\,ds\Big)(r)\,dB_r-\frac12\int_0^T\Big(D_{0+}^{h+\frac12}\Big(\int_0^{\cdot}b(s,x+B^h_s)\,ds\Big)(r)\Big)^2\,dr\right)\right]=E_P[\varphi(\tilde B^h)].$$
For the almost-sure uniqueness, assume there exist two weak solutions $X^1$ and $X^2$ on the same probability space $(\Omega,\mathcal{F},\tilde P)$; then $\max(X^1,X^2)$ and $\min(X^1,X^2)$ are again solutions and, according to the above proof, have the same distribution. Indeed, define the stopping time $\tau_1=\inf\{t\geq0\,|\,X^1_t<X^2_t\}$. Then $X^1_t\geq X^2_t$ for all $t\in[0,\tau_1]$, and $Y_t=\max(X^1_t,X^2_t)$ solves
$$Y_t=X^1_t=x+\int_0^t b(s,X^1_s)\,ds+B^h_t=x+\int_0^t b(s,Y_s)\,ds+B^h_t,\quad t\in[0,\tau_1],$$
and thus $Y$ solves (10) on $[0,\tau_1]$. Then define $\tau_2=\inf\{t\geq\tau_1\,|\,X^1_t>X^2_t\}$, so that $X^2_t\geq X^1_t$ on $[\tau_1,\tau_2]$, and similarly as above,
$$Y_t=X^2_t=x+\int_0^t b(s,X^2_s)\,ds+B^h_t=x+\int_0^{\tau_1}b(s,X^2_s)\,ds+B^h_{\tau_1}+\int_{\tau_1}^t b(s,X^2_s)\,ds+B^h_t-B^h_{\tau_1}=X^1_{\tau_1}+\int_{\tau_1}^t b(s,Y_s)\,ds+B^h_t-B^h_{\tau_1}=x+\int_0^t b(s,Y_s)\,ds+B^h_t,\quad t\in[\tau_1,\tau_2],$$
where we have used that $X^1_{\tau_1}=X^2_{\tau_1}=Y_{\tau_1}$. Thus $Y$ solves (10) on $[\tau_1,\tau_2]$, and therefore also on $[0,\tau_2]$. By extending the above argument, we see that $Y$ is a solution to (10) on $[0,T]$. A similar argument shows that $\min(X^1,X^2)$ is also a solution to (10).

We can write $\max(x,y)=\frac12(x+y+|x-y|)$ and $\min(x,y)=\frac12(x+y-|x-y|)$, and therefore
$$E[\max(X^1,X^2)]=E[\min(X^1,X^2)]$$
implies that
$$E_{\tilde P}\big[|X^1-X^2|\big]=-E_{\tilde P}\big[|X^1-X^2|\big],$$
which of course is only possible if $X^1=X^2$ a.s.

Remark 3.2

Although in this paper we limit ourselves to the study of SDEs with drift coefficients of linear growth, one can easily obtain weak existence for time-singular Volterra equations of the form
$$X_t=x+\int_0^t V(t,s)\,b(s,X_s)\,ds+B^h_t,$$
where $V$ is a singular and square integrable deterministic Volterra kernel and the drift $b$ is still of linear growth. As we have seen in Theorem 3.3, we only need to prove that
$$\Big\|\int_0^{\cdot}V(\cdot,s)\,b(s,X_s)\,ds\Big\|_{h(\cdot)+\frac12+\epsilon}<\infty.$$
This can be verified in the case $|V(t,s)|\leq C|t-s|^{h^*+\frac12+\epsilon-1}$, as we then obtain $\big|\int_s^t V(t,r)\,dr\big|\leq C|t-s|^{h^*+\frac12+\epsilon}$. We are currently writing a paper studying such singular Volterra equations, with merely linear growth conditions on the drift, and their relation to stochastic fractional differential equations.

3.4. Existence of strong solutions

We are now ready to prove that the solution found in the previous section is in fact a strong solution. By a strong solution we mean a progressively measurable process $X$, adapted to the filtration $\{\mathcal{F}_t\}_{t\in[0,T]}$, which satisfies (10) $P$-a.s. for all $t\in[0,T]$.

The following three results can be viewed as counterparts, or generalizations, of Propositions 6 and 7 and Theorem 8 in [18], adapted to the Riemann–Liouville multifractional Brownian motion.

Proposition 3.1

Let $X$ denote a weak solution constructed above, but where the drift $b$ is a uniformly bounded function. Fix a constant $\rho>\sup_{t\in[0,T]}h_t+1$. For any measurable and non-negative function $g:[0,T]\times\mathbb{R}\to\mathbb{R}$, there exists a constant $K$ depending on $b$, $\rho$ and $T$ such that
$$E\left[\int_0^T g(t,X_t)\,dt\right]\leq K\left(\int_0^T\int_{\mathbb{R}}g(t,x)^{\rho}\,dx\,dt\right)^{\frac1\rho}.$$

Proof.

By elementary computations, we begin by observing that
$$E\big[(B^h_t)^2\big]=\frac{1}{\Gamma(h_t+\frac12)^2}\int_0^t(t-s)^{2h_t-1}\,ds=c(h_t)\,t^{2h_t},\qquad c(h_t):=\frac{1}{2h_t\,\Gamma(h_t+\frac12)^2}.$$
Let $Z=\frac{d\tilde P}{dP}$ be the Radon–Nikodym derivative constructed in Theorem 3.3. Then, by the Hölder inequality with $\frac1\alpha+\frac1\beta=1$, we have the estimate
$$E_P\left[\int_0^T g(t,X_t)\,dt\right]=E_{\tilde P}\left[Z^{-1}\int_0^T g(t,X_t)\,dt\right]\leq\big(E_{\tilde P}[Z^{-\alpha}]\big)^{\frac1\alpha}\left(E_{\tilde P}\left[\int_0^T g(t,X_t)^{\beta}\,dt\right]\right)^{\frac1\beta}.$$
The expectation of $Z^{-\alpha}$ is explicitly given, for any $\alpha>1$, by
$$E_{\tilde P}[Z^{-\alpha}]=E_{\tilde P}\left[\exp\left(\alpha\int_0^T u_s\,dB_s+\frac{\alpha}{2}\int_0^T u_s^2\,ds\right)\right]=E_{\tilde P}\left[\exp\left(\alpha\int_0^T u_s\,d\tilde B_s-\frac{\alpha}{2}\int_0^T u_s^2\,ds\right)\right]\leq K(b,T,\alpha),$$
obtained by the arguments given in the proof of Theorem 3.3. Next we look at the other term,
$$E_{\tilde P}\left[\int_0^T g(t,X_t)^{\beta}\,dt\right]=\int_0^T\frac{1}{\sqrt{2\pi c(h_t)}\,t^{h_t}}\int_{\mathbb{R}}g(t,y)^{\beta}\exp\left(-\frac{(y-x)^2}{2c(h_t)t^{2h_t}}\right)dy\,dt\leq\frac{1}{\sqrt{2\pi}}\left(\int_0^T\int_{\mathbb{R}}g(t,y)^{\beta\gamma}\,dy\,dt\right)^{\frac1\gamma}\left(\int_0^T t^{-h_t\gamma'}\int_{\mathbb{R}}e^{-\gamma'\frac{(x-y)^2}{2t^{2h_t}}}\,dy\,dt\right)^{\frac1{\gamma'}},$$
where we applied the Hölder inequality again, with $\gamma'=\frac{\gamma}{\gamma-1}$, $\frac1\gamma+\frac1{\gamma'}=1$ and $\sup_{t\in[0,T]}h_t+1<\gamma$. Explicit calculations then yield
$$E_{\tilde P}\left[\int_0^T g(t,X_t)^{\beta}\,dt\right]\leq\frac{1}{\sqrt{2\pi}}\left(\int_0^T\int_{\mathbb{R}}g(t,y)^{\beta\gamma}\,dy\,dt\right)^{\frac1\gamma}\left(\int_0^T t^{(1-\gamma')h_t}\,dt\right)^{\frac1{\gamma'}}\leq\frac{c(T,\gamma)}{\sqrt{2\pi}}\left(\int_0^T\int_{\mathbb{R}}g(t,y)^{\beta\gamma}\,dy\,dt\right)^{\frac1\gamma},$$
and the result follows.
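The variance identity used at the start of this proof follows from the Itô isometry and can be checked numerically; the sketch below is our own quadrature check (midpoint rule; parameter values are illustrative).

```python
import numpy as np
from math import gamma

# Quadrature check of E[(B^h_t)^2] = int_0^t (t-s)^{2 h_t - 1} ds / Gamma(h_t + 1/2)^2
#                                  = t^{2 h_t} / (2 h_t Gamma(h_t + 1/2)^2).
h, t, n = 0.3, 0.7, 200_000
s = (np.arange(n) + 0.5) * (t / n)                    # midpoints avoid s = t
lhs = np.sum((t - s) ** (2 * h - 1)) * (t / n) / gamma(h + 0.5) ** 2
rhs = t ** (2 * h) / (2 * h * gamma(h + 0.5) ** 2)
print(lhs, rhs)
```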

The next proposition shows that if we can find a sequence of bounded and measurable functions $\{b_n\}_{n\in\mathbb{N}}$ converging almost everywhere to the drift function $b$ in Equation (10), and the corresponding solutions $X^{(n)}$ to Equation (10) constructed with $b_n$ converge to some process $X$, then $X$ is a solution to the original Equation (10). Once we have proved this result, we will show that such a sequence of functions $\{b_n\}_{n\in\mathbb{N}}$ actually exists when $b$ belongs to a certain class, and therefore, by the path-wise uniqueness of the solution, we obtain that the solution to Equation (10) is a strong solution.

Proposition 3.2

Let $\{b_n\}_{n\in\mathbb{N}}$ be a sequence of measurable functions on $[0,T]\times\mathbb{R}$, uniformly bounded by a constant $C$, such that
$$b_n(t,x)\to b(t,x)\quad\text{for a.a. }(t,x)\in[0,T]\times\mathbb{R}.$$
Assume moreover that the corresponding solutions $X^{(n)}_t$ of Equation (10), given by
$$X^{(n)}_t=x+\int_0^t b_n(s,X^{(n)}_s)\,ds+B^h_t,$$
converge to a process $X_t$ for all $t\in[0,T]$. Then $X$ is a solution to Equation (10).

Proof.

We need to show that
$$E\left[\int_0^T\big|b_n(s,X^{(n)}_s)-b(s,X_s)\big|\,ds\right]\to0\quad\text{as }n\to\infty.$$
Adding and subtracting $b_n(s,X_s)$ in the above expression, we find
$$E\left[\int_0^T\big|b_n(s,X^{(n)}_s)-b(s,X_s)\big|\,ds\right]\leq E\left[\int_0^T\big|b_n(s,X_s)-b(s,X_s)\big|\,ds\right]+\sup_{k\in\mathbb{N}}E\left[\int_0^T\big|b_k(s,X^{(n)}_s)-b_k(s,X_s)\big|\,ds\right]=:J_1(n)+J_2(n).$$
Moreover, there exists a smooth function $\kappa:\mathbb{R}\to\mathbb{R}$ such that $\kappa(0)=1$, $\kappa(z)\in[0,1]$ for all $|z|<1$, and $\kappa(z)=0$ for all $|z|\geq1$. Fix $\epsilon>0$ and let $R$ be a constant such that
$$E\left[\int_0^T\Big(1-\kappa\Big(\frac{X_t}{R}\Big)\Big)\,dt\right]<\epsilon.$$
By the Fréchet–Kolmogorov theorem (a compactness criterion for $L^2(B)$ when $B$ is a bounded subset of $\mathbb{R}^d$), the sequence of functions $\{b_k\}_{k\in\mathbb{N}}$ is relatively compact in $L^2([0,T]\times[-R,R])$, and therefore we can find a finite set of smooth functions $\{H_1,\dots,H_N\}\subset L^2([0,T]\times[-R,R])$ such that for every $k$,
$$\int_0^T\int_{-R}^{R}\big|b_k(t,x)-H_i(t,x)\big|\,dx\,dt<\epsilon^2,$$
for some $H_i\in\{H_1,\dots,H_N\}$. Using this result, we find
$$E\left[\int_0^T\big|b_k(s,X^{(n)}_s)-b_k(s,X_s)\big|\,ds\right]\leq E\left[\int_0^T\big|b_k(t,X^{(n)}_t)-H_i(t,X^{(n)}_t)\big|\,dt\right]+E\left[\int_0^T\big|H_i(t,X^{(n)}_t)-H_i(t,X_t)\big|\,dt\right]+E\left[\int_0^T\big|b_k(t,X_t)-H_i(t,X_t)\big|\,dt\right]=:I_1(n,k)+I_2(n)+I_3(k).$$
We consider the components $I_u$, $u=1,2,3$, separately, and start with $u=1$. Using the function $\kappa$ defined earlier, we can write
$$I_1(n,k)=E\left[\int_0^T\kappa\Big(\frac{X^{(n)}_t}{R}\Big)\big|b_k(t,X^{(n)}_t)-H_i(t,X^{(n)}_t)\big|\,dt\right]+E\left[\int_0^T\Big(1-\kappa\Big(\frac{X^{(n)}_t}{R}\Big)\Big)\big|b_k(t,X^{(n)}_t)-H_i(t,X^{(n)}_t)\big|\,dt\right].$$
By Proposition 3.1, the first of the two terms above is bounded as
$$E\left[\int_0^T\kappa\Big(\frac{X^{(n)}_t}{R}\Big)\big|b_k(t,X^{(n)}_t)-H_i(t,X^{(n)}_t)\big|\,dt\right]\leq K_1\left(\int_0^T\int_{-R}^{R}\big|b_k(t,x)-H_i(t,x)\big|^2\,dx\,dt\right)^{1/2},$$
where $K_1$ depends on $b$, $\sup_i\|H_i\|_{\infty}$ and $T$. For the second term, we have
$$E\left[\int_0^T\Big(1-\kappa\Big(\frac{X^{(n)}_t}{R}\Big)\Big)\big|b_k(t,X^{(n)}_t)-H_i(t,X^{(n)}_t)\big|\,dt\right]\leq K_2\,E\left[\int_0^T\Big(1-\kappa\Big(\frac{X^{(n)}_t}{R}\Big)\Big)\,dt\right],$$
where $K_2$ also depends on $b$ and $\sup_i\|H_i\|_{\infty}$. Setting $K=\max(K_1,K_2)$, by the properties of $H_i$ and $\kappa$ shown above in relation to the sequence $\{b_k\}$, we have that
$$\lim_{n\to\infty}\sup_k I_1(n,k)\leq K\epsilon.$$
In the same way, we can show that, for a constant $K$ similar to the one above,
$$\sup_k I_3(k)\leq K\epsilon.$$
Finally, $\lim_{n\to\infty}I_2(n)=0$ since $X^{(n)}\to X$. Since $\epsilon>0$ was arbitrary, it follows that
$$\lim_{n\to\infty}J_2(n)=\lim_{n\to\infty}\sup_k\big(I_1(n,k)+I_2(n)+I_3(k)\big)=0.$$
For $J_1$ we can use a very similar argument, decomposing it as
$$J_1(n)=E\left[\int_0^T\kappa\Big(\frac{X_s}{R}\Big)\big|b_n(s,X_s)-b(s,X_s)\big|\,ds\right]+E\left[\int_0^T\Big(1-\kappa\Big(\frac{X_s}{R}\Big)\Big)\big|b_n(s,X_s)-b(s,X_s)\big|\,ds\right],$$
and using Proposition 3.1 just as above.

The next theorem shows how to combine Propositions 3.1 and 3.2 to obtain strong solutions when $b$ is of linear growth. Following Nualart and Ouknine, this is done by constructing a suitable sequence of bounded and measurable functions converging to $b$, to which Propositions 3.1 and 3.2 can be applied.

Theorem 3.4

Let the drift $b$ in Equation (10) be of linear growth, i.e. $|b(t,x)|\leq C(1+|x|)$ for a.a. $(t,x)\in[0,T]\times\mathbb{R}$. Then there exists a unique strong solution to Equation (10).

Proof.

The path-wise uniqueness was already obtained in Theorem 3.3; therefore, the object of interest here is the strong existence. Define the function $b_R(t,x)=b(t,(x\vee(-R))\wedge R)$, which by the linear growth condition is bounded and measurable. Next, let $\varphi$ be a non-negative test function with compact support in $\mathbb{R}$ such that $\int_{\mathbb{R}}\varphi(y)\,dy=1$. For $j\in\mathbb{N}$ define the function
$$b_{R,j}(t,x)=j\int_{\mathbb{R}}b_R(t,y)\,\varphi(j(x-y))\,dy,$$
which one can verify is Lipschitz in the second variable, uniformly in $t$. Indeed,
$$|b_{R,j}(t,x)-b_{R,j}(t,y)|\leq j^2\sup_{t\in[0,T]}\int_{\mathbb{R}}|b_R(t,u)|\,du\times|x-y|.$$
Furthermore, define the functions
$$\tilde b_{R,n,k}=\min_{n\leq j\leq k}b_{R,j}\qquad\text{and}\qquad\tilde b_{R,n}=\inf_{j\geq n}b_{R,j}.$$
Both are again Lipschitz in the second variable, uniformly in $t$, and we see that $\tilde b_{R,n,k}\to\tilde b_{R,n}$ as $k\to\infty$ for a.a. $x\in\mathbb{R}$ and $t\in[0,T]$. Now, we can construct a unique solution $\tilde X^{R,n,k}$ to Equation (10) with drift coefficient $\tilde b_{R,n,k}$. By the comparison theorem for ordinary differential equations, the sequence $\tilde X^{R,n,k}$ is decreasing as $k$ grows; therefore $\tilde X^{R,n,k}$ has a limit $\tilde X^{R,n}$. By the comparison theorem again, $\tilde X^{R,n,k}$ and $\tilde X^{R,n}$ are bounded from above and below by $R$ and $-R$, respectively. Moreover, the solution $\tilde X^{R,n}$ is increasing in $n$ and converges to a limit $X^R$. Now we can apply Proposition 3.2, and we obtain that there exists a unique strong solution.

4. Conclusion

In this article, we have constructed a new type of multifractional derivative operator acting as the inverse of the generalized fractional integral. We have then applied this derivative operator to construct strong solutions to SDEs where the drift is of linear growth and the noise is of non-stationary, multifractional type. Applications of such equations may be found in a range of fields, including finance, physics and geology. For future work, we are currently working on a way to construct solutions to Volterra SDEs with singular Volterra drift, driven by self-exciting multifractional noise. The methodology will be similar to the above, but we will need to generalize the last two sections to account for Volterra kernels of singular type. Furthermore, we believe that the multifractional differential operator may shed new light on multifractional (partial) differential equations, possibly with random-order differentiation, and we are currently working on a project relating to such equations.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

T. K. N. is funded by the Norwegian Research Council (Project 230448/F20).

References

  1. A. Ayache, S. Cohen, and J. Lévy Véhel, The covariance structure of multifractional Brownian motion, with application to long range dependence, IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 6, 2000, pp. 3810–3813.
  2. A. Benassi, S. Jaffard, and D. Roux, Elliptic Gaussian random processes, Rev. Mat. Iberoam. 13 (1997), pp. 19–90.
  3. S. Bianchi, A. Pantanella, and A. Pianese, Modeling stock prices by multifractional Brownian motion: an improved estimation of the pointwise regularity, Quant. Finance 13(8) (2013), pp. 1317–1330.
  4. B. Boufoussi, M. Dozzi, and R. Marty, Local time and Tanaka formula for a Volterra-type multifractional Gaussian process, Bernoulli 16 (2010), pp. 1294–1311.
  5. S. Cohen, From self-similarity to local self-similarity: the estimation problem, in Fractals: Theory and Applications in Engineering, Springer-Verlag, London, 1999.
  6. S. Corlay, J. Lebovits, and J. Lévy Véhel, Multifractional stochastic volatility models, Math. Finance 24(2) (2014), pp. 364–402.
  7. G. Da Prato and J. Zabczyk, Stochastic Equations in Infinite Dimensions, Encyclopedia of Mathematics and its Applications Vol. 44, Cambridge University Press, Cambridge, 1992.
  8. L. Diening, P. Harjulehto, P. Hästö, and M. Ruzicka, Lebesgue and Sobolev Spaces with Variable Exponents, Vol. 2017, Springer-Verlag, Berlin, 2011.
  9. X. Fernique, Intégrabilité des vecteurs gaussiens, C. R. Acad. Sci. Paris Sér. A-B 270 (1970), pp. A1698–A1699.
  10. I. Karatzas and S.E. Shreve, Brownian Motion and Stochastic Calculus, 2nd ed., Graduate Texts in Mathematics Vol. 113, Springer-Verlag, New York, 1991.
  11. V. Kokilashvili, A. Meskhi, S. Samko, and H. Rafeiro, Integral Operators in Non-Standard Function Spaces, Vol. 2: Variable Exponent Hölder, Morrey–Campanato and Grand Spaces, Springer International Publishing, Switzerland, 2016.
  12. V.N. Kolokoltsov, The probabilistic point of view on the generalized fractional partial differential equations, Fract. Calc. Appl. Anal. 22(3) (2019), pp. 543–600.
  13. J. Lebovits, J. Lévy Véhel, and E. Herbin, Stochastic integration with respect to multifractional Brownian motion via tangent fractional Brownian motions, Stoch. Process. Appl. 124(1) (2014), pp. 678–708.
  14. J. Lebovits and J. Lévy Véhel, White noise-based stochastic calculus with respect to multifractional Brownian motion, Stochastics 86(1) (2013), pp. 87–124.
  15. S.C. Lim, Fractional Brownian motion and multifractional Brownian motion of Riemann–Liouville type, J. Phys. A: Math. Gen. 34(7) (2001), pp. 1301–1310.
  16. C. Lorenzo and T. Hartley, Variable order and distributed order fractional operators, Nonlinear Dyn. 29 (2002), pp. 57–98.
  17. D. Nualart, The Malliavin Calculus and Related Topics, Springer-Verlag, Berlin, Heidelberg, 2006.
  18. D. Nualart and Y. Ouknine, Regularization of differential equations by fractional noise, Stoch. Process. Appl. 102(1) (2002), pp. 103–116.
  19. R.-F. Peltier and J. Lévy Véhel, Multifractional Brownian motion: definition and preliminary results, Research Report RR-2645, Project FRACTALES, INRIA, 1995.
  20. S. Samko, Fractional integration and differentiation of variable order: an overview, Nonlinear Dyn. 71 (2013), pp. 653–662.
  21. S.G. Samko and B. Ross, Integration and differentiation to a variable fractional order, Integral Transforms Spec. Funct. 1(4) (1993), pp. 277–300.

Appendix

In this section, we prove that the integral
$$\int_0^x K(x,t)g(t)\,dt:=\int_0^x\frac{\Gamma(\alpha(t))g(t)}{(x-t)^{\alpha(t)}}\,dt$$
satisfies, for all $x>0$, the relation
$$\frac{d}{dx}\left(\int_0^x K(x,t)g(t)\,dt\right)=g(x)\frac{d}{dx}\int_0^x K(x,t)\,dt-\int_0^x\frac{\partial}{\partial x}K(x,t)\,(g(x)-g(t))\,dt.$$
The existence of the terms on the right-hand side is shown in Lemma 2.3, where it is proven that they are elements of $L^p([0,T])$, which is sufficient for our application. To see the above relation, we use the definition of the derivative. Start by adding and subtracting $g(x)$ to get
$$\int_0^x K(x,t)g(t)\,dt=-\int_0^x K(x,t)\,(g(x)-g(t))\,dt+g(x)\int_0^x K(x,t)\,dt=:J_1(x)+J_2(x).$$
We then look at the increments from $x$ to $x+h$, and after simple manipulations obtain
$$J_1(x+h)-J_1(x)=-\int_0^x\Big[K(x+h,t)\big((g(x+h)-g(x))+(g(x)-g(t))\big)-K(x,t)(g(x)-g(t))\Big]\,dt-\int_x^{x+h}K(x+h,t)\,(g(x+h)-g(t))\,dt$$
$$=-\left(\int_0^x\big(K(x+h,t)-K(x,t)\big)(g(x)-g(t))\,dt+(g(x+h)-g(x))\int_0^x K(x+h,t)\,dt\right)-\int_x^{x+h}K(x+h,t)\,(g(x+h)-g(t))\,dt.$$
The second part is given by
$$J_2(x+h)-J_2(x)=g(x+h)\int_0^{x+h}K(x+h,t)\,dt-g(x)\int_0^x K(x,t)\,dt$$
$$=(g(x+h)-g(x))\int_0^x K(x+h,t)\,dt+(g(x+h)-g(x))\int_x^{x+h}K(x+h,t)\,dt+g(x)\left(\int_0^{x+h}K(x+h,t)\,dt-\int_0^x K(x,t)\,dt\right).$$
Combining the increments of $J_1$ and $J_2$, we notice that the terms $\pm(g(x+h)-g(x))\int_0^x K(x+h,t)\,dt$ cancel, and we obtain
$$\big(J_1(x+h)-J_1(x)\big)+\big(J_2(x+h)-J_2(x)\big)=-\int_0^x\big(K(x+h,t)-K(x,t)\big)(g(x)-g(t))\,dt+g(x)\left(\int_0^{x+h}K(x+h,t)\,dt-\int_0^x K(x,t)\,dt\right)+(g(x+h)-g(x))\int_x^{x+h}K(x+h,t)\,dt-\int_x^{x+h}K(x+h,t)\,(g(x+h)-g(t))\,dt.$$
Let us first look at the last two terms above:
$$(g(x+h)-g(x))\int_x^{x+h}K(x+h,t)\,dt-\int_x^{x+h}K(x+h,t)\,(g(x+h)-g(t))\,dt=\int_x^{x+h}K(x+h,t)\,(g(t)-g(x))\,dt.$$
It is readily checked that
$$\big|\Gamma(\alpha(t))\,(x+h-t)^{-\alpha(t)}\,(g(x)-g(t))\big|\leq C\,|x-t|^{\alpha(t)+\epsilon}\,|x+h-t|^{-\alpha(t)}\sim C\,|x-t|^{\epsilon}\quad\text{as }h\to0.$$
Furthermore, we observe that
$$\left|\int_x^{x+h}K(x+h,t)\,(g(t)-g(x))\,dt\right|\leq C\int_x^{x+h}|x-t|^{\alpha(t)+\epsilon}\,|x+h-t|^{-\alpha(t)}\,dt.$$
Now set $t=x+\lambda h$; a simple change of variables yields
$$\int_x^{x+h}|x-t|^{\alpha(t)+\epsilon}\,|x+h-t|^{-\alpha(t)}\,dt=h^{1+\epsilon}\int_0^1|\lambda|^{\alpha(x+\lambda h)+\epsilon}\,|1-\lambda|^{-\alpha(x+\lambda h)}\,d\lambda.$$
But the integral on the right-hand side is bounded, i.e.
$$\int_0^1|\lambda|^{\alpha(x+\lambda h)+\epsilon}\,|1-\lambda|^{-\alpha(x+\lambda h)}\,d\lambda\leq B(\alpha_*+\epsilon+1,\,1-\alpha^*).$$
This implies in particular that
$$\lim_{h\to0}\frac1h\int_x^{x+h}K(x+h,t)\,(g(t)-g(x))\,dt=0.$$
We are left to consider the limits as $h\to0$ of
$$-\frac1h\int_0^x\big(K(x+h,t)-K(x,t)\big)(g(x)-g(t))\,dt\qquad\text{and}\qquad\frac1h\,g(x)\left(\int_0^{x+h}K(x+h,t)\,dt-\int_0^x K(x,t)\,dt\right).$$
We look at them separately, starting with the one on the left. By dominated convergence, we have
$$\lim_{h\to0}\left(-\frac1h\int_0^x\big(K(x+h,t)-K(x,t)\big)(g(x)-g(t))\,dt\right)=-\int_0^x\frac{\partial}{\partial x}K(x,t)\,(g(x)-g(t))\,dt.$$
Secondly, it is straightforward to see that
$$\lim_{h\to0}\frac{g(x)\left(\int_0^{x+h}K(x+h,t)\,dt-\int_0^x K(x,t)\,dt\right)}{h}=g(x)\frac{d}{dx}\int_0^x K(x,t)\,dt.$$
The fact that $\frac{d}{dx}\int_0^x K(x,t)\,dt$ indeed exists is proved in Lemma 2.3, using that $K(x,t)=\Gamma(\alpha(t))(x-t)^{-\alpha(t)}$. This proves our claim and thus concludes the proof.