Abstract
The goal of this paper is to study a stochastic game connected to a system of forward-backward stochastic differential equations (FBSDEs) involving delay and noisy memory. We derive sufficient and necessary maximum principles characterizing when a pair of controls is a Nash equilibrium of the game. Furthermore, we study a corresponding FBSDE involving Malliavin derivatives; this kind of equation has not been studied before. The maximum principles give conditions for determining the Nash equilibrium of the game. We use this to derive a closed-form Nash equilibrium for an economic model where the players maximize their consumption with respect to recursive utility.
1. Introduction
The aim of this paper is to study a stochastic game between two players. The game is based on a forward stochastic differential equation (SDE) for the process X. In economic applications, this process can be thought of as the market situation, e.g. the financial market, the housing market or the oil market. This SDE includes two kinds of memory of the past: regular memory and noisy memory. Regular memory (also called delay, see e.g. the survey paper by Ivanov et al. [Citation1]) means that the SDE can depend on previous values of the process X; that is, for some given delay δ > 0, X(t) depends on X(t − δ).
For more on stochastic delay differential equations and optimal control with delay, see Øksendal et al. [Citation2] and Agram and Øksendal [Citation3]. In contrast, noisy memory means that the SDE may involve an Itô integral over previous values of the process, so X(t) depends on \( \int_{t-\delta}^{t} X(s)\, dB(s) \), where B is a Brownian motion. For more on noisy memory, see Dahl et al. [Citation4].
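Since the paper's general dynamics are not reproduced here, the following sketch simulates a toy SDE with both kinds of memory under assumed illustrative coefficients (the drift weights 0.1, the volatility 0.3 and the constant initial segment are not from the paper): regular memory enters through X(t − δ) and noisy memory through the running Itô integral over the window [t − δ, t].

```python
import numpy as np

def simulate_noisy_memory_sde(x0=1.0, delta=0.2, T=1.0, n=1000, seed=0):
    """Euler-Maruyama for a toy SDE with both memory types (coefficients are
    illustrative assumptions, not the paper's general model):
        dX(t) = (0.1*Y(t) + 0.1*Lam(t)) dt + 0.3 dB(t),   X(t) = x0 for t <= 0,
    where Y(t) = X(t - delta) is the regular (delay) memory and
    Lam(t) = int_{t-delta}^t X(s) dB(s) is the noisy memory (taken to be 0
    before enough path has accumulated)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    lag = int(round(delta / dt))
    dB = rng.normal(0.0, np.sqrt(dt), n)
    X = np.full(n + 1, x0)
    dI = np.zeros(n)                        # increments X(t_k) dB_k of the Ito integral
    for k in range(n):
        Y = X[k - lag] if k >= lag else x0  # delayed value; x0 before time 0
        Lam = dI[max(k - lag, 0):k].sum()   # window sum = noisy memory
        dI[k] = X[k] * dB[k]
        X[k + 1] = X[k] + (0.1 * Y + 0.1 * Lam) * dt + 0.3 * dB[k]
    return X

X = simulate_noisy_memory_sde()
```

Note the practical difference between the two memory types: the delay only requires storing the path, while the noisy memory additionally requires storing the increments of the Itô integral so the window sum can be updated.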
Connected to this SDE are two backward stochastic differential equations (BSDEs). These BSDEs are connected to the SDE in the sense that they depend on X as well as on the delay and noisy memory of this process. Hence, this forms an FBSDE system. Each of these BSDEs corresponds to one of the players in the stochastic game; corresponding to player i = 1, 2 is a BSDE in the process Wi.
The length of memory can be different for the two players, so for i = 1, 2, player i has memory span δi. The players may also have different levels of information, which is included in the model by having (potentially) different filtrations for the players i = 1, 2.
Each of the players aims to find an optimal control ui which maximizes their personal performance (objective) function Ji. Seminal work in stochastic optimal control has been done by Krylov and his students, see e.g. Krylov [Citation5, Citation6]. The performance function of each of the agents will be defined in such a way that it depends on the player's profit rate, the market process X and the process Wi coming from the player's BSDE (more on this in Section 2, Equation (11)). This kind of problem, where both players maximize a performance functional which depends on an FBSDE, is called an FBSDE stochastic game, and has been studied by e.g. Øksendal and Sulem [Citation7]; however, they do not include memory in their model. We study conditions for a pair of controls (u1, u2) to be a Nash equilibrium for such a stochastic game. That is, we would like to determine controls such that the players cannot benefit by changing their actions. In order to do so, we derive sufficient and necessary maximum principles giving conditions for a control to be Nash optimal. This is done in Sections 3 and 4. Maximum principles for forward-backward stochastic differential equations (FBSDEs) have been studied by Øksendal and Sulem [Citation7], Wang and Wu [Citation8], Wu [Citation9] and Wang et al. [Citation10], but these papers do not consider delay and noisy memory.
In connection with these maximum principles, there are adjoint equations (see e.g. Øksendal [Citation11] for an introduction to stochastic maximum principles and adjoint equations, or Øksendal and Sulem [Citation12] for maximum principles and adjoint equations where delay is involved). In our case, these adjoint equations are a system of coupled forward-backward stochastic differential equations involving Malliavin derivatives (see Di Nunno et al. [Citation13] for more on Malliavin derivatives). To the best of our knowledge, such equations have not been studied before. In Section 5 we study a slightly simplified version of these adjoint FBSDEs, and establish a connection between these equations and a system of FBSDEs without Malliavin derivatives. Finally, in Section 6, we apply our results to a specific example in order to determine the optimal consumption with respect to recursive utility.
2. The problem
Let (Ω, F, P) be a probability space, and let B(t), t ∈ [0, T], be a Brownian motion on this space. Let N(dt, dζ) be an independent Poisson random measure, and denote by ν the associated Lévy measure, which we assume satisfies the usual integrability condition. Also, let Ñ(dt, dζ) := N(dt, dζ) − ν(dζ) dt be the corresponding compensated Poisson random measure. Let {Ft} be the P-augmented filtration generated by B(t) and N(dt, dζ).
We will consider a game between two players: player 1 and player 2. Let ui be the control process chosen by player i = 1, 2. Let Ai, i = 1, 2, denote the set of admissible controls for player i; it is contained in a given set of càdlàg processes with values in a given subset Ui of Euclidean space. Let u = (u1, u2) be the combined control for both players. Let δi, i = 1, 2, be the memory span of players 1 and 2, respectively. We define δ := max(δ1, δ2) to be the longest memory span of the two agents.
We consider a controlled forward stochastic differential equation for a process determining the market situation (in the following, we omit the ω for notational ease unless it is important to highlight its dependence):
(1)
where ξ is some given initial process (defined on the memory interval before time 0), Yi(t) := X(t − δi) is the delay process, and Λi(t) := \( \int_{t-\delta_i}^{t} X(s)\, dB(s) \) is the noisy memory process, for i = 1, 2. The superscript t– means that we are taking the left limit of the process in question (that is, the value before a potential jump at time t); see Øksendal and Sulem [Citation14] for more on this.
Remark 2.1. Note that ξ is a given initial process which cannot be controlled (i.e., there is no dependency on u in ξ). Hence, we do not need to define the filtration for times before 0.
Here, the delay processes Yi and the noisy memory processes Λi correspond to player i = 1, 2, respectively. Hence, the two players may have memories over different time intervals, depending on the values of δi. On the coefficient functions
(2)
(3)
(4)
we impose the following set of assumptions.
Assumption 2.2.
1. The coefficient functions (2)–(4) are C1 for each fixed (t, ω).
2. The coefficient functions are predictable for each fixed value of the state variables.
3. Lipschitz condition: The coefficient functions are Lipschitz continuous in the state variables, with the Lipschitz constant independent of the remaining variables. Also, there exists a function, independent of the state variables, such that
(5)
(6)
4. Linear growth: The coefficient functions satisfy a linear growth condition in the state variables, with the linear growth constant independent of the remaining variables. Also, there exists a non-negative function, independent of the state variables, such that
(7)
(8)
Assumptions 1 and 2 are sufficient to ensure that the integrands in Equation (1) have predictable versions whenever X is càdlàg and adapted. It is always assumed that the Ñ-integral is taken with respect to the predictable version of the integrand. Together with the Lipschitz and linear growth conditions, this ensures that for every admissible control u there exists a unique càdlàg adapted solution X to Equation (1) satisfying
(9)
This can be seen, for example, by regarding Equation (1) as a stochastic functional differential equation. See Dahl et al. [Citation4] for more on this.
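The fixed-point view behind this existence argument can be illustrated numerically. The sketch below runs Picard (successive-approximation) iterations for a toy delay SDE along a fixed Brownian path; the coefficients, in particular the Lipschitz drift 0.5·X(t − δ) and the additive noise, are illustrative assumptions, not the paper's model. Because the drift is Lipschitz with constant 0.5 on [0, 1], successive iterates contract.

```python
import numpy as np

def picard_delay_sde(x0=1.0, delta=0.25, T=1.0, n=400, iters=6, seed=1):
    """Successive (Picard) approximations for a toy delay SDE with a fixed
    Brownian path, illustrating the fixed-point view behind existence:
        X(t) = x0 + int_0^t 0.5 * X(s - delta) ds + 0.3 * B(t),
    with X(s) = x0 for s <= 0. Coefficients are illustrative assumptions.
    Returns the sup-norm differences between successive iterates."""
    rng = np.random.default_rng(seed)
    dt = T / n
    lag = int(round(delta / dt))
    B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
    X = np.full(n + 1, x0)                 # zeroth iterate: the constant x0
    diffs = []
    for _ in range(iters):
        # delayed path X(t - delta), frozen at x0 before time 0
        Xd = np.where(np.arange(n + 1) >= lag, np.roll(X, lag), x0)
        drift = np.concatenate([[0.0], np.cumsum(0.5 * Xd[:-1] * dt)])
        X_new = x0 + drift + 0.3 * B
        diffs.append(float(np.max(np.abs(X_new - X))))
        X = X_new
    return diffs

diffs = picard_delay_sde()
```

The contraction factor per iteration is at most the Lipschitz constant times the horizon (here 0.5), so the sup-norm differences between iterates shrink geometrically, mirroring the Picard argument for uniqueness.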
In addition to this, the players (potentially) have different levels of information, represented by different subfiltrations contained in the full filtration, for i = 1, 2.
For i = 1, 2, let gi be a given predictable driver and let the terminal value be a measurable function of the terminal data. Associated to the FSDE (1), we have a pair of backward stochastic differential equations (BSDEs) in the unknown stochastic processes, i = 1, 2:
(10)
Note that these BSDEs are coupled to the SDE (1) due to the dependency on X. Also, the BSDEs depend on the memory of the market process X, due to the dependency on the processes Yi and Λi. However, Equation (10) is a standard BSDE with jumps, hence the conditions for existence and uniqueness of a solution are well known; see e.g. Theorem 1.5 in Øksendal and Sulem [Citation15]. Essentially, we require that gi is square integrable w.r.t. t when all other inputs are 0 and that gi is Lipschitz in the unknown processes.
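To illustrate what solving such a BSDE amounts to, here is a least-squares Monte Carlo sketch for a simple linear BSDE without jumps and without the memory couplings; the driver −r·W and the terminal value B(T)² are illustrative choices, not the paper's gi.

```python
import numpy as np

def solve_linear_bsde(r=0.1, T=1.0, n=50, paths=100_000, seed=2):
    """Least-squares Monte Carlo for the linear BSDE (jumps switched off):
        -dW(t) = -r * W(t) dt - Z(t) dB(t),   W(T) = B(T)^2,
    discretized as backward Euler  W_k = E[(1 - r*dt) * W_{k+1} | F_{t_k}],
    with the conditional expectation estimated by quadratic regression on
    B(t_k). For this terminal value the regression basis {1, b, b^2} is
    exact, so only Monte Carlo noise remains.
    Closed form: W(0) = exp(-r*T) * E[B(T)^2] = exp(-r*T) * T (up to dt-bias)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (paths, n)), axis=1)
    W = B[:, -1] ** 2                         # terminal condition
    for k in range(n - 2, -1, -1):
        coef = np.polyfit(B[:, k], (1 - r * dt) * W, 2)  # regress on {1, b, b^2}
        W = np.polyval(coef, B[:, k])
    return float(np.mean((1 - r * dt) * W))   # last step conditions on F_0

W0 = solve_linear_bsde()
```

The backward recursion plus regression is the generic numerical pattern for BSDEs; the memory terms in (10) would enter only through extra regressors.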
For i = 1, 2, let fi, hi and ψi be functions representing a profit rate, a bequest function and a risk evaluation, respectively. Then, the performance function of each player i = 1, 2 is defined by:
(11)
where we must assume all conditions necessary for the integrals and the expectation to exist.
Also, note that the performance Ji of player i is a function of the combined control u = (u1, u2), which is determined by both players. Therefore, this problem setting specifies a stochastic game.
A pair of controls (û1, û2) is called a Nash equilibrium for this stochastic game if the following holds:
(12) \( J_1(u_1, \hat u_2) \le J_1(\hat u_1, \hat u_2) \ \text{for all } u_1, \qquad J_2(\hat u_1, u_2) \le J_2(\hat u_1, \hat u_2) \ \text{for all } u_2. \)
In words, this means that in the Nash equilibrium, neither player can gain by unilaterally changing their control.
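The defining inequalities (12) can be checked mechanically on a discretized game. The sketch below brute-forces a Nash equilibrium of a toy static two-player game with hypothetical payoffs (the quadratic payoffs and the interaction weight 0.1 are invented for illustration).

```python
import numpy as np

def is_nash(J1, J2, i, j, tol=1e-12):
    """Inequality (12) on a finite grid: (i, j) indexes a Nash equilibrium iff
    neither player can improve by a unilateral deviation along their own axis."""
    return J1[:, j].max() <= J1[i, j] + tol and J2[i, :].max() <= J2[i, j] + tol

# Toy static game with invented payoffs: J_i = u_i*(1 - u_i) - 0.1*u_1*u_2.
grid = np.linspace(0.0, 1.0, 101)
U1, U2 = np.meshgrid(grid, grid, indexing="ij")
J1 = U1 * (1 - U1) - 0.1 * U1 * U2
J2 = U2 * (1 - U2) - 0.1 * U1 * U2

# Brute-force search over all grid points for mutual best responses.
eq = [(i, j) for i in range(grid.size) for j in range(grid.size)
      if is_nash(J1, J2, i, j)]
```

For these payoffs the continuum equilibrium solves ui = (1 − 0.1·uj)/2, giving u1 = u2 = 1/2.1 ≈ 0.476, which the grid search recovers up to the grid spacing.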
Assume there exists a Nash equilibrium for this forward-backward stochastic differential equation (FBSDE) game with delay and noisy memory. We would like to find this Nash equilibrium, and we will do so by proving sufficient and necessary maximum principles for this problem. Therefore, we define a Hamiltonian function Hi for each player i = 1, 2 as follows:
(13)
Assume Hi is C1 in the state and control variables for i = 1, 2. In the following, for ease of notation, we will use abbreviated arguments. For i = 1, 2, we define a system of FBSDEs associated to these Hamiltonians in the unknown adjoint processes:
FSDE in λi (which depends on pi):
(14)
where ∇k Hi is the Fréchet derivative of Hi at ki; see the appendix in Øksendal and Sulem [Citation7] for a closer explanation of this gradient.
We also define a BSDE in pi which depends on λi:
(15)
where Dt denotes the Malliavin derivative (see Remark 2.3). Note that the conditional expectation in (15) is well defined by the extension of the Malliavin derivative introduced by Aase et al. [Citation16]; see Remark 2.3. Equations (14) and (15) form an FBSDE system involving Malliavin derivatives. To the best of our knowledge, such systems have not been studied before.
Remark 2.3.
We refer to Nualart [Citation17], Sanz-Solé [Citation18] and Di Nunno et al. [Citation13] for information about the Malliavin derivative Dt for Brownian motion B(t) and, more generally, Lévy processes. In Aase et al. [Citation16], Dt was extended from the classical space of Malliavin differentiable FT-measurable random variables to all of L2(FT, P). The extension is such that for all F in L2(FT, P), the derivative DtF takes values in the Hida space of stochastic distributions, while the map (t, ω) ↦ E[DtF | Ft] belongs to L2(λ × P), where λ denotes the Lebesgue measure on [0, T]. Moreover, the following generalized Clark–Ocone theorem holds:
(16) \( F = E[F] + \int_0^T E[D_t F \mid \mathcal{F}_t]\, dB(t) \)
See [Citation16], Theorem 3.11, and also [Citation13], Theorem 6.35. Notice that by combining Itô's isometry with the Clark–Ocone theorem, we obtain
(17) \( E[F^2] = E[F]^2 + E\!\left[\int_0^T \big(E[D_t F \mid \mathcal{F}_t]\big)^2\, dt\right] \)
As observed in Agram and Øksendal [Citation19], we can also apply the Clark–Ocone theorem to show the following generalized duality formula: let \( F \in L^2(\mathcal{F}_T, P) \) and let φ(t), t ∈ [0, T], be adapted with \( E\big[\int_0^T \varphi(t)^2\, dt\big] < \infty \). Then,
(18) \( E\!\left[F \int_0^T \varphi(t)\, dB(t)\right] = E\!\left[\int_0^T \varphi(t)\, E[D_t F \mid \mathcal{F}_t]\, dt\right] \)
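The duality formula (18) can be verified by Monte Carlo for a concrete choice of F and φ (the choice below is only for illustration). Take F = B(T)², for which DtF = 2B(T) and E[DtF | Ft] = 2B(t), and φ(t) = B(t); both sides then equal T².

```python
import numpy as np

def duality_check(T=1.0, n=100, paths=50_000, seed=3):
    """Monte Carlo check of the duality formula for F = B(T)^2, phi = B:
        E[F * int_0^T B dB]  vs  E[int_0^T phi(t) * E[D_t F | F_t] dt],
    with E[D_t F | F_t] = 2 B(t); both sides equal T^2 = 1 in closed form."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = rng.normal(0.0, np.sqrt(dt), (paths, n))
    B = np.cumsum(dB, axis=1)
    Bpre = np.hstack([np.zeros((paths, 1)), B[:, :-1]])  # left endpoints (Ito convention)
    ito = np.sum(Bpre * dB, axis=1)                      # int_0^T B dB, per path
    lhs = float(np.mean(B[:, -1] ** 2 * ito))
    rhs = float(np.mean(np.sum(2.0 * Bpre ** 2 * dt, axis=1)))
    return lhs, rhs

lhs, rhs = duality_check()
```

Evaluating the Itô integral at left endpoints is essential here; a midpoint or right-endpoint rule would converge to a different (Stratonovich-type) value and break the match with (18).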
Remark 2.4. Note that Equation (14) is linear in λi, and hence, if pi were given, it could be solved by using the Itô formula. However, this solution will depend on the processes X and Wi, so in order to find an explicit solution for λi, we must also solve the coupled FBSDE system (1)–(10).
The BSDE (15) is linear in pi, and hence, if λi were given, it would be possible to find a unique solution to this equation by using e.g. Proposition 6.2.1 in Pham [Citation20] or Theorem 1.7 in Øksendal and Sulem [Citation15]. However, as for the adjoint SDE (14), this solution will depend on the coupled FBSDE system (1)–(10).
In the remaining part of the paper, we will prove a sufficient (Section 3) and a necessary maximum principle (Section 4) for this kind of FBSDE game with delay and noisy memory. Then, we will study existence and uniqueness of solutions of the FBSDE system (14) and (15) (Section 5). Finally, we will present an example which illustrates our results: optimal consumption rate with respect to recursive utility (see Section 6).
3. Sufficient maximum principle for FBSDE games with delay and noisy memory
We prove a sufficient maximum principle which roughly states that, under concavity conditions, a pair of controls satisfying a conditional maximum principle and suitable integrability conditions is a Nash equilibrium for the stochastic game.
Theorem 3.1.
Let (û1, û2) be a pair of admissible controls with corresponding solutions of the FSDE (1), the BSDE (10), and the FBSDE system (14) and (15) for i = 1, 2. Also, assume that:
(Concavity I) The functions hi and ψi are concave for i = 1, 2.
(The conditional maximum principle) For i = 1, 2, ûi maximizes the conditional expectation of the Hamiltonian Hi, given player i's information.
(Concavity II) The Hamiltonians Hi are concave in the state and control variables for all t a.s.
Finally, assume that suitable integrability conditions hold.
Then, (û1, û2) is a Nash equilibrium.
Proof.
We would like to show that J1(u1, û2) ≤ J1(û1, û2) for all admissible u1. Choose such a u1. By the definition of the performance function J1, the difference J1(u1, û2) − J1(û1, û2) splits into terms which we treat separately. Note that from the definition of the Hamiltonian,
(19)
where we have used abbreviated notation for the arguments of the Hamiltonian and the corresponding coefficient functions. Also,
(20)
where the first inequality follows from concavity, the second equality follows from Equation (15), the fourth equality from Itô's product rule, and the fifth equality follows from Equation (15), the double expectation rule and Equation (1).
Also, note that
(21)
where the first inequality follows from the concavity of ψ1, the second equality follows from Equation (14), the third equality follows from Itô's product rule, and the fourth equality follows from Equation (10) as well as Equation (14). The final inequality follows from the concavity of h1. Hence,
(22)
Note that by changing the order of integration and using the duality formula for Malliavin derivatives (see Di Nunno et al. [Citation13]), we get:
(23)
Also, note that
(24)
Hence, by the inequality (22) combined with Equations (23) and (24),
(25)
Fix some t. By assumption, the Hamiltonian H1 is concave, so it is superdifferentiable1 (see Rockafellar [Citation21]) at the candidate point. Thus, there exists a supergradient such that for all admissible values the following holds:
(26)
Define
(27)
Then, by Equation (26),
(28)
Therefore, by differentiating Equation (27) and using Equation (28), we obtain the first-order condition at the candidate control. It follows from this, together with Equations (25) and (28), that J1(u1, û2) − J1(û1, û2) ≤ 0 (where the final inequality follows since the function defined in (27) is concave). This means that J1(u1, û2) ≤ J1(û1, û2) for all admissible u1. In a similar way, one can prove that J2(û1, u2) ≤ J2(û1, û2) for all admissible u2. This completes the proof that (û1, û2) is a Nash equilibrium. □
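The chain of estimates above follows the standard pattern for sufficient maximum principles. Schematically (a sketch only: jumps, the memory terms and several state processes are suppressed, hats denote evaluation at the candidate equilibrium, and the partial derivatives are with respect to the indicated arguments of H1):

```latex
\begin{aligned}
J_1(u_1,\hat u_2) - J_1(\hat u_1,\hat u_2)
  \le{}& E\!\left[\int_0^T \Big( H_1-\hat H_1
      -\partial_x \hat H_1\,(X-\hat X)
      -\partial_w \hat H_1\,(W_1-\hat W_1) \Big)\, dt\right] \\
  &+ E\!\left[\int_0^T \partial_{u_1}\hat H_1\,(u_1-\hat u_1)\, dt\right] \;\le\; 0 .
\end{aligned}
```

The first expectation is nonpositive by Concavity II, and the second by the conditional maximum principle, which together yield J1(u1, û2) ≤ J1(û1, û2).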
4. Necessary maximum principle for FBSDE games with delay and noisy memory
In the following, we need some additional assumptions and notation:
For all t and all bounded, measurable random variables α, the "bump" perturbation βi(s) := α·1[t,T](s) is an admissible control:
(29)
For all admissible ui and bounded βi, there exists ε > 0 such that the perturbed control ui + sβi is admissible for all s ∈ (−ε, ε):
(30)
Also, assume that the following derivative processes exist and belong to L2:
(31)
and similarly for the other state processes. Here, the derivative processes are directional derivatives, defined in the following way:
(32) \( x_i(t) := \frac{d}{ds}\, X^{u+s\beta_i}(t)\Big|_{s=0} \)
For more on this, see (4.11) in Di Nunno et al. [Citation22] and Appendix A in Øksendal and Sulem [Citation7]. Note also that the derivative processes vanish for t ≤ 0, i = 1, 2, since the initial process ξ does not depend on the control.
If these assumptions hold, we can prove a necessary maximum principle for our noisy memory FBSDE game. The proof of the following theorem is based on the same idea as the proof of Theorem 2.2 in Øksendal and Sulem [Citation7]; however, the presence of noisy memory in our problem requires some extra care.
Theorem 4.1.
Suppose that u = (u1, u2) is admissible with corresponding solutions, i = 1, 2, of Equations (1), (10), (14) and (15). Also, assume that conditions (29)–(31) hold. Then, for i = 1, 2, the following are equivalent:
(i) the directional derivative of Ji at u vanishes in all bounded admissible directions βi;
(ii) the conditional expectation, given player i's information, of the derivative of the Hamiltonian Hi with respect to the control ui is zero for a.a. t.
Proof.
We only prove the equivalence for i = 1; the remaining part of the theorem (i.e., the same statement for i = 2) is proved in a similar way.
Note that, by the definition of J1 and by interchanging differentiation and integration,
(33)
The interchange of differentiation and integration is justified since everything in Equation (33) is well defined and square integrable by assumption, and the underlying measure space is finite. Hence, we can apply Theorem 11.5 in Schilling [Citation23] to change the order of the expectation/integral and the differentiation. Also, note that the derivative process appearing in Equation (33) is a directional derivative, defined similarly as in Equation (32). For more details on directional (also called Gâteaux) derivatives, see Appendix A in Øksendal and Sulem [Citation7]. Furthermore, the derivatives of the coefficient functions in (33) are partial derivatives with the corresponding processes inserted at time t. For proofs of the differentiability of the performance functional in a similar context, see Dahl et al. [Citation4].
We study the different parts of Equation (33) separately. First, by the Itô product rule, the adjoint BSDE (15) and the definition of the derivative processes,
(34)
Also, by the FSDE (14), the BSDE (10), the definition of the derivative processes and the Itô product rule,
(35)
By the definition of J1 as well as Equations (34) and (35),
(36)
where
(37)
Then, by using the definition of the Hamiltonian (see Equation (13)), we see that everything inside the curly brackets in Equation (36) is equal to zero. Recall the definitions of the delay and noisy memory processes. A change of variables then rewrites the delay terms, and the duality formula for Malliavin derivatives (see Di Nunno et al. [Citation13]), combined with changing the order of integration, rewrites the noisy memory terms. So, by the rule of double expectation and the calculations above, the two statements of the theorem are equivalent, which was what we wanted to prove. □
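The equivalence just proved can be sanity-checked numerically in a stripped-down setting. The toy problem below is deterministic and invented for illustration (no noise, no memory, no second player): X' = u with X(0) = 0, and J(u) = −∫₀¹ u²/2 dt + X(1), whose Gâteaux derivative at u in direction β is ∫₀¹ (1 − u)β dt; it vanishes in every direction exactly at the candidate u ≡ 1.

```python
import numpy as np

def J(u, dt):
    """Performance of a toy control problem (invented for illustration):
    X' = u, X(0) = 0, J(u) = -int_0^1 u(t)^2/2 dt + X(1).
    Its Gateaux derivative at u in direction beta is int_0^1 (1 - u)*beta dt."""
    return float(np.sum((-0.5 * u ** 2 + u) * dt))  # X(1) = int_0^1 u dt

n = 1000
dt = 1.0 / n
beta = np.sin(np.linspace(0.0, np.pi, n))   # a bounded direction
s = 1e-6                                    # finite-difference step

u_hat = np.ones(n)                          # candidate: u = 1 (the critical point)
dJ = (J(u_hat + s * beta, dt) - J(u_hat, dt)) / s    # directional derivative ~ 0

u_bad = 0.5 * np.ones(n)                    # a non-critical control
dJ0 = (J(u_bad + s * beta, dt) - J(u_bad, dt)) / s   # ~ 0.5 * int beta dt > 0
```

At the critical control the finite-difference quotient is of order s (the second-order remainder), while at a non-critical control it stays bounded away from zero, which is precisely the dichotomy the necessary maximum principle formalizes.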
5. Solution of the noisy memory FBSDE
In this section, we consider a slightly simplified version of the system of noisy memory FBSDEs in Equations (14) and (15). Instead, consider the following noisy memory FBSDE:
FSDE in X:
(38)
BSDE:
(39)
where the coefficient functions are given. Note that Equations (14) and (15) constitute two systems of the form (38) and (39), involving the same X process as well as the same controls.
Also, consider the following system consisting of an FSDE and two BSDEs:
FSDE in X:
(40)
BSDE:
(41)
BSDE:
(42)
where
(43)
Note that the forward equations have the same form; hence, Equations (38) and (40) are structurally equal.
Then, by techniques similar to those in Dahl et al. [Citation4], we can show the following theorem:
Theorem 5.1.
Assume that given processes solve the FBSDE system (40)–(42). Define from these a candidate solution, and assume that it is square integrable. Then, the candidate solves the noisy memory FBSDE (38) and (39).
Proof.
The jump terms do not make a difference here, so assume for simplicity that the jump coefficients vanish everywhere.
In general, we know that
(44)
Now, note that the solution of the BSDE (42) can be written in closed form, where the equalities involved follow from Fubini's theorem, the rule of double expectation, the definition of the coefficients, and a change of variables. Hence, by Equation (44), we obtain part of what we wanted to prove.
By inserting this expression into the relevant definition, we see that the BSDE (41) is the same as (39), so they have the same solution. This completes the proof of the theorem. □
We can also prove the following converse result.
Theorem 5.2.
If given processes solve the FBSDE (38) and (39), and we define the corresponding processes as in Theorem 5.1, then these solve the system of Equations (40)–(42).
Proof. Again, the jump parts make no crucial difference, so we consider the no-jump situation for simplicity.
It is clear that Equation (40) holds from the assumptions above (from the definition of the coefficients; see (43)). Also, the BSDE (41) holds: clearly, the terminal condition holds, and by the computations in the proof of Theorem 5.1, the remaining part of Equation (41) also holds. Therefore, it only remains to prove that the BSDE (42) holds.
By the Itô isometry and the Clark–Ocone formula, we obtain a first representation, and hence a corresponding identity. Note that the Clark–Ocone theorem also gives a representation of the terminal value. Therefore, by the definition in the theorem and the Fubini theorem, by some algebra and the Clark–Ocone theorem (16), and by splitting the integrals and using change of variables (twice), the BSDE (42) holds as well. □
Now, we have expressed the solution of the Malliavin FBSDE via the solution of the "double" FBSDE system (40)–(42). What kind of system of equations is this? The system consists of two connected BSDEs, and these are again coupled to a forward SDE in X. However, from Equation (42) and the definitions above, we see that the right-hand side of (42) does not depend on the unknown processes of that equation. Hence, the BSDE (42) can be rewritten and solved explicitly. Now, we can substitute this solution into the FBSDE system (40) and (41). The resulting set of equations is a regular system of time-advanced FBSDEs with jumps. There are, to the best of our knowledge, no general results on existence and uniqueness for such systems of FBSDEs. However, if we simplify by removing the jumps and the time-advanced part (i.e., the delay process in the original FSDE (1)), there are some results by Ma et al. [Citation24].
6. Optimal consumption rate with respect to recursive utility
In this section, we apply the previous results to the problem of determining an optimal consumption rate with respect to recursive utility (see also Øksendal and Sulem [Citation25] and Dahl and Øksendal [Citation26]). The consumption rate ci of player i is our control, and we assume that the market process X is given by
(45)
and that the recursive utility process Wi is given by a BSDE of the form (10). Let the performance functional be defined by Ji(c) := Wi(0); i.e., Ji is the recursive utility of player i. Also, assume that both players have full information, so the subfiltrations coincide with the full filtration. We would like to find a Nash equilibrium for this FBSDE game with delay. To do so, we will use the sufficient maximum principle, Theorem 3.1.
The Hamiltonians are:
The adjoint BSDEs are
where the coefficients are determined by the Hamiltonians for i = 1, 2. The adjoint BSDEs are linear, and the solutions are given by (see Øksendal and Sulem [Citation15])
(46)
where the integrating factor is determined by the coefficients of the BSDE.
Note that by the SDE (45),
(47)
Hence, by combining Equations (46) and (47), we see that
(48)
The adjoint FSDEs are
These are (non-stochastic) differential equations, which can be solved explicitly. We maximize the Hamiltonians with respect to the consumption rates; the first-order condition is that the derivative of Hi with respect to ci vanishes. By substituting Equation (48) into this first-order condition, we find (by the sufficient maximum principle, Theorem 3.1) the consumption rates leading to a Nash equilibrium for the recursive utility problem.
7. Conclusion
In this paper, we have analyzed a two-player stochastic game connected to a system of FBSDEs involving delay and noisy memory of the market process. We have derived sufficient and necessary maximum principles for a pair of controls to be a Nash equilibrium in this game. We have also studied the associated FBSDE involving Malliavin derivatives, and connected it to a system of FBSDEs not involving Malliavin derivatives. Finally, we were able to derive a closed-form Nash equilibrium for a game where the aim is to find the optimal consumption rate with respect to recursive utility.
Notes
1 Defined similarly as subdifferentiability for convex functions.
References
- Ivanov, A. F., Kazmerchuk, Y. I., Swishchuk, A. V. Theory, stochastic stability and applications of stochastic delay differential equations: a survey of recent results. Research report, http://www.math.yorku.ca/aswishch/sddesurvey.pdf.
- Øksendal, B., Sulem, A., Zhang, T. (2011). Optimal control of stochastic delay equations and time-advanced backward stochastic differential equations. Adv. Appl. Probab. 43(2):572–596. DOI: 10.1239/aap/1308662493.
- Agram, N., Øksendal, B. (2014). Infinite horizon optimal control of forward-backward stochastic differential equations with delay. J. Comput. Appl. Math. 259 (B):336–349. DOI: 10.1016/j.cam.2013.04.048.
- Dahl, K., Mohammed, S., Øksendal, B., Røse, E. (2016). Optimal control of systems with noisy memory and BSDEs with malliavin derivatives. J. Funct. Anal. 271(2):289–329. DOI: 10.1016/j.jfa.2016.04.031.
- Krylov, N. V. (2009). Controlled Diffusion Processes. Berlin, Heidelberg: Springer.
- Krylov, N. V. (1972). Control of a solution of a stochastic integral equation. Theory Probab. Appl. 17(1):114–130. DOI: 10.1137/1117009.
- Øksendal, B., Sulem, A. (2014). Forward-backward stochastic differential games and stochastic control under model uncertainty. J. Optim. Theory Appl. 161(1):22–55. DOI: 10.1007/s10957-012-0166-7.
- Wang, S., Wu, Z. (2017). Stochastic maximum principle for optimal control problems of forward-backward delay systems involving impulse controls. J. Syst. Sci. Complex. 30(2):280–306. DOI: 10.1007/s11424-016-5039-y.
- Wu, Z. (2013). A general maximum principle for optimal control of forward-backward stochastic systems. Automatica 49 (5):1473–1480. DOI: 10.1016/j.automatica.2013.02.005.
- Wang, G., Wu, Z., Xiong, J. (2013). Maximum principles for forward-backward stochastic control systems with correlated state and observation noises. SIAM J. Control Optim. 51 (1):491–524. DOI: 10.1137/110846920.
- Øksendal, B. (2013). Stochastic Differential Equations: An Introduction with Applications. Berlin, Heidelberg: Springer.
- Øksendal, B., Sulem, A. (2000). A maximum principle for optimal control of stochastic systems with delay, with applications to finance. In: J.M. Menaldi, E. Rofman, A. Sulem, eds. Optimal Control and Partial Differential Equations—Innovations and Applications. Amsterdam: IOS Press.
- Di Nunno, G., Øksendal, B., Proske, F. (2009). Malliavin Calculus for Lévy Processes with Applications to Finance. Berlin, Heidelberg: Springer.
- Øksendal, B., Sulem, A. (2007). Applied Stochastic Control of Jump Diffusions. Berlin, Heidelberg: Springer.
- Øksendal, B., Sulem, A. (2014). Risk minimization in financial markets modeled by Itô Lévy processes. Research report, Department of Mathematics, University of Oslo.
- Aase, K., Øksendal, B., Privault, N., Ubøe, J. (2000). White noise generalizations of the Clark–Haussmann–Ocone theorem with application to mathematical finance. Finance Stoch. 4(4):465–496. DOI: 10.1007/PL00013528.
- Nualart, D. (2006). The Malliavin Calculus and Related Topics. Berlin, Heidelberg: Springer.
- Sanz-Solé, M. (2005). Malliavin Calculus. Lausanne: EPFL Press.
- Agram, N., Øksendal, B. (2015). Malliavin calculus and optimal control of stochastic Volterra equations. J. Optim. Theory Appl. 167(3):1070–1094. DOI: 10.1007/s10957-015-0753-5.
- Pham, H. (2009). Continuous-Time Stochastic Control and Optimization with Financial Applications. Stochastic Modelling and Applied Probability, Vol. 61. Berlin, Heidelberg: Springer.
- Rockafellar, R. T. (1970). Convex Analysis. Princeton: Princeton University Press.
- Di Nunno, G., Øksendal, B., Proske, F. (2004). White noise analysis for Lévy processes. J. Funct. Anal. 206(1):109–148. DOI: 10.1016/S0022-1236(03)00184-8.
- Schilling, R. L. (2005). Measures, Integrals and Martingales. Cambridge: Cambridge University Press.
- Ma, J., Yin, H., Zhang, J. (2012). On non-Markovian forward-backward SDEs and backward stochastic PDEs. Stochastic Process. Appl. 122(12):3980–4004. DOI: 10.1016/j.spa.2012.08.002.
- Øksendal, B., Sulem, A. (2016). Optimal control of predictive mean-field equations and applications to finance. In: Benth, F., Di Nunno, G., eds. Stochastics of Environmental and Financial Economics. Springer Proceedings in Mathematics & Statistics, Vol. 138. Cham: Springer.
- Dahl, K., Øksendal, B. (2017). Singular recursive utility. Stochastics 89(6–7):994–1014. DOI: 10.1080/17442508.2017.1303067.