Abstract
We introduce a mixed generalized Dynkin game/stochastic control problem with nonlinear f-expectation in a Markovian framework. We study both the case when the terminal reward function is only Borel measurable and the case when it is continuous. Using the characterization of the value function of a generalized Dynkin game via an associated doubly reflected BSDE (DRBSDE), we show that the value function of our problem coincides with the value function of an optimization problem over DRBSDEs. From this property, we establish a weak dynamic programming principle. We then prove a strong dynamic programming principle in the continuous case, which cannot be derived from the weak one. In particular, we have to show that the value function is continuous with respect to the time variable t, which requires technical tools of stochastic analysis and new results on DRBSDEs. We finally study the links between our mixed problem and generalized Hamilton–Jacobi–Bellman variational inequalities in both cases.
Notes
No potential conflict of interest was reported by the authors.