Research Papers

Functional quantization of rough volatility and applications to volatility derivatives

Pages 1769-1792 | Received 23 Jul 2021, Accepted 12 Oct 2023, Published online: 03 Dec 2023

Abstract

We develop a product functional quantization of rough volatility. Since the optimal quantizers can be computed offline, this new technique, built on the insightful works by [Luschgy, H. and Pagès, G., Functional quantization of Gaussian processes. J. Funct. Anal., 2002, 196(2), 486–531; Luschgy, H. and Pagès, G., High-resolution product quantization for Gaussian processes under sup-norm distortion. Bernoulli, 2007, 13(3), 653–671; Pagès, G., Quadratic optimal functional quantization of stochastic processes and numerical applications. In Monte Carlo and Quasi-Monte Carlo Methods 2006, pp. 101–142, 2007 (Springer: Berlin Heidelberg)], becomes a strong competitor in the new arena of numerical tools for rough volatility. We concentrate our numerical analysis on the pricing of options on the VIX and realized variance in the rough Bergomi model [Bayer, C., Friz, P.K. and Gatheral, J., Pricing under rough volatility. Quant. Finance, 2016, 16(6), 887–904] and compare our results to other benchmarks recently suggested.

1. Introduction

Gatheral et al. (2018) recently introduced a new framework for financial modelling. To be precise, according to the reference website https://sites.google.com/site/roughvol/home, almost 2400 days have passed since instantaneous volatility was shown to have a rough nature, in the sense that its sample paths are α-Hölder continuous with α < 1/2. Many studies, both empirical (Bennedsen et al. 2017, Fukasawa 2021, Fukasawa et al. 2022) and theoretical (Alòs et al. 2007, Fukasawa 2011), have confirmed this, showing that these so-called rough volatility models provide a more accurate fit both to the implied volatility surface and to historical volatility time series.

On equity markets, the quality of a model is usually measured by its ability to calibrate not only to the SPX implied volatility but also to VIX Futures and the VIX implied volatility. The market standard models had so far been Markovian, in particular the double mean-reverting process (Gatheral 2008, Huh et al. 2018), Bergomi's model (Bergomi 2005) and, to some extent, jump models (Carr and Madan 2014, Kokholm and Stisen 2015). However, they each suffer from several drawbacks, which the new generation of rough volatility models seems to overcome. For VIX Futures pricing, the rough version of Bergomi's model was thoroughly investigated in Jacquier et al. (2018a), showing accurate results. Nothing comes for free, though, and the new challenges set by rough volatility models lie on the numerical side, as new tools are needed to develop fast and accurate numerical techniques. Since classical simulation tools for fractional Brownian motion are too slow for realistic purposes, new schemes have been proposed to speed up simulation, among which the Monte Carlo hybrid scheme (Bennedsen et al. 2017, McCrickerd and Pakkanen 2018), a tree formulation (Horvath et al. 2019), quasi-Monte Carlo methods (Bayer et al. 2020) and Markovian approximations (Abi Jaber and El Euch 2019, Chen et al. 2021).

We suggest here a new approach, based on product functional quantization (Pagès 2007). Quantization was originally conceived as a discretization technique to approximate a continuous signal by a discrete one (Sheppard 1897), later developed at Bell Laboratories in the 1950s for signal transmission (Gersho and Gray 1992). It was, however, only in the 1990s that its power to compute (conditional) expectations of functionals of random variables (Graf and Luschgy 2007) was fully understood. Given an ℝ^d-valued random vector X on some probability space, optimal vector quantization investigates how to select an ℝ^d-valued random vector X̂, supported on at most N elements, that best approximates X according to a given criterion (such as the L^r-distance, r ≥ 1). Functional quantization is the infinite-dimensional version, approximating a stochastic process with a random vector taking a finite number of values in the space of trajectories of the original process. It has been investigated precisely (Luschgy and Pagès 2002, Pagès 2007) in the case of Brownian diffusions, in particular for financial applications (Pagès and Printems 2005). However, optimal functional quantizers are in general hard to compute numerically; instead, product functional quantizers provide a rate-optimal (so, in principle, sub-optimal) alternative often admitting closed-form expressions (Pagès and Printems 2005, Luschgy and Pagès 2007).

In section 2, we briefly review important properties of Gaussian Volterra processes, displaying a series expansion representation and paying special attention to the Riemann–Liouville case in section 2.2. This expansion yields, in section 3, a product functional quantization of the processes, which exhibits an L²-error of order log(N)^{−H}, with N the number of paths and H a regularity index. We then show, in section 3.1, that these functional quantizers, although sub-optimal, are stationary. We specialize our setup to the generalized rough Bergomi model in section 4 and show how product functional quantization applies to the pricing of VIX Futures and VIX options, proving in particular precise rates of convergence. Finally, section 5 provides a numerical confirmation of the quality of our approximations for VIX Futures and Call options on the VIX in the rough Bergomi model, benchmarked against other existing schemes. In this section, we also discuss how product functional quantization of the Riemann–Liouville process itself can be exploited to price options on realized variance.

Notations   We denote by ℕ the set of strictly positive integers, by C[0,1] the space of real-valued continuous functions on [0,1] and by L²[0,1] the Hilbert space of real-valued square integrable functions on [0,1], endowed with the inner product ⟨f,g⟩_{L²[0,1]} := ∫₀¹ f(t) g(t) dt, which induces the norm ‖f‖_{L²[0,1]} := (∫₀¹ |f(t)|² dt)^{1/2}, for f,g ∈ L²[0,1]. L²(P) denotes the space of square integrable (with respect to P) random variables.

2. Gaussian Volterra processes on ℝ₊

For clarity, we restrict ourselves to the time interval [0,1]. Let {W_t}_{t∈[0,1]} be a standard Brownian motion on a filtered probability space (Ω, F, {F_t}_{t∈[0,1]}, P), with {F_t}_{t∈[0,1]} its natural filtration. On this probability space, we introduce the Volterra process

(1)   Z_t := ∫₀ᵗ K(t−s) dW_s,   t ∈ [0,1],

and we consider the following assumptions for the kernel K:

Assumption 2.1

There exist α ∈ (−1/2, 1/2)\{0} and a continuously differentiable function L : (0,1] → (0,∞), slowly varying at 0 (that is, lim_{x↓0} L(tx)/L(x) = 1 for any t > 0) and bounded away from 0, with |L′(x)| ≤ C(1 + x^{−1}) for x ∈ (0,1], for some C > 0, such that K(x) = x^α L(x), for x ∈ (0,1].

This implies in particular that K ∈ L²[0,1], so that the stochastic integral (1) is well defined. The Gamma kernel K(u) = e^{−βu} u^α, with β > 0 and α ∈ (−1/2, 1/2)\{0}, is a classical example satisfying assumption 2.1. Straightforward computations show that the covariance function of Z reads R_Z(s,t) = ∫₀^{s∧t} K(t−u) K(s−u) du, for s,t ∈ [0,1]. Under assumption 2.1, Z is a Gaussian process admitting a version which is ε-Hölder continuous for any ε < 1/2 + α =: H, and hence also admits a continuous version (Bennedsen et al. 2017, proposition 2.11).

2.1. Series expansion

We introduce a series expansion representation for the centered Gaussian process Z in (1), which will be key to developing its functional quantization. Inspired by Luschgy and Pagès (2007), introduce the stochastic process

(2)   Y_t := Σ_{n≥1} K[ψ_n](t) ξ_n,   t ∈ [0,1],

where {ξ_n}_{n≥1} is a sequence of i.i.d. standard Gaussian random variables, {ψ_n}_{n≥1} denotes the orthonormal basis of L²[0,1] given by

(3)   ψ_n(t) = √2 cos(t/√λ_n),   with λ_n = 4/((2n−1)²π²),

and the operator K : L²[0,1] → C[0,1] is defined for f ∈ L²[0,1] as

(4)   K[f](t) := ∫₀ᵗ K(t−s) f(s) ds,   for all t ∈ [0,1].
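As an illustration, the truncated series (2) can be simulated directly once the coefficient functions K[ψ_n] are available. The sketch below evaluates K[ψ_n] by midpoint quadrature and samples one truncated path of Y; the Riemann–Liouville kernel, the value H = 0.1 and the truncation level m = 10 are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def psi(n, t):
    # psi_n(t) = sqrt(2) cos(t / sqrt(lambda_n)) with lambda_n = 4 / ((2n-1)^2 pi^2),
    # i.e. psi_n(t) = sqrt(2) cos((2n-1) pi t / 2): an orthonormal basis of L^2[0,1]
    return np.sqrt(2.0) * np.cos(0.5 * (2 * n - 1) * np.pi * t)

def K_op(kernel, n, t, n_grid=2000):
    # K[psi_n](t) = int_0^t K(u) psi_n(t - u) du, by the midpoint rule
    # (the midpoint grid never evaluates the kernel at its integrable singularity u = 0)
    if t == 0.0:
        return 0.0
    h = t / n_grid
    u = (np.arange(n_grid) + 0.5) * h
    return float(np.sum(kernel(u) * psi(n, t - u)) * h)

H = 0.1                                   # hypothetical roughness parameter
rl_kernel = lambda u: u ** (H - 0.5)      # Riemann-Liouville kernel (assumption 2.1)

rng = np.random.default_rng(0)
m = 10                                    # truncation level of the series
xi = rng.standard_normal(m)
ts = np.linspace(0.0, 1.0, 51)
Y = [sum(K_op(rl_kernel, n, t) * xi[n - 1] for n in range(1, m + 1)) for t in ts]

# sanity checks: orthonormality of psi_1, and K_op against the closed form for K = 1
grid = (np.arange(4000) + 0.5) / 4000
ortho_check = float(np.mean(psi(1, grid) ** 2))          # should be close to 1
flat_check = K_op(lambda u: np.ones_like(u), 1, 1.0)     # exact value: 2*sqrt(2)/pi
```

The midpoint rule is a deliberate (if crude) choice here: it keeps the sketch self-contained while avoiding the kernel singularity at the origin.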

Remark 2.2

The stochastic process Y in (2) is defined as a weighted sum of independent centered Gaussian random variables, so for every t ∈ [0,1] the random variable Y_t is centered Gaussian, and the whole process Y is Gaussian with zero mean.

We set the following assumptions on the functions {K[ψ_n]}_{n∈ℕ}:

Assumption 2.3

There exists H ∈ (0, 1/2) such that

  (A) there is a constant C₁ > 0 such that, for any n ≥ 1, K[ψ_n] is (H+1/2)-Hölder continuous, with sup_{s,t∈[0,1], s≠t} |K[ψ_n](t) − K[ψ_n](s)| / |t−s|^{H+1/2} ≤ C₁ n;

  (B) there exists a constant C₂ > 0 such that sup_{t∈[0,1]} |K[ψ_n](t)| ≤ C₂ n^{−(H+1/2)}, for all n ≥ 1.

Note that under these assumptions, the series (2) converges both almost surely and in L²(P) for each t ∈ [0,1], by the Khintchine–Kolmogorov convergence theorem (Chow and Teicher 1997, theorem 1, section 5.1).

It is natural to wonder whether assumption 2.1 implies assumption 2.3 given the basis functions (3). This is far from trivial in our general setup, and we provide examples and justifications later on for the models of interest. Similar considerations, with slightly different conditions, can be found in Luschgy and Pagès (2007). We now focus on the variance–covariance structure of the Gaussian process Y.

Lemma 2.4

For any s,t ∈ [0,1], the covariance function of Y is given by R_Y(s,t) := E[Y_s Y_t] = ∫₀^{s∧t} K(t−u) K(s−u) du.

Proof.

Exploiting the definition of Y in (2), the definition of K in (4) and the fact that the random variables ξ_n are i.i.d. standard Normal, we obtain

R_Y(s,t) = E[Y_s Y_t] = E[(Σ_{n≥1} K[ψ_n](s) ξ_n)(Σ_{m≥1} K[ψ_m](t) ξ_m)] = Σ_{n≥1} K[ψ_n](s) K[ψ_n](t)
= Σ_{n≥1} (∫₀¹ K(s−u) 1_{[0,s]}(u) ψ_n(u) du)(∫₀¹ K(t−r) 1_{[0,t]}(r) ψ_n(r) dr)
= Σ_{n≥1} ⟨K(s−·) 1_{[0,s]}(·), ψ_n⟩_{L²[0,1]} ⟨K(t−·) 1_{[0,t]}(·), ψ_n⟩_{L²[0,1]}
= ⟨K(t−·) 1_{[0,t]}(·), Σ_{n≥1} ⟨K(s−·) 1_{[0,s]}(·), ψ_n⟩_{L²[0,1]} ψ_n⟩_{L²[0,1]}
= ⟨K(t−·) 1_{[0,t]}(·), K(s−·) 1_{[0,s]}(·)⟩_{L²[0,1]}
= ∫₀¹ K(s−u) 1_{[0,s]}(u) K(t−u) 1_{[0,t]}(u) du = ∫₀^{s∧t} K(t−u) K(s−u) du,

where the second-to-last equality follows from Parseval's identity.

Remark 2.5

Note that the centered Gaussian stochastic process Y admits a continuous version, too. Indeed, we have shown that Y has the same mean and covariance function as Z and, consequently, that the increments of the two processes share the same distribution. Thus Bennedsen et al. (2017, proposition 2.11) applies to Y as well, yielding that the process admits a continuous version. This key property of Y can alternatively be proved directly, as done in appendix A.2.

Lemma 2.4 implies that E[Y_s Y_t] = E[Z_s Z_t], for all s,t ∈ [0,1]. Both Z and Y are continuous, centered and Gaussian, with the same covariance structure, so from now on we work with Y, using the representation

(5)   Z = Σ_{n≥1} K[ψ_n] ξ_n,   P-a.s.

2.2. The Riemann–Liouville case

For K(u) = u^{H−1/2}, with H ∈ (0, 1/2), the process (1) takes the form Z_t^H := ∫₀ᵗ (t−s)^{H−1/2} dW_s, t ∈ [0,1], where we add the superscript H to emphasize its importance. It is called a Riemann–Liouville process (henceforth RL), also known as Type II fractional Brownian motion or Lévy fractional Brownian motion, as it is obtained by applying the Riemann–Liouville fractional operator to the standard Brownian motion, and is an example of a Volterra process. This process enjoys properties similar to those of the fractional Brownian motion (fBM), in particular being H-self-similar and centered Gaussian. However, contrary to the fractional Brownian motion, its increments are not stationary. For a more detailed comparison between the fBM and Z^H, we refer to Picard (2011, theorem 5.1). In the RL case, the covariance function is available explicitly (Jacquier et al. 2018b, proposition 2.1) as

R_{Z^H}(s,t) = ((s∧t)^{H+1/2} (s∨t)^{H−1/2} / (H+1/2)) ₂F₁(1, 1/2−H; H+3/2; (s∧t)/(s∨t)),   s,t ∈ [0,1],

where ₂F₁(a,b;c;z) denotes the Gauss hypergeometric function (Olver 1997, chapter 5, section 9). More generally (Olver 1997, chapter 5, section 11), the generalized hypergeometric functions ₚF_q are defined as

(6)   ₚF_q(a₁,…,a_p; c₁,…,c_q; z) := Σ_{k≥0} [(a₁)_k (a₂)_k ⋯ (a_p)_k / ((c₁)_k (c₂)_k ⋯ (c_q)_k)] z^k / k!,

with the Pochhammer notation (a)₀ := 1 and (a)_k := a(a+1)(a+2)⋯(a+k−1) for k ≥ 1, where none of the c_k are negative integers or zero. For p ≤ q the series (6) converges for all z; when p = q+1, convergence holds for |z| < 1 and the function is defined outside this disk by analytic continuation; finally, when p > q+1 the series diverges for nonzero z unless one of the a_k's is zero or a negative integer.
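The RL covariance formula is straightforward to evaluate numerically. The sketch below, assuming SciPy's hyp2f1 for the Gauss hypergeometric function, implements it and cross-checks it against the defining integral (the parameter values are arbitrary); at s = t it reduces to the well-known variance t^{2H}/(2H).

```python
import numpy as np
from scipy.special import hyp2f1
from scipy.integrate import quad

def rl_cov(s, t, H):
    # R_{Z^H}(s,t) = ((s^t)^(H+1/2) (svt)^(H-1/2) / (H+1/2))
    #                * 2F1(1, 1/2 - H; H + 3/2; (s^t)/(svt))
    lo, hi = min(s, t), max(s, t)
    if lo == 0.0:
        return 0.0
    return (lo ** (H + 0.5) * hi ** (H - 0.5) / (H + 0.5)
            * hyp2f1(1.0, 0.5 - H, H + 1.5, lo / hi))

H, s, t = 0.1, 0.3, 0.7
# cross-check against the defining integral int_0^s (t-u)^(H-1/2) (s-u)^(H-1/2) du,
# written with v = s - u so that the singularity sits at the left endpoint
direct, _ = quad(lambda v: (t - s + v) ** (H - 0.5) * v ** (H - 0.5), 0.0, s)
```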

Regarding the series representation (2), we have, for t ∈ [0,1] and n ≥ 1,

(7)   K_H[ψ_n](t) := √2 ∫₀ᵗ (t−s)^{H−1/2} cos(s/√λ_n) ds = (2√2/(1+2H)) t^{H+1/2} ₁F₂(1; 3/4 + H/2, 5/4 + H/2; −t²/(4λ_n)).

Assumption 2.3 holds in the RL case using Luschgy and Pagès (2007, lemma 4), identifying K_H[ψ_n] with f_n from Luschgy and Pagès (2007, equation (3.7)). Assumption 2.3 (B) implies that, for all t ∈ [0,1],

Σ_{n≥1} K_H[ψ_n](t)² ≤ Σ_{n≥1} (sup_{t∈[0,1]} |K_H[ψ_n](t)|)² ≤ C₂² Σ_{n≥1} n^{−(1+2H)} < ∞,

and therefore the series (2) converges both almost surely and in L²(P) for each t ∈ [0,1], by the Khintchine–Kolmogorov convergence theorem (Chow and Teicher 1997, theorem 1, section 5.1).
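Equation (7) can be evaluated with any implementation of ₁F₂; the sketch below assumes mpmath's hyp1f2 and cross-checks the closed form against the defining integral, including the Brownian case H = 1/2, where K_{1/2}[ψ_n](t) = √(2λ_n) sin(t/√λ_n).

```python
import numpy as np
from mpmath import hyp1f2
from scipy.integrate import quad

def K_psi(n, t, H):
    # K_H[psi_n](t) = (2 sqrt(2) / (1 + 2H)) * t^(H+1/2)
    #                 * 1F2(1; 3/4 + H/2, 5/4 + H/2; -t^2 / (4 lambda_n)),  equation (7)
    if t == 0.0:
        return 0.0
    lam = 4.0 / ((2 * n - 1) ** 2 * np.pi ** 2)
    z = -t * t / (4.0 * lam)
    return (2.0 * np.sqrt(2.0) / (1.0 + 2.0 * H) * t ** (H + 0.5)
            * float(hyp1f2(1.0, 0.75 + 0.5 * H, 1.25 + 0.5 * H, z)))

# cross-checks: the defining integral for H = 0.1, and the Brownian closed form,
# K_{1/2}[psi_1](0.5) = sqrt(2)*(2/pi)*sin(pi/4) = 2/pi
H, n, t = 0.1, 2, 0.8
direct, _ = quad(lambda v: v ** (H - 0.5) * np.sqrt(2.0)
                 * np.cos(0.5 * (2 * n - 1) * np.pi * (t - v)), 0.0, t)
bm_value = K_psi(1, 0.5, 0.5)
```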

Remark 2.6

The expansion (2) is in general not a Karhunen–Loève decomposition (Pagès and Printems 2005, section 4.1.1). In the RL case, it can be numerically checked that the basis {K_H[ψ_n]}_{n∈ℕ} is not orthogonal in L²[0,1] and does not consist of eigenvectors of the covariance operator of the Riemann–Liouville process. In his PhD thesis, Corlay (2011) exploited a numerical method to obtain approximations of the first terms in the K–L expansion of processes for which an explicit form is not available.

3. Functional quantization and error estimation

Optimal (quadratic) vector quantization was conceived to approximate a square integrable random vector X : (Ω, F, P) → ℝ^d by another one, X̂, taking at most a finite number N of values, on a grid Γ_N := {x₁^N, x₂^N, …, x_N^N}, with x_i^N ∈ ℝ^d for i = 1,…,N. The quantization of X is defined as X̂ := Proj_{Γ_N}(X), where Proj_{Γ_N} : ℝ^d → Γ_N denotes the nearest-neighbor projection. Of course, the choice of the N-quantizer Γ_N is based on a given optimality criterion: in most cases Γ_N minimizes the distance E[|X − X̂|²]^{1/2}. We recall basic results for the one-dimensional standard Gaussian distribution, which shall be needed later, and refer to Graf and Luschgy (2007) for a comprehensive introduction to quantization.

Definition 3.1

Let ξ be a one-dimensional standard Gaussian random variable on a probability space (Ω, F, P). For each n ∈ ℕ, we define the optimal quadratic n-quantization of ξ as the random variable

ξ̂ⁿ := Proj_{Γₙ}(ξ) = Σ_{i=1}^n x_iⁿ 1_{C_i(Γₙ)}(ξ),

where Γₙ = {x₁ⁿ,…,xₙⁿ} is the unique optimal quadratic n-quantizer of ξ, namely the unique solution to the minimization problem min_{Γₙ⊂ℝ, Card(Γₙ)=n} E[|ξ − Proj_{Γₙ}(ξ)|²], and {C_i(Γₙ)}_{i∈{1,…,n}} is a Voronoi partition of ℝ, that is, a Borel partition of ℝ satisfying

C_i(Γₙ) ⊂ {y ∈ ℝ : |y − x_iⁿ| = min_{1≤j≤n} |y − x_jⁿ|} ⊂ C̄_i(Γₙ),

where C̄_i(Γₙ) denotes the closure of C_i(Γₙ) in ℝ.

The unique optimal quadratic n-quantizer Γₙ = {x₁ⁿ,…,xₙⁿ} and the corresponding quadratic error are available online, at http://www.quantize.maths-fi.com/gaussian_database, for n ∈ {1,…,5999}.
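When the precomputed database is not at hand, the optimal quadratic n-quantizer of a standard Gaussian can be approximated by a Lloyd-type fixed-point iteration (one standard way to compute it, not necessarily the method behind the database): alternate between Voronoi midpoints and conditional means over the cells.

```python
import numpy as np
from scipy.stats import norm

def gaussian_quantizer(n, n_iter=200):
    # Lloyd fixed point for the optimal quadratic n-quantizer of N(0,1):
    # iterate x_i <- E[xi | xi in C_i], with cells delimited by midpoints
    x = norm.ppf((np.arange(1, n + 1) - 0.5) / n)   # quantile initialization
    for _ in range(n_iter):
        mid = np.concatenate(([-np.inf], (x[1:] + x[:-1]) / 2.0, [np.inf]))
        # centroid of N(0,1) over (a, b): (phi(a) - phi(b)) / (Phi(b) - Phi(a))
        num = norm.pdf(mid[:-1]) - norm.pdf(mid[1:])
        den = norm.cdf(mid[1:]) - norm.cdf(mid[:-1])
        x = num / den
    return x

q1 = gaussian_quantizer(1)   # the 1-quantizer is {0}
q2 = gaussian_quantizer(2)   # known closed form: {-sqrt(2/pi), +sqrt(2/pi)}
q5 = gaussian_quantizer(5)
```

Since the Gaussian density is log-concave, the optimal quantizer is unique and the iteration converges to it; the n = 2 case can be checked against the closed form.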

Given a stochastic process, viewed as a random vector taking values in its trajectory space, such as L²[0,1], functional quantization is the analogue of vector quantization in an infinite-dimensional setting, approximating the process with a finite number of trajectories. In this section, we focus on product functional quantization of the centered Gaussian process Z from (1) of order N (see Pagès 2007, section 7.4 for a general introduction to product functional quantization). Recall that we are working with the continuous version of Z given by the series (5). For any m, N ∈ ℕ, we introduce the following set, which will be of key importance throughout the paper:

(8)   D_m^N := {d ∈ ℕ^m : Π_{i=1}^m d(i) ≤ N}.

Definition 3.2

A product functional quantization of Z of order N is defined as

(9)   Ẑ_t^d := Σ_{n=1}^m K[ψ_n](t) ξ̂_n^{d(n)},   t ∈ [0,1],

where d ∈ D_m^N, for some m ∈ ℕ, and, for every n ∈ {1,…,m}, ξ̂_n^{d(n)} is the (unique) optimal quadratic quantization of the standard Gaussian random variable ξ_n of order d(n), according to definition 3.1.

Remark 3.3

The condition Π_{i=1}^m d(i) ≤ N in (8) motivates the wording ‘product’ functional quantization. Clearly, the optimality of the quantizer also depends on the choice of m and d, for which we refer to proposition 3.6 and section 5.1.

Before proceeding, we need to make precise the explicit form for the product functional quantizer of the stochastic process Z:

Definition 3.4

The product functional d-quantizer of Z is defined as

χ_i^d(t) := Σ_{n=1}^m K[ψ_n](t) x_{i_n}^{d(n)},   t ∈ [0,1],

for i = (i₁,…,i_m), d ∈ D_m^N and 1 ≤ i_n ≤ d(n) for each n = 1,…,m.

Remark 3.5

Intuitively, the quantizer is chosen as a Cartesian product of grids of one-dimensional standard Gaussian random variables. We therefore also immediately obtain the probability associated with every trajectory χ_i^d: for every i = (i₁,…,i_m) ∈ Π_{n=1}^m {1,…,d(n)},

P(Ẑ^d = χ_i^d) = Π_{n=1}^m P(ξ_n ∈ C_{i_n}(Γ_{d(n)})),

where C_j(Γ_{d(n)}) is the jth Voronoi cell relative to the d(n)-quantizer Γ_{d(n)} in definition 3.1.
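A minimal sketch of definition 3.4 and remark 3.5 in the RL case, under assumed choices made purely for illustration: two coordinates, d = (2,2), the closed-form two-point Gaussian quantizer {±√(2/π)} with cell probabilities 1/2, and midpoint quadrature for the weights. Each trajectory is a weighted combination of one-dimensional quantization points, and its probability is a product of cell probabilities.

```python
import itertools
import numpy as np

H = 0.1
m, d = 2, (2, 2)     # hypothetical small product quantizer: N = 4 trajectories
# two-point optimal quantizer of N(0,1) and its Voronoi cell probabilities
levels = np.array([-np.sqrt(2.0 / np.pi), np.sqrt(2.0 / np.pi)])
cell_p = np.array([0.5, 0.5])

ts = np.linspace(0.0, 1.0, 101)

def K_psi_num(n, t, n_quad=2000):
    # K_H[psi_n](t) = int_0^t u^(H-1/2) psi_n(t-u) du, midpoint rule
    if t == 0.0:
        return 0.0
    h = t / n_quad
    u = (np.arange(n_quad) + 0.5) * h
    return float(np.sum(u ** (H - 0.5) * np.sqrt(2.0)
                        * np.cos(0.5 * (2 * n - 1) * np.pi * (t - u))) * h)

weights = np.array([[K_psi_num(n, t) for t in ts] for n in (1, 2)])

# quantizer trajectories chi_i^d and their probabilities
paths, probs = [], []
for idx in itertools.product(range(d[0]), range(d[1])):
    paths.append(weights[0] * levels[idx[0]] + weights[1] * levels[idx[1]])
    probs.append(cell_p[idx[0]] * cell_p[idx[1]])
```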

The following result, proved in appendix A.1, deals with the quantization error estimation and its minimization, and provides hints on how to choose (m,d). A similar error bound can be obtained by applying Luschgy and Pagès (2007, theorem 2) to the first example provided in that reference. For completeness, we preferred to prove the result in an autonomous way, so as to further characterize the explicit expression of the rate-optimal parameters. Indeed, we then compare these rate-optimal parameters with the (numerically computed) optimal ones in section 5.1. The symbol ⌊·⌋ denotes the lower integer part (floor).

Proposition 3.6

Under assumption 2.3, for any N ≥ 1, there exist m(N) ∈ ℕ and C > 0 such that

E[‖Ẑ^{d_N} − Z‖²_{L²[0,1]}]^{1/2} ≤ C log(N)^{−H},

where d_N ∈ D_{m(N)}^N and, for each n = 1,…,m(N),

d_N(n) = ⌊N^{1/m(N)} n^{−(H+1/2)} (m(N)!)^{(2H+1)/(2m(N))}⌋.

Furthermore m(N) = O(log(N)).
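The rate-optimal allocation of proposition 3.6 is immediate to code; before flooring, the factors n^{−(H+1/2)} multiply out to (m!)^{−(H+1/2)}, so the raw product equals N exactly and flooring keeps Π d_N(n) ≤ N. The max(1, ·) clamp below is an assumption added here for safety, since a quantizer level must be at least 1.

```python
import math

def rate_optimal_d(N, m, H):
    # d_N(n) = floor( N^(1/m) * n^(-(H+1/2)) * (m!)^((2H+1)/(2m)) ),  n = 1, ..., m
    c = N ** (1.0 / m) * math.factorial(m) ** ((2.0 * H + 1.0) / (2.0 * m))
    return [max(1, math.floor(c * n ** (-(H + 0.5)))) for n in range(1, m + 1)]

d = rate_optimal_d(1000, 3, 0.1)   # illustrative values of N, m, H
```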

Remark 3.7

In the RL case, the trajectories of Ẑ^{H,d} are easily computable and are used in the numerical implementations to approximate the process Z^H. In practice, the parameters m and d = (d(1),…,d(m)) are chosen as explained in section 5.1.

3.1. Stationarity

We now show that the quantizers we are using are stationary. The use of stationary quantizers is motivated by the fact that their expectation provides a lower bound for the expectation of convex functionals of the process (remark 3.9), and they yield a lower (weak) error in cubature formulae (Pagès 2007, page 26). We first recall the definition of stationarity for the quadratic quantizer of a random vector (Pagès 2007, definition 1).

Definition 3.8

Let X be an ℝ^d-valued random vector on (Ω, F, P). A quantizer Γ for X is stationary if the nearest-neighbor projection X̂^Γ = Proj_Γ(X) satisfies

(10)   E[X | X̂^Γ] = X̂^Γ.

Remark 3.9

Taking expectations on both sides of (10) yields E[X] = E[E[X | X̂^Γ]] = E[X̂^Γ]. Furthermore, for any convex function f : ℝ^d → ℝ, the identity above, the conditional Jensen inequality and the tower property yield

E[f(X̂^Γ)] = E[f(E[X | X̂^Γ])] ≤ E[E[f(X) | X̂^Γ]] = E[f(X)].

While an optimal quadratic quantizer of order N of a random vector is always stationary (Pagès 2007, proposition 1(c)), the converse is not true in general. We now present the corresponding definition for a stochastic process.

Definition 3.10

Let {X_t}_{t∈[T₁,T₂]} be a stochastic process on (Ω, F, {F_t}_{t∈[T₁,T₂]}, P). We say that an N-quantizer Λ_N := {λ₁^N,…,λ_N^N} ⊂ L²[T₁,T₂], inducing the quantization X̂ = X̂^{Λ_N}, is stationary if E[X_t | X̂_t] = X̂_t, for all t ∈ [T₁,T₂].

Remark 3.11

To ease notation, we omit the grid Λ_N in X̂^{Λ_N}, while the dependence on the dimension N remains via the superscript d ∈ D_m^N (recall (9)).

As stated in section 2.1, we are working with the continuous version of the Gaussian Volterra process Z given by the series expansion (5). This will ease the proof of stationarity below; see Pagès (2007, proposition 2) for a similar result in the case of the Brownian motion.

Proposition 3.12

The product functional quantizers inducing Ẑ^d in (9) are stationary.

Proof.

For any t ∈ [0,1], by linearity, we have the following chain of equalities:

E[Z_t | {ξ̂_n^{d(n)}}_{1≤n≤m}] = E[Σ_{k≥1} K[ψ_k](t) ξ_k | {ξ̂_n^{d(n)}}_{1≤n≤m}] = Σ_{k≥1} K[ψ_k](t) E[ξ_k | {ξ̂_n^{d(n)}}_{1≤n≤m}].

Since the N(0,1) random variables ξ_n are i.i.d. and optimal quadratic quantizers are stationary, we have E[ξ_k | ξ̂_i^{d(i)}] = δ_{ik} ξ̂_i^{d(i)} for all i,k ∈ {1,…,m}, and therefore

E[ξ_k | {ξ̂_n^{d(n)}}_{1≤n≤m}] = E[ξ_k | ξ̂_k^{d(k)}] = ξ̂_k^{d(k)},   for all k ∈ {1,…,m},

while E[ξ_k | {ξ̂_n^{d(n)}}_{1≤n≤m}] = 0 for k > m, by independence. Thus we obtain

E[Z_t | {ξ̂_n^{d(n)}}_{1≤n≤m}] = Σ_{k=1}^m K[ψ_k](t) ξ̂_k^{d(k)} = Ẑ_t^d.

Finally, exploiting the tower property and the fact that the σ-algebra generated by Ẑ_t^d is included in the σ-algebra generated by {ξ̂_n^{d(n)}}_{n∈{1,…,m}} by definition 3.2, we obtain

E[Z_t | Ẑ_t^d] = E[E[Z_t | {ξ̂_n^{d(n)}}_{n∈{1,…,m}}] | Ẑ_t^d] = E[Ẑ_t^d | Ẑ_t^d] = Ẑ_t^d,

which concludes the proof.

4. Application to VIX derivatives in rough Bergomi

We now specialize the setup above to the case of rough volatility models. These models are extensions of classical stochastic volatility models, introduced to better reproduce the market implied volatility surface. The volatility process is stochastic and driven by a rough process, by which we mean a process whose trajectories are H-Hölder continuous with H ∈ (0, 1/2). The empirical study of Gatheral et al. (2018) was the first to suggest such rough behavior for the volatility, and ignited tremendous interest in the topic. The website https://sites.google.com/site/roughvol/home contains an exhaustive and up-to-date review of the literature on rough volatility. Unlike continuous Markovian stochastic volatility models, which are not able to fully describe the steep implied volatility skew of short-maturity options in equity markets, rough volatility models have shown an accurate fit for this crucial feature. Within rough volatility, the rough Bergomi model (Bayer et al. 2016) is one of the simplest, yet decisive, frameworks to harness the power of roughness for pricing purposes. We show how to adapt our functional quantization setup to this case.

4.1. The generalized Bergomi model

We work here with a slightly generalized version of the rough Bergomi model, defined as

X_t = −(1/2)∫₀ᵗ V_s ds + ∫₀ᵗ √V_s dB_s,   X₀ = 0,
V_t = v₀(t) exp{γ Z_t − (γ²/2)∫₀ᵗ K(t−s)² ds},   V₀ > 0,

where X is the log-stock price, V the instantaneous variance process driven by the Gaussian Volterra process Z in (1), γ > 0, and B is a Brownian motion defined as B := ρW + √(1−ρ²) W^⊥, for some correlation ρ ∈ [−1,1] and orthogonal Brownian motions W, W^⊥. The filtered probability space is therefore taken with F_t = F_t^W ∨ F_t^{W^⊥}, t ≥ 0. This is a non-Markovian generalization of Bergomi's second-generation stochastic volatility model (Bergomi 2005), letting the variance be driven by a Gaussian Volterra process instead of a standard Brownian motion. Here, v_T(t) denotes the forward variance for a remaining maturity t, observed at time T. In particular, v₀ is the initial forward variance curve, assumed to be F₀-measurable. Indeed, given market prices of variance swaps σ_T²(t) at time T with remaining maturity t, the forward variance curve can be recovered as v_T(t) = d/dt (t σ_T²(t)), for all t ≥ 0, and the process {v_s(t−s)}_{0≤s≤t} is a martingale for every fixed t > 0.

Remark 4.1

With K(u) = u^{H−1/2}, γ = 2ν C_H for ν > 0, and C_H := √(2H Γ(3/2−H) / (Γ(H+1/2) Γ(2−2H))), we recover the standard rough Bergomi model (Bayer et al. 2016).
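For reference, the reparametrization of remark 4.1 in code; the checks rely only on Γ(1) = 1, so that C_{1/2} = 1 (the parameter values used are illustrative).

```python
import math

def C_H(H):
    # C_H = sqrt( 2H * Gamma(3/2 - H) / (Gamma(H + 1/2) * Gamma(2 - 2H)) )
    return math.sqrt(2.0 * H * math.gamma(1.5 - H)
                     / (math.gamma(H + 0.5) * math.gamma(2.0 - 2.0 * H)))

def gamma_param(nu, H):
    # gamma = 2 * nu * C_H maps (nu, H) to the kernel scaling of the generalized model
    return 2.0 * nu * C_H(H)
```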

4.2. VIX Futures in the generalized Bergomi

We consider the pricing of VIX Futures (www.cboe.com/tradable_products/vix/) in the rough Bergomi model. They are highly liquid Futures on the Chicago Board Options Exchange Volatility Index, introduced on March 26, 2004, to allow for trading in the underlying VIX. Each VIX Future represents the expected implied volatility for the 30 days following the expiration date of the Futures contract itself. The continuous version of the VIX at time T is determined by the continuous-time monitoring formula

(11)   VIX_T² := E_T[(1/Δ)∫_T^{T+Δ} d⟨X,X⟩_s] = (1/Δ)∫_T^{T+Δ} E[V_s | F_T] ds
       = (1/Δ)∫_T^{T+Δ} E_T[v₀(s) e^{γ Z_s − (γ²/2)∫₀^s K(s−u)² du}] ds
       = (1/Δ)∫_T^{T+Δ} v₀(s) e^{γ∫₀^T K(s−u) dW_u − (γ²/2)∫₀^s K(s−u)² du} E_T[e^{γ∫_T^s K(s−u) dW_u}] ds
       = (1/Δ)∫_T^{T+Δ} v₀(s) e^{γ∫₀^T K(s−u) dW_u − (γ²/2)∫₀^s K(s−u)² du} e^{(γ²/2)∫_T^s K(s−u)² du} ds,

similarly to Jacquier et al. (2018a), where Δ equals 30 days and we write E_T[·] := E[· | F_T] (dropping the subscript when T = 0). Thus the price of a VIX Future with maturity T is given by

P_T := E[VIX_T] = E[((1/Δ)∫_T^{T+Δ} v₀(t) e^{γ Z_t^{T,Δ} + (γ²/2)(∫₀^{t−T} K(s)² ds − ∫₀ᵗ K(s)² ds)} dt)^{1/2}],

where the process (Z_t^{T,Δ})_{t∈[T,T+Δ]} is given by Z_t^{T,Δ} = ∫₀^T K(t−s) dW_s, for t ∈ [T,T+Δ]. To develop a functional quantization setup for VIX Futures, we need to quantize the process Z^{T,Δ}, which is close to, yet slightly different from, the Gaussian Volterra process Z in (1).

4.3. Properties of ZT

To retrieve the same setting as above, we normalize the time interval to [0,1], that is, T+Δ = 1. Then, for T fixed, we define the process Z^T := Z^{T,1−T} as Z_t^T := ∫₀^T K(t−s) dW_s, t ∈ [T,1], which is well defined by the square integrability of K. By definition, the process Z^T is centered Gaussian, and the Itô isometry gives its covariance function as R_{Z^T}(t,s) = ∫₀^T K(t−u) K(s−u) du, for t,s ∈ [T,1]. Proceeding as previously, we introduce a Gaussian process with the same mean and covariance as Z^T, represented as a series expansion involving standard Gaussian random variables, from which product functional quantization follows. It is easy to see that the process Z^T has continuous trajectories. Indeed, since Z_t^T = E[Z_t | F_T^W], the conditional Jensen inequality yields (Z_t^T − Z_s^T)² ≤ E[|Z_t − Z_s|² | F_T^W]. Then, applying the tower property, for any T ≤ s < t ≤ 1, E[|Z_t^T − Z_s^T|²] ≤ E[|Z_t − Z_s|²], and therefore the Hölder regularity of Z (section 2) transfers to Z^T.

4.3.1. Series expansion

Let {ξ_n}_{n≥1} be an i.i.d. sequence of standard Gaussian random variables and {ψ_n}_{n≥1} the orthonormal basis of L²[0,1] from (3). Denote by K^T the operator from L²[0,1] to C[T,1] that associates to each f ∈ L²[0,1]

(12)   K^T[f](t) := ∫₀^T K(t−s) f(s) ds,   t ∈ [T,1].

We define the process Y^T as (recall the analogous (2)):

Y_t^T := Σ_{n≥1} K^T[ψ_n](t) ξ_n,   t ∈ [T,1].

The lemma below follows from the corresponding results in remark 2.2 and lemma 2.4:

Lemma 4.2

The process Y^T is centered Gaussian, with covariance function R_{Y^T}(s,t) := E[Y_s^T Y_t^T] = ∫₀^T K(t−u) K(s−u) du, for all s,t ∈ [T,1].

To complete the analysis of Z^T, we require an analogous version of assumption 2.3.

Assumption 4.3

Assumption 2.3 holds for the sequence {K^T[ψ_n]}_{n≥1} on [T,1], with the constants C₁ and C₂ depending on T.

4.4. The truncated RL case

We again pay special attention to the RL case, for which the operator (12) reads, for each n ∈ ℕ,

K_H^T[ψ_n](t) := ∫₀^T (t−s)^{H−1/2} ψ_n(s) ds,   for all t ∈ [T,1],

and satisfies the following, proved in appendix A.4.

Lemma 4.4

The functions {K_H^T[ψ_n]}_{n≥1} satisfy assumption 4.3.

A key role in this proof is played by an intermediate lemma, proved in appendix A.3, which provides a convenient representation for the integral ∫₀^T (t−u)^{H−1/2} e^{iπu} du, for t ≥ T ≥ 0, in terms of the generalized hypergeometric function ₁F₂.

Lemma 4.5

For any t ≥ T ≥ 0, the representation

∫₀^T (t−u)^{H−1/2} e^{iπu} du = e^{iπt} [(ζ_{1/2}(t,h₁) − ζ_{1/2}(t−T,h₁)) − iπ (ζ_{3/2}(t,h₂) − ζ_{3/2}(t−T,h₂))]

holds, where h₁ := (H+1/2)/2, h₂ := 1/2 + h₁, χ(z) := −π²z²/4 and

(13)   ζ_k(z,h) := (z^{2h}/(2h)) ₁F₂(h; k, 1+h; χ(z)),   for k ∈ {1/2, 3/2}.

Remark 4.6

The representation in lemma 4.5 can be exploited to obtain an explicit formula for K_H^T[ψ_n](t), for t ∈ [T,1] and n ∈ ℕ. Setting m := n − 1/2, so that 1/√λ_n = mπ,

K_H^T[ψ_n](t) = √2 m^{−(H+1/2)} ∫₀^{mT} (mt−u)^{H−1/2} cos(πu) du
= √2 m^{−(H+1/2)} ℜ{∫₀^{mT} (mt−u)^{H−1/2} e^{iπu} du}
= √2 m^{−(H+1/2)} ℜ{e^{iπmt} [(ζ_{1/2}(mt,h₁) − ζ_{1/2}(m(t−T),h₁)) − iπ (ζ_{3/2}(mt,h₂) − ζ_{3/2}(m(t−T),h₂))]}
= √2 m^{−(H+1/2)} {cos(mtπ)(ζ_{1/2}(mt,h₁) − ζ_{1/2}(m(t−T),h₁)) + π sin(mtπ)(ζ_{3/2}(mt,h₂) − ζ_{3/2}(m(t−T),h₂))},

with ζ_{1/2}, ζ_{3/2} as in (13). We shall exploit this in our numerical simulations.
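Lemma 4.5 and equation (13) translate directly into code. The sketch below assumes mpmath's hyp1f2 for ₁F₂ and verifies the representation against direct quadrature for one arbitrary choice of H, T and t.

```python
import numpy as np
from mpmath import hyp1f2
from scipy.integrate import quad

def zeta(k, z, h):
    # zeta_k(z, h) = (z^(2h) / (2h)) * 1F2(h; k, 1 + h; -pi^2 z^2 / 4),  equation (13)
    if z == 0.0:
        return 0.0
    chi = -(np.pi ** 2) * z * z / 4.0
    return float(z ** (2.0 * h) / (2.0 * h)) * float(hyp1f2(h, k, 1.0 + h, chi))

def truncated_fourier_integral(T, t, H):
    # int_0^T (t-u)^(H-1/2) e^{i pi u} du via lemma 4.5, valid for t >= T >= 0
    h1 = 0.5 * (H + 0.5)
    h2 = 0.5 + h1
    re = zeta(0.5, t, h1) - zeta(0.5, t - T, h1)
    im = -np.pi * (zeta(1.5, t, h2) - zeta(1.5, t - T, h2))
    return np.exp(1j * np.pi * t) * (re + 1j * im)

H, T, t = 0.1, 0.4, 0.7      # illustrative parameters with t > T (smooth integrand)
val = truncated_fourier_integral(T, t, H)
re_direct, _ = quad(lambda u: (t - u) ** (H - 0.5) * np.cos(np.pi * u), 0.0, T)
im_direct, _ = quad(lambda u: (t - u) ** (H - 0.5) * np.sin(np.pi * u), 0.0, T)
```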

4.5. VIX derivatives pricing

We can now introduce the quantization of the process Z^{T,Δ}, similarly to definition 3.2, recalling the definition of the set D_m^N in (8):

Definition 4.7

A product functional quantization of Z^{T,Δ} of order N is defined as

Ẑ_t^{T,Δ,d} := Σ_{n=1}^m K^{T,Δ}[ψ_n^{T,Δ}](t) ξ̂_n^{d(n)},   t ∈ [T,T+Δ],

where d ∈ D_m^N, for some m ∈ ℕ, and, for every n ∈ {1,…,m}, ξ̂_n^{d(n)} is the (unique) optimal quadratic quantization of the Gaussian random variable ξ_n of order d(n).

The sequence {ψ_n^{T,Δ}}_{n∈ℕ} denotes the orthonormal basis of L²[0,T+Δ] given by

ψ_n^{T,Δ}(t) = √(2/(T+Δ)) cos(t/(√λ_n (T+Δ))),   with λ_n = 4/((2n−1)²π²),

and the operator K^{T,Δ} : L²[0,T+Δ] → C[T,T+Δ] is defined for f ∈ L²[0,T+Δ] as

K^{T,Δ}[f](t) := ∫₀^T K(t−s) f(s) ds,   t ∈ [T,T+Δ].

Adapting the proof of proposition 3.12, one can show that these quantizers are stationary, too.

Remark 4.8

The dependence on Δ is due to the fact that the coefficients in the series expansion depend on the time interval [T,T+Δ].

In the RL case, for each n ∈ ℕ, we can write, using remark 4.6 and setting m̃ := (n−1/2)/(T+Δ), for any t ∈ [T,T+Δ]:

K_H^{T,Δ}[ψ_n^{T,Δ}](t) = √(2/(T+Δ)) ∫₀^T (t−s)^{H−1/2} cos(s/(√λ_n (T+Δ))) ds
= √2 (T+Δ)^H (n−1/2)^{−(H+1/2)} ∫₀^{m̃T} (m̃t−u)^{H−1/2} cos(πu) du
= √2 (T+Δ)^H (n−1/2)^{−(H+1/2)} {cos(m̃tπ)(ζ_{1/2}(m̃t,h₁) − ζ_{1/2}(m̃(t−T),h₁)) + π sin(m̃tπ)(ζ_{3/2}(m̃t,h₂) − ζ_{3/2}(m̃(t−T),h₂))}.

We thus exploit Ẑ^{T,Δ,d} to obtain an estimation of VIX_T and of VIX Futures through the following:

(14)   VIX̂_T^d := ((1/Δ)∫_T^{T+Δ} v₀(t) exp{γ Ẑ_t^{T,Δ,d} + (γ²/2)(∫₀^{t−T} K(s)² ds − ∫₀ᵗ K(s)² ds)} dt)^{1/2},
       P̂_T^d := E[((1/Δ)∫_T^{T+Δ} v₀(t) exp{γ Ẑ_t^{T,Δ,d} + (γ²/2)(∫₀^{t−T} K(s)² ds − ∫₀ᵗ K(s)² ds)} dt)^{1/2}].

Remark 4.9

The expectation above reduces to the following deterministic summation, making its computation immediate:

P̂_T^d = E[((1/Δ)∫_T^{T+Δ} v₀(t) e^{γ Σ_{n=1}^m K^{T,Δ}[ψ_n^{T,Δ}](t) ξ̂_n^{d(n)} + (γ²/2)(∫₀^{t−T} K(s)² ds − ∫₀ᵗ K(s)² ds)} dt)^{1/2}]
= Σ_{i∈I^d} ((1/Δ)∫_T^{T+Δ} v₀(t) e^{γ Σ_{n=1}^m K^{T,Δ}[ψ_n^{T,Δ}](t) x_{i_n}^{d(n)} + (γ²/2)(∫₀^{t−T} K(s)² ds − ∫₀ᵗ K(s)² ds)} dt)^{1/2} Π_{n=1}^m P(ξ_n ∈ C_{i_n}(Γ_{d(n)})),

where ξ̂_n^{d(n)} is the (unique) optimal quadratic quantization of ξ_n of order d(n), with points x_{i_n}^{d(n)}, C_j(Γ_{d(n)}) is the jth Voronoi cell relative to the d(n)-quantizer (definition 3.1), for j = 1,…,d(n), and the sum runs over i = (i₁,…,i_m) ∈ I^d := Π_{j=1}^m {1,…,d(j)}. In the numerical illustrations displayed in section 5, we exploited Simpson's rule to evaluate these integrals; in particular, we used the simps function from scipy.integrate with 300 points.
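The deterministic summation can be sketched end to end. The snippet below assumes, purely for illustration: a flat forward variance curve, the RL kernel with arbitrary parameter values, a deliberately small product quantizer d = (2,2) whose one-dimensional grids are the closed-form two-point quantizers, midpoint quadrature for the coefficient functions, and scipy.integrate.simpson (the current name of simps) on a 301-point grid.

```python
import itertools
import numpy as np
from scipy.integrate import simpson

# hypothetical parameters: flat forward variance and RL kernel
H, gamma_, v0 = 0.1, 0.3, 0.04
T, Delta = 0.5, 30.0 / 365.0
m, d = 2, (2, 2)                                    # small product quantizer

# two-point optimal quantizer of N(0,1) and its cell probabilities
x2 = np.array([-np.sqrt(2.0 / np.pi), np.sqrt(2.0 / np.pi)])
p2 = np.array([0.5, 0.5])

ts = np.linspace(T, T + Delta, 301)                 # Simpson grid

def K_TD_psi(n, t, n_quad=2000):
    # K^{T,Delta}[psi_n^{T,Delta}](t) = int_0^T (t-s)^(H-1/2) psi_n^{T,Delta}(s) ds
    s = (np.arange(n_quad) + 0.5) * (T / n_quad)
    basis = np.sqrt(2.0 / (T + Delta)) * np.cos(0.5 * (2 * n - 1) * np.pi * s / (T + Delta))
    return float(np.sum((t - s) ** (H - 0.5) * basis) * (T / n_quad))

weights = np.array([[K_TD_psi(n, t) for t in ts] for n in (1, 2)])

# deterministic drift (gamma^2/2)(int_0^{t-T} K^2 - int_0^t K^2), with int_0^a s^{2H-1} ds = a^{2H}/(2H)
corr = 0.5 * gamma_ ** 2 * ((ts - T) ** (2 * H) - ts ** (2 * H)) / (2 * H)

price = 0.0
for idx in itertools.product(range(d[0]), range(d[1])):
    Z_hat = weights[0] * x2[idx[0]] + weights[1] * x2[idx[1]]
    vix2 = simpson(v0 * np.exp(gamma_ * Z_hat + corr), x=ts) / Delta
    price += np.sqrt(vix2) * p2[idx[0]] * p2[idx[1]]
```

Since the drift term compensates the (full) variance while the quantizer carries only part of it, the resulting approximation must lie strictly below √v₀ = 0.2 here, which gives a cheap sanity check.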

4.6. Quantization error of VIX derivatives

The following L²-error estimate is a consequence of assumption 4.3 (B); its proof is omitted, being analogous to that of proposition 3.6:

Proposition 4.10

Under assumption 4.3, for any N ≥ 1, there exist m_T(N) ∈ ℕ and C > 0 such that

E[‖Ẑ^{T,Δ,d_{T,N}} − Z^{T,Δ}‖²_{L²[T,T+Δ]}]^{1/2} ≤ C log(N)^{−H},

for d_{T,N} ∈ D_{m_T(N)}^N and, for each n = 1,…,m_T(N),

d_{T,N}(n) = ⌊N^{1/m_T(N)} n^{−(H+1/2)} (m_T(N)!)^{(2H+1)/(2m_T(N))}⌋.

Furthermore m_T(N) = O(log(N)).

As a consequence, we have the following error quantification for European options on the VIX:

Theorem 4.11

Let F : ℝ → ℝ be a globally Lipschitz-continuous function and d ∈ ℕ^m for some m ∈ ℕ. There exists K > 0 such that

(15)   |E[F(VIX_T)] − E[F(VIX̂_T^d)]| ≤ K E[‖Z^{T,Δ} − Ẑ^{T,Δ,d}‖²_{L²[T,T+Δ]}]^{1/2}.

Furthermore, for any N ≥ 1, there exist m_T(N) ∈ ℕ and C > 0 such that, with d_{T,N} ∈ D_{m_T(N)}^N,

(16)   |E[F(VIX_T)] − E[F(VIX̂_T^{d_{T,N}})]| ≤ C log(N)^{−H}.

The upper bound in (16) is an immediate consequence of (15) and proposition 4.10. The proof of (15) is much more involved and is postponed to appendix A.5.

Remark 4.12

  • When F(x) = x, we obtain the price of VIX Futures and the quantization error |P_T − P̂_T^d| ≤ K E[‖Z^{T,Δ} − Ẑ^{T,Δ,d}‖²_{L²[T,T+Δ]}]^{1/2}; for any N ≥ 1, theorem 4.11 then yields the existence of m_T(N) ∈ ℕ and C > 0 such that |P_T − P̂_T^{d_{T,N}}| ≤ C log(N)^{−H}.

  • Since the functions F(x) := (x−K)⁺ and F(x) := (K−x)⁺ are globally Lipschitz continuous, the same bounds apply to European Call and Put options on the VIX.

5. Numerical results for the RL case

We now test the quality of the quantization on the pricing of VIX Futures in the standard rough Bergomi model, considering the RL kernel in remark 4.1.

5.1. Practical considerations for m and d

Proposition 3.6 provides, for any fixed N ∈ ℕ, some indications on m(N) and d_N ∈ D_m^N (see (8)) for which the rate of convergence of the quantization error is log(N)^{−H}. We now present a numerical algorithm to compute the optimal parameters. For a given number of trajectories N ∈ ℕ, the problem is equivalent to finding m ∈ ℕ and d ∈ D_m^N such that E[‖Z^H − Ẑ^{H,d}‖²_{L²[0,1]}] is minimal. Starting from (A1) and adding and subtracting the quantity Σ_{n=1}^m ∫₀¹ K_H[ψ_n](t)² dt, we obtain

(17)   E[‖Z^H − Ẑ^{H,d}‖²_{L²[0,1]}] = Σ_{n=1}^m (∫₀¹ K_H[ψ_n](t)² dt) [ε_{d(n)}(ξ_n)]² + Σ_{k≥m+1} ∫₀¹ K_H[ψ_k](t)² dt
       = Σ_{n=1}^m (∫₀¹ K_H[ψ_n](t)² dt) ([ε_{d(n)}(ξ_n)]² − 1) + Σ_{k≥1} ∫₀¹ K_H[ψ_k](t)² dt,

where ε_{d(n)}(ξ_n) denotes the optimal quadratic quantization error for the quadratic quantizer of order d(n) of the standard Gaussian random variable ξ_n (see appendix A.1 for more details). Note that the last term on the right-hand side of (17) depends neither on m nor on d. We therefore simply look for m and d minimizing

A(m,d) := Σ_{n=1}^m (∫₀¹ K_H[ψ_n](t)² dt) ([ε_{d(n)}(ξ_n)]² − 1).

This can be easily implemented: the functions K_H[ψ_n] can be obtained numerically from the hypergeometric function, and the quadratic errors ε_{d(n)}(ξ_n) are available at www.quantize.maths-fi.com/gaussian_database for d(n) ∈ {1,…,5999}. The algorithm therefore reads as follows:

  1. fix m;

  2. minimize A(m,d) over d ∈ D_m^N and call the minimum Ã(m);

  3. minimize Ã(m) over m ∈ ℕ.
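The three steps above can be sketched as follows. This is only an illustrative implementation, not the authors' code: for simplicity it uses the Brownian case H = 1/2, where ∫_0^1 K[ψ_n](t)² dt = λ_n^{−2} with λ_n = (n − 1/2)π, replaces the tabulated errors from www.quantize.maths-fi.com by the Pierce-type surrogate ϵ_d ≈ c/d, and solves step 2 by a greedy heuristic rather than exact minimization.

```python
import math

def quantizer_surrogate_error(d, c=0.92):
    # Surrogate for the optimal L2 quantization error of N(0,1) with d points;
    # the true values are tabulated at quantize.maths-fi.com.  eps_1 = 1 exactly
    # (the 1-point quantizer is identically zero), and eps_d ~ c/d by the
    # Extended Pierce lemma; c = 0.92 is an illustrative constant.
    return 1.0 if d == 1 else c / d

def weight(n, H=0.5):
    # int_0^1 K_H[psi_n](t)^2 dt; exact for H = 1/2 (Brownian motion), where
    # K[psi_n](t) = sqrt(2) sin(lambda_n t) / lambda_n, lambda_n = (n - 1/2) pi.
    lam = (n - 0.5) * math.pi
    return lam ** (-2)

def A(d):
    # A(m, d) = sum_n w_n * (eps_{d(n)}^2 - 1); the constant tail sum is dropped.
    return sum(weight(n + 1) * (quantizer_surrogate_error(dn) ** 2 - 1)
               for n, dn in enumerate(d))

def optimize(N, m_max=10):
    best = None
    for m in range(1, m_max + 1):                    # step 1: fix m
        d = [1] * m                                  # product of ones, trivially <= N
        while True:                                  # step 2: greedy descent on A(m, d)
            candidates = []
            for i in range(m):
                trial = d.copy()
                trial[i] += 1
                if math.prod(trial) <= N:            # constraint defining D_m^N
                    candidates.append((A(trial), trial))
            if not candidates or min(candidates)[0] >= A(d):
                break
            d = min(candidates)[1]
        if best is None or A(d) < A(best[1]):        # step 3: minimize over m
            best = (m, d)
    return best

m_opt, d_opt = optimize(N=100)
```

With the hypergeometric weights and the tabulated errors plugged in, the same loop would produce candidates for the parameters reported in table 1.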

The results of the algorithm for some reference values of N ∈ ℕ are reported in table 1, where N̄_traj := Σ_{i=1}^{m̄(N)} d̄_N(i) represents the number of trajectories actually computed in the optimal case. In table 2, we compute the rate-optimal parameters derived in proposition 3.6: the column ‘Relative error’ contains the normalized difference between the L²-quantization error obtained with the optimal choice of m̄(N) and d̄_N in table 1 and the L²-quantization error obtained with m(N) and d_N from the corresponding line of the table, namely |‖Z^H − Ẑ^{H,d̄_N}‖_{L²[0,1]} − ‖Z^H − Ẑ^{H,d_N}‖_{L²[0,1]}| / ‖Z^H − Ẑ^{H,d̄_N}‖_{L²[0,1]}. In the column N_traj := Σ_{i=1}^{m(N)} d_N(i) we display the number of trajectories actually computed in the rate-optimal case. The optimal quadratic quantization of a standard Gaussian of order 1 is the random variable identically equal to zero, so whenever d(i)=1 the corresponding term has no influence on the representation.

Table 1. Optimal parameters.

Table 2. Rate-optimal parameters.

5.2. The functional quantizers

The computations in sections 2 and 3 for the RL process, respectively the ones in sections 4.3 and 4.4 for ZH,T, provide a way to obtain the functional quantizers of the processes.

5.2.1. Quantizers of the RL process

For the RL process, definition 3.4 shows that its quantizer is a weighted Cartesian product of grids of one-dimensional standard Gaussian random variables. The time-dependent weights K_H[ψ_n](·) are computed using (Equation7), and, for a fixed number of trajectories N, suitable m̄(N) and d̄_N ∈ D_{m̄(N)}^N are chosen according to the algorithm in section 5.1. Not surprisingly, figure 1 shows that as the paths of the process get smoother (H increases), the trajectories fluctuate less and shrink towards zero. For H = 0.5, where the RL process reduces to standard Brownian motion, we recover the well-known quantizer from Pagès (Citation2007, figures 7–8). This is consistent since in that case K_H[ψ_n](t) = √2 sin(t λ_n)/λ_n, so that the expansion of Y^H coincides with the Karhunen–Loève expansion of Brownian motion (Pagès Citation2007, section 7.1).

Figure 1. Product functional quantizations of the RL process with N-quantizers, for H ∈ {0.1,0.25,0.5}, for N = 10 and N = 100.

5.2.2. Quantizers of ZH,T

A quantizer for Z^{H,T} is defined analogously to that of Z^H, using definition 3.4. The weights K_H^T[ψ_n](·) in the summation are available in closed form, as shown in remark 4.6. It is therefore possible to compute the N-product functional quantizer for any N ∈ ℕ, as figure 2 displays.

Figure 2. Product functional quantization of Z^{H,T} via N-quantizers, with H = 0.1, T = 0.7, for N ∈ {10,100}.

5.3. Pricing and comparison with Monte Carlo

In this section we present and discuss plots of estimated prices of derivatives on the VIX and on realized variance. We set the parameter values H = 0.1 and ν = 1.18778, and investigate three different initial forward variance curves v0(·), as in Jacquier et al. (Citation2018a):

  1. Scenario 1. v0(t) = 0.234²;

  2. Scenario 2. v0(t) = 0.234²(1+t)²;

  3. Scenario 3. v0(t) = 0.234²√(1+t).

The choice of ν is a consequence of the choice η = 1.9, consistently with Bennedsen et al. (Citation2017), and of the relationship ν = η√(2H)/(2C_H). In all these cases, v0 is a non-decreasing function of time, whose value at zero is close to the square of the reference value 0.25.
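As a sanity check, the relationship above can be verified numerically (a minimal sketch; the function names are ours):

```python
import math

def C_H(H):
    # C_H = sqrt( 2H * Gamma(3/2 - H) / (Gamma(H + 1/2) * Gamma(2 - 2H)) )
    return math.sqrt(2 * H * math.gamma(1.5 - H)
                     / (math.gamma(H + 0.5) * math.gamma(2 - 2 * H)))

def nu_from_eta(eta, H):
    # nu = eta * sqrt(2H) / (2 * C_H)
    return eta * math.sqrt(2 * H) / (2 * C_H(H))

nu = nu_from_eta(1.9, 0.1)  # ~ 1.18778, matching the value used in the text
```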

5.3.1. VIX Futures pricing

One of the most recent and effective ways to compute the price of VIX Futures is a Monte Carlo simulation method based on the Cholesky decomposition, for which we refer to Jacquier et al. (Citation2018a, section 3.3.2). It can be considered a good approximation of the true price when the number M of simulated paths is large. In fact, in Jacquier et al. (Citation2018a) the authors tested three simulation-based methods (hybrid scheme + forward Euler, truncated Cholesky, SVD decomposition) and ‘all three methods seem to approximate the prices similarly well’. We thus consider the truncated Cholesky approach as a benchmark and take M = 10^6 trajectories and 300 equidistant points for the time grid.
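For intuition, here is a minimal sketch of a Cholesky-type simulation of Z_t^H = ∫_0^t (t−u)^{H−1/2} dW_u (ours, not the truncated scheme of Jacquier et al.): the kernel is discretized on a fixed quadrature grid so that the approximate covariance is a Gram matrix, hence positive semi-definite by construction. The crude midpoint treatment of the kernel singularity is a simplification (the hybrid scheme of Bennedsen et al. refines exactly this).

```python
import numpy as np

def rl_factor(grid, H, n_quad=4000):
    # Discretize Z_t = int_0^t (t-u)^{H-1/2} dW_u on a fixed quadrature grid:
    # F[i, k] ~ (t_i - u_k)_+^{H-1/2} * sqrt(du), so that Cov ~ F @ F.T is a
    # Gram matrix, positive semi-definite by construction.  Crude near u = t.
    u = (np.arange(n_quad) + 0.5) / n_quad * grid[-1]
    du = grid[-1] / n_quad
    F = np.where(grid[:, None] > u[None, :],
                 np.clip(grid[:, None] - u[None, :], 1e-12, None) ** (H - 0.5),
                 0.0) * np.sqrt(du)
    return F

def simulate_rl(grid, H, n_paths, rng):
    F = rl_factor(grid, H)
    C = F @ F.T                                             # covariance of (Z_{t_1}, ..., Z_{t_n})
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(grid)))   # O(n^3), factorized once
    return rng.standard_normal((n_paths, len(grid))) @ L.T  # O(n^2) per path

grid = np.linspace(0.05, 1.0, 20)
paths = simulate_rl(grid, H=0.1, n_paths=1000, rng=np.random.default_rng(0))
```

Note the O(n²) per-path cost of the Cholesky multiplication, consistent with the complexity remark later in this section.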

In figure 3, we plot the VIX Futures prices as a function of the maturity T, where T ranges over {1,2,3,6,9,12} months (consistently with actual quotations), on the left, and the corresponding relative error with respect to the Monte Carlo benchmark on the right. It is clear that the quantization approximates the benchmark from below and that the accuracy increases with the number of trajectories.

Figure 3. VIX Futures prices (left) and relative error (right) computed with quantization and with Monte Carlo as a function of the maturity T, for different numbers of trajectories, for each forward variance curve scenario.

We highlight that the quantization scheme for VIX Futures can be sped up considerably by computing and storing the quantized trajectories of ZH,T,Δ in advance, so that only the integrations and summations in remark 4.9, which are extremely fast, remain to be computed.

Furthermore, the grid organization time itself is not significant. In table 3, we display the grid organization times (in seconds) as a function of the maturity (rows), expressed in months, and of the number of trajectories (columns). From this table, one may deduce that the organization of the grids is suitable to be performed once per day (say, every morning), as it should be for actual pricing purposes. It is interesting to note that the estimations obtained with quantization (which is an exact method) are consistent, in that they mimic the trend of the benchmark prices over time even for very small values of N. By contrast, as a consequence of the variance of the estimations, the Monte Carlo prices are almost useless for small values of M. Moreover, improving the Monte Carlo estimations requires increasing the number of points in the time grid, with a clear impact on computational time, while this is not the case with quantization since the trajectories in the quantizers are smooth. Indeed, the trajectories in the quantizers are not only smooth but also almost constant over time, hence reducing the number of time steps needed to reach the desired level of accuracy. Note also the complexity related to discretization: a quadrature formula over n points has cost O(n), while simulation with a Cholesky method over the same grid has cost O(n²). Finally, our quantization method does not require much RAM: all the simulations performed with quantization can be easily run on a personal laptop,Footnote1 while this is not the case for the Monte Carlo scheme proposed here.Footnote2 For the sake of completeness, we also recall that combining Monte Carlo pricing of VIX futures/options with an efficient control variate speeds up the computations significantly (Horvath et al. Citation2020).

Table 3. Grid organization times (in seconds) as a function of the maturity (rows, in months) and of the number of trajectories (columns).

In figure 4, we compare the behavior of the empirical error with the theoretically predicted one. We display only a couple of maturities for the first scenario, since the other plots are very similar. The figures clearly show that the order of convergence of the empirical error is better than the theoretically predicted one: in particular, we expect it to be O(log(N)^{−1}).

Figure 4. Log–log (natural logarithm) plot of the empirical absolute error against the theoretically predicted one for scenario 1, with T ∈ {1,12} months.

5.3.2. VIX options pricing

To complete the discussion on VIX options pricing, we present in figure 5 the approximation of the prices of ATM Call options on the VIX obtained via quantization, as a function of the maturity T and for different numbers of trajectories, against the same prices computed via Monte Carlo simulation with M = 10^6 trajectories and 300 equidistant points for the time grid, as a benchmark. Each plot represents a different scenario for the initial forward variance curve. For all scenarios, as the number N of trajectories grows, the prices in figure 5 clearly converge, and the limiting curve is increasing in the maturity, as it should be.

Figure 5. Prices of ATM Call Options on the VIX via quantization.

5.3.3. Pricing of continuously monitored options on realized variance

Product functional quantization of the process (Z_t^H)_{t∈[0,T]} can be exploited for (meaningful) pricing purposes, too. We first price variance swaps, whose price is given by S_T := E[(1/T) ∫_0^T V_t dt | F_0]. Recall that, in the rough Bergomi model, V_t = v0(t) exp(2ν C_H Z_t^H − ν² C_H² t^{2H}/H), where C_H = sqrt(2H Γ(3/2−H)/(Γ(H+1/2) Γ(2−2H))), ν>0 is an endogenous constant and v0(·) is the initial forward variance curve. Thus, exploiting the fact that, for any fixed t ∈ [0,T], Z_t^H is a centered Gaussian random variable with variance ∫_0^t (t−s)^{2H−1} ds = t^{2H}/(2H), the quantity S_T can be computed explicitly: S_T = (1/T) ∫_0^T v0(t) dt. This is particularly handy and provides us with a simple benchmark. The price S_T is then approximated via quantization through Ŝ_T^d = Σ_{i ∈ I^d} ((1/T) ∫_0^T v0(t) exp(2ν C_H Σ_{n=1}^m K_H[ψ_n](t) x_{i_n}^{d(n)} − ν² C_H² t^{2H}/H) dt) Π_{n=1}^m P(ξ_n ∈ C_{i_n}(Γ_{d(n)})). Numerical results are presented in figure 6. On the left-hand side, we display a table with the approximations (depending on N, the number of trajectories) of the price of a swap on the realized variance in Scenario 1, for T = 1, with true value v0 = 0.234²; on the right-hand side, a log–log (natural logarithm) plot of the error against the function c log(N)^{−H}, for a suitable positive constant c. For variance swaps, the error does not perform very well: it is indeed very close to the upper bound c log(N)^{−H} computed theoretically. One possible theoretical explanation for this behavior lies in the difference between strong and weak error rates. Weak and strong errors do not necessarily share the same order of convergence, the weak error being faster in general; see Bayer et al. (Citation2021, Citation2022) and Gassiat (Citation2022) for recent developments on the topic in the rough volatility framework. For pricing purposes, we are interested in weak error rates.
Indeed, the pricing error should in principle take the form E[f(Z^H)] − E[f(Ẑ^H)], where Ẑ^H is the process used to approximate the original Z^H and f is a functional coming from the payoff, which we can interpret as a test function. The functional f thus has a smoothing effect. On the other hand, the upper bound for the quantization error computed above is a strong error rate. This theoretical discrepancy explains the findings when pricing VIX Futures and other options on the VIX: the empirical error seems to converge with order O(log(N)^{−1}), while the predicted order is O(log(N)^{−H}). The different empirical rates seen in figure 4 for VIX Futures (roughly O(log(N)^{−1})) and in figure 6 for variance swaps (much closer to O(log(N)^{−H})) could also be related to the different degrees of pathwise regularity of the processes Z and Z^T. While t ↦ Z_t = ∫_0^t K(t−s) dW_s is a.s. (H−ϵ)-Hölder continuous, for fixed T the trajectories t ↦ Z_t^T = ∫_0^T K(t−s) dW_s of Z^T are much smoother for t ∈ (T, T+Δ) bounded away from T. When pricing VIX derivatives, we are quantizing an almost everywhere smooth Gaussian process (hence an error rate of order log(N)^{−1}), while when pricing derivatives on realized variance, we are applying quantization to a rough Gaussian process (hence an error rate of order O(log(N)^{−H})), resulting in deteriorated accuracy for the prices of realized-volatility derivatives such as the variance swaps in figure 6.

Figure 6. Prices and errors for variance swaps.
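The closed-form benchmark S_T = (1/T)∫_0^T v0(t) dt is elementary to evaluate for the three scenarios of section 5.3, taking T = 1 (the antiderivatives below are ours, derived from the stated curves):

```python
# Exact variance-swap prices S_1 = int_0^1 v0(t) dt for the three
# forward-variance scenarios of section 5.3.
v0_sq = 0.234 ** 2

# Scenario 1: v0(t) = 0.234^2              -> S_1 = 0.234^2
s1 = v0_sq
# Scenario 2: v0(t) = 0.234^2 (1+t)^2      -> S_1 = 0.234^2 * (2^3 - 1)/3
s2 = v0_sq * (2 ** 3 - 1) / 3
# Scenario 3: v0(t) = 0.234^2 sqrt(1+t)    -> S_1 = 0.234^2 * (2/3) * (2^1.5 - 1)
s3 = v0_sq * 2 / 3 * (2 ** 1.5 - 1)
```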

Furthermore, it can easily be shown that, for any d ∈ D_m^N and any m, N ∈ ℕ with m < N, Ŝ_T^d always provides a lower bound for the true price S_T. Indeed, since the quantizers Ẑ^{H,d} of the process Z^H are stationary (cf. proposition 3.12), an application of remark 3.9 to the convex function f(x) = exp(2ν C_H x), together with the positivity of v0(t) exp(−ν² C_H² t^{2H}/H) for any t ∈ [0,T], yields Ŝ_T^d = E[(1/T) ∫_0^T v0(t) exp(−ν² C_H² t^{2H}/H) exp(2ν C_H Ẑ_t^{H,d}) dt | F_0] = (1/T) ∫_0^T v0(t) exp(−ν² C_H² t^{2H}/H) E_0[exp(2ν C_H Ẑ_t^{H,d})] dt ≤ (1/T) ∫_0^T v0(t) exp(−ν² C_H² t^{2H}/H) E_0[exp(2ν C_H Z_t^H)] dt = S_T. To complete this section, we plot in figure 7 approximated prices of European Call options on the realized variance via quantization with N ∈ {10^2,10^3,10^4,10^5,10^6} trajectories, and via Monte Carlo with M = 10^6 trajectories as a benchmark. In order to take advantage of the trajectories obtained, we compute the price of a realized-variance Call option with strike K and maturity T = 1 as C(K,T) = E[((1/T) ∫_0^T V_t dt − K)_+ | F_0], and we approximate it via quantization through Ĉ^d(K,T) = Σ_{i ∈ I^d} ((1/T) ∫_0^T v0(t) exp(2ν C_H Σ_{n=1}^m K_H[ψ_n](t) x_{i_n}^{d(n)} − ν² C_H² t^{2H}/H) dt − K)_+ Π_{n=1}^m P(ξ_n ∈ C_{i_n}(Γ_{d(n)})). The three plots in figure 7 display the behavior of the price of a European Call on the realized variance as a function of the strike K (close to the ATM value) for the three scenarios considered above.
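A self-contained sketch of the quantized estimator Ĉ^d(K,T) (illustrative only, not the authors' code): we take H = 1/2, where the weights K[ψ_n](t) = √2 sin(λ_n t)/λ_n are known in closed form (section 5.2.1), and we build stationary one-dimensional Gaussian quantizers by a Lloyd fixed-point iteration instead of loading the tabulated optimal grids; for H < 1/2 one would plug in the hypergeometric weights instead.

```python
import itertools
import math

SQRT2PI = math.sqrt(2 * math.pi)

def phi(x):
    return math.exp(-x * x / 2) / SQRT2PI

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def lloyd_gaussian(d, iters=500):
    # Lloyd fixed-point iteration for a stationary d-point quantizer of N(0,1):
    # cells are delimited by midpoints, and each point moves to its cell mean
    # E[xi | cell] = (phi(a) - phi(b)) / (Phi(b) - Phi(a)).
    x = [-2 + 4 * (i + 0.5) / d for i in range(d)]
    for _ in range(iters):
        b = [-math.inf] + [(x[i] + x[i + 1]) / 2 for i in range(d - 1)] + [math.inf]
        p = [Phi(b[i + 1]) - Phi(b[i]) for i in range(d)]
        x = [(phi(b[i]) - phi(b[i + 1])) / p[i] for i in range(d)]
    b = [-math.inf] + [(x[i] + x[i + 1]) / 2 for i in range(d - 1)] + [math.inf]
    return x, [Phi(b[i + 1]) - Phi(b[i]) for i in range(d)]

def call_on_realized_variance(K, d, nu=1.18778, H=0.5, v0=0.234 ** 2, n_t=64):
    # Quantized price: sum over the product grid of payoff(path) times the
    # product of the cell probabilities (midpoint rule in time).
    m = len(d)
    CH = math.sqrt(2 * H * math.gamma(1.5 - H)
                   / (math.gamma(H + 0.5) * math.gamma(2 - 2 * H)))
    lam = [(n + 0.5) * math.pi for n in range(m)]          # lambda_n = (n - 1/2) pi
    quantizers = [lloyd_gaussian(dn) for dn in d]
    ts = [(k + 0.5) / n_t for k in range(n_t)]
    price = 0.0
    for idx in itertools.product(*(range(dn) for dn in d)):
        prob = math.prod(quantizers[n][1][idx[n]] for n in range(m))
        path_avg = sum(
            v0 * math.exp(2 * nu * CH * sum(math.sqrt(2) * math.sin(lam[n] * t) / lam[n]
                                            * quantizers[n][0][idx[n]] for n in range(m))
                          - nu * nu * CH * CH * t ** (2 * H) / H)
            for t in ts) / n_t
        price += prob * max(path_avg - K, 0.0)
    return price

price_atm = call_on_realized_variance(K=0.234 ** 2, d=(3, 2, 2))
```

Since Lloyd's iteration yields stationary quantizers, the K = 0 value reproduces the lower-bound property Ŝ_T^d ≤ S_T discussed above.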

Figure 7. Prices of European Call Option on realized variance computed via Monte Carlo with M = 10^6 trajectories and via quantization with N ∈ {10^2,10^3,10^4,10^5,10^6} trajectories, as a function of K.

5.3.4. Quantization and MC comparison

To make a fair comparison between quantization and Monte Carlo simulation, we display, for each methodology, the computational work needed to reach a given error tolerance in the pricing of VIX Futures. The plots in figure 8 should be read as follows. First, for each M, N ∈ {10^2,10^3,10^4,10^5,10^6}, we computed the corresponding pricing errors ϵ_MC(M) := |Price_MC(M) − RefPrice| and ϵ_Q(N) := |Price_Q(N) − RefPrice|, where Price_MC(M) is the Monte Carlo price obtained via truncated Cholesky with M trajectories, Price_Q(N) is the price computed via quantization with N trajectories, and RefPrice comes from the lower bound in equation (3.4) of Jacquier et al. (Citation2018a), together with the associated computational times in seconds, t_MC(M) and t_Q(N), for Monte Carlo simulation and quantization respectively. Each point in the plot is then associated either with a value of M for Monte Carlo (the circles in figure 8) or of N for quantization (the triangles in figure 8); its x-coordinate gives the absolute value of the associated pricing error, while its y-coordinate represents the associated computational cost in seconds.

Figure 8. Computational costs for quantization versus Monte Carlo for Scenario 1, with T = 1 month (left-hand side) and T = 12 months (right-hand side). The number of trajectories, M for Monte Carlo and N for quantization, corresponding to a specific dot is displayed above it.

These plots lead to the following observations:

  • For quantization, which is an exact method, the error decreases monotonically in the number of trajectories.

  • When a small number of trajectories is considered, quantization provides a lower error with respect to Monte Carlo, at a comparable cost.

  • For large numbers of trajectories, Monte Carlo outperforms quantization both in accuracy and in computational time.

To conclude, quantization can always be run with an arbitrary number of trajectories, and for N ∈ {10^2,10^3,10^4} it leads to a lower error than Monte Carlo at a comparable computational cost, as visible in figure 8. This makes quantization particularly suitable for standard machines, i.e. laptops with at most 16 GB of RAM.

6. Conclusion

In this paper we provide, on the theoretical side, a precise and detailed result on the convergence of product functional quantizers of Gaussian Volterra processes, showing that the L²-error is of order log(N)^{−H}, with N the number of trajectories and H the regularity index.

Furthermore, we explicitly characterize the rate-optimal parameters, m(N) and d_N, and compare them with the corresponding optimal parameters, m̄(N) and d̄_N, computed numerically.

In the rough Bergomi model, we apply product functional quantization to the pricing of VIX options, with precise rates of convergence, and of options on realized variance, comparing those—whenever possible—to standard Monte Carlo methods.

The thorough numerical analysis carried out in the paper shows that, despite the conceptual promise of functional quantization, the results are mixed: while those on the VIX are very promising, other types of path-dependent options seem to require machine resources well beyond the requirements of standard Monte Carlo schemes, as shown precisely in the case of variance swaps. Although product functional quantization is an exact method, the analysis provided here does not therefore promise a bright future in the context of rough volatility at large. It may nevertheless be of practical interest when machine resources are limited; indeed, the results for VIX Futures pricing are strongly encouraging in this respect. Functional quantization for rough volatility can also be salvaged as a control variate to reduce the variance of classical Monte Carlo simulations.

Acknowledgments

The authors would like to thank Andrea Pallavicini and Emanuela Rosazza Gianin for fruitful discussions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The second author was supported by the Grant BIRD190200 “Term Structure Dynamics in Interest Rate and Energy Markets: Modelling and Numerics” (Università degli Studi di Padova). The authors acknowledge financial support from the EPSRC grant EP/T032146/1.

Notes

1 The personal computer used to run the quantization codes has the following technical specifications: RAM: 8.00 GB, SSD memory: 512 GB, Processor: AMD Ryzen 7 4700U with Radeon Graphics 2.00 GHz.

2 The computer used to run the Monte Carlo codes is a virtual machine (OpenStack/Nova/KVM/Qemu, www.openstack.org) with the following technical specifications: RAM: 32.00 GB, CPU: 8 virtual cores, Hypervisor CPU: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, RAM 128GB, Storage: CEPH cluster (www.ceph.com).

References

  • Abi Jaber, E. and El Euch, O., Multifactor approximation of rough volatility models. SIAM J. Financ. Math., 2019, 10(2), 309–349.
  • Adler, R.J. and Taylor, J.E., Random Fields and Geometry. Springer Monographs in Mathematics, 2007 (Springer-Verlag: New York).
  • Alòs, E., León, J.A. and Vives, J., On the short-time behavior of the implied volatility for jump-diffusion models with stochastic volatility. Finance Stoch., 2007, 11(4), 571–589.
  • Bayer, C., Friz, P.K. and Gatheral, J., Pricing under rough volatility. Quant. Finance, 2016, 16(6), 887–904.
  • Bayer, C., Hammouda, C.B. and Tempone, R., Hierarchical adaptive sparse grids and quasi Monte Carlo for option pricing under the rough Bergomi model. Quant. Finance, 2020, 20(9), 1457–1473.
  • Bayer, C., Hall, E.J. and Tempone, R., Weak error rates for option pricing under linear rough volatility, arXiv preprint, 2021. https://arxiv.org/abs/2009.01219.
  • Bayer, C., Fukasawa, M. and Nakahara, S., On the weak convergence rate in the discretization of rough volatility models, ArXiV preprint, 2022. https://arxiv.org/abs/2203.02943.
  • Bennedsen, M., Lunde, A. and Pakkanen, M.S., Hybrid scheme for Brownian semistationary processes. Finance Stoch., 2017, 21, 931–965.
  • Bergomi, L., Smile dynamics II, Risk, 2005, pp. 67–73.
  • Carr, P.P. and Madan, D.B., Joint modeling of VIX and SPX options at a single and common maturity with risk management applications. IIE Trans., 2014, 46(11), 1125–1131.
  • Chen, W., Langrené, N., Loeper, G. and Zhu, Q., Markovian approximation of the rough Bergomi model for Monte Carlo option pricing. Mathematics, 2021, 9(5), 528.
  • Chow, Y.S. and Teicher, H., Probability Theory. Springer Texts in Statistics, 1997 (Springer-Verlag: New York).
  • Corlay, S., Quelques aspects de la quantification optimale, et applications en finance (in English, with French summary). PhD Thesis, Université Pierre et Marie Curie, 2011.
  • Fukasawa, M., Asymptotic analysis for stochastic volatility: Martingale expansion. Finance Stoch., 2011, 15(4), 635–654.
  • Fukasawa, M., Volatility has to be rough. Quant. Finance, 2021, 21, 1–8.
  • Fukasawa, M., Takabatake, T. and Westphal, R., Consistent estimation for fractional stochastic volatility model under high-frequency asymptotics (Is Volatility Rough?). Math. Finance., 2022, 32, 1086–1132.
  • Gassiat, P., Weak error rates of numerical schemes for rough volatility, arXiv preprint, 2022. https://arxiv.org/abs/2203.09298.
  • Gatheral, J., Consistent Modelling of SPX and VIX Options. Presentation, 2008 (Bachelier Congress: London).
  • Gatheral, J., Jaisson, T. and Rosenbaum, M., Volatility is rough. Quant. Finance, 2018, 18(6), 933–949.
  • Gersho, A. and Gray, R.M., Vector Quantization and Signal Compression, 1992 (Kluwer Academic Publishers: New York).
  • Graf, S. and Luschgy, H., Foundations of Quantization for Probability Distributions. Lecture Notes in Mathematics, 1730, 2007 (Springer: Berlin Heidelberg).
  • Horvath, B., Jacquier, A. and Muguruza, A., Functional central limit theorems for rough volatility, 2019. arxiv.org/abs/1711.03078.
  • Horvath, B., Jacquier, A. and Tankov, P., Volatility options in rough volatility models. SIAM J. Financ. Math., 2020, 11(2), 437–469.
  • Huh, J., Jeon, J. and Kim, J.H., A scaled version of the double-mean-reverting model for VIX derivatives. Math. Financ. Econom., 2018, 12(4), 495–515.
  • Jacquier, A., Martini, C. and Muguruza, A., On VIX futures in the rough Bergomi model. Quant. Finance, 2018a, 18(1), 45–61.
  • Jacquier, A., Pakkanen, M.S. and Stone, H., Pathwise large deviations for the rough Bergomi model. J. Appl. Probab., 2018b, 55(4), 1078–1092.
  • Kallenberg, O., Foundations of Modern Probability. 2nd edn, Probability and Its Applications, 2002 (Springer-Verlag: New York).
  • Karp, D.B., Representations and inequalities for generalized Hypergeometric functions. J. Math. Sci. (N.Y.), 2015, 207, 885–897.
  • Kokholm, T. and Stisen, M., Joint pricing of VIX and SPX options with stochastic volatility and jump models. J. Risk Finance, 2015, 16(1), 27–48.
  • Luke, Y.L., The Special Functions and Their Approximations, Vol. 1, 1969 (Academic Press: New York and London).
  • Luschgy, H. and Pagès, G., Functional quantization of Gaussian processes. J. Funct. Anal., 2002, 196(2), 486–531.
  • Luschgy, H. and Pagès, G., High-resolution product quantization for Gaussian processes under sup-norm distortion. Bernoulli, 2007, 13(3), 653–671.
  • McCrickerd, R. and Pakkanen, M.S., Turbocharging Monte Carlo pricing for the rough Bergomi model. Quant. Finance, 2018, 18(11), 1877–1886.
  • Olver, F.W.J., Asymptotics and Special Functions, 2nd ed., 1997 (A.K. Peters/CRC Press: New York).
  • Pagès, G., Quadratic optimal functional quantization of stochastic processes and numerical applications. In Monte Carlo and Quasi-Monte Carlo Methods 2006, pp. 101–142, 2007 (Springer: Berlin Heidelberg).
  • Pagès, G. and Printems, J., Functional quantization for numerics with an application to option pricing. Monte Carlo Methods Appl., 2005, 11(4), 407–446.
  • Picard, J., Representation formulae for the fractional Brownian motion. In Séminaire de Probabilités XLIII, Lecture Notes in Mathematics, 2006, pp. 3–70, 2011 (Springer-Verlag: Berlin Heidelberg).
  • Sheppard, W.F., On the calculation of the most probable values of frequency-constants, for data arranged according to equidistant division of a scale. Proc. Lond. Math. Soc. (3), 1897, 1(1), 353–380.
  • Steele, J.M., The Cauchy-Schwarz Master-Class, 2004 (Cambridge University Press: New York).

Appendices

Appendix 1.

Proofs

A.1. Proof of proposition 3.6

Consider a fixed N1 and (m,d) for dDmN. We have (A1) E[ZZˆdL2[0,1]2]=E[n1K[ψn]()ξnn=1mK[ψn]()ξˆnd(n)L2[0,1]2]=E[n=1mK[ψn]()(ξnξˆnd(n))+km+1K[ψk]()ξkL2[0,1]2]=E[01|n=1mK[ψn](t)(ξnξˆnd(n))+km+1K[ψk](t)ξk|2dt]=01(n=1mK[ψn]2(t)E[|ξnξˆnd(n)|2]+km+1K[ψk]2(t))dt=01(n=1mK[ψn]2(t)ϵd(n)(ξn)2+km+1K[ψk]2(t))dt,(A1) using Fubini's Theorem and the fact that {ξn}n1 is a sequence of i.i.d. Gaussian and where ϵd(n)(ξn):=inf(α1,,αd(n))Rd(n)E[min1id(n)|ξnαi|2]. The Extended Pierce Lemma (Pagès Citation2007, theorem 1(b)) ensures that ϵd(n)(ξn)Ld(n) for a suitable positive constant L. Exploiting this error bound and the property (B) for K[ψn] in assumption 2.3, we obtain (A2) E[ZZˆdL2[0,1]2]=n=1m(01K[ψn]2(t)dt)ϵd(n)(ξn)2+km+101K[ψk]2(t)dtC22{n=1mn(2H+1)ϵd(n)(ξn)2+km+1k(2H+1)}C22{n=1mn(2H+1)L2d(n)2+km+1k(2H+1)}C~(n=1m1n2H+1d(n)2+km+1k(2H+1)),(A2) with C~=max{L2C22,C22}. Inspired by  Luschgy and Pagès (Citation2002, section 4.1), we now look for an ‘optimal’ choice of mN and dDmN. This reduces the error in approximating Z with a product quantization of the form in (Equation9). Define the optimal product functional quantization  ZˆN, of order N as the Zˆd which realizes the minimal error: E[ZZˆN,L2[0,1]2]=min{E[ZZˆdL2[0,1]2],mN,dDmN}. From (EquationA2), we deduce (A3) E[ZZˆN,L2[0,1]2]C~infmN{km+11k2H+1+inf{n=1m1n2H+1d(n)2,dDmN}}.(A3) For any fixed mN, we associate to the internal minimization problem the one we get by relaxing the hypothesis that d(n)N: I:=inf{n=1m1n2H+1z(n)2,{z(n)}n=1,,m(0,):n=1mz(n)N}. For this infimum, we derive a simple solution exploiting the arithmetic-geometric inequality using lemma A.3. Setting z~(n):=γN,mn(H+12), with γN,m:=N1m(j=1mj(2H+1))12m, n=1,,m, we get I=n=1m1n2H+1z~(n)2=N2mm(n=1mn(2H+1))1m, and note that the sequence {z~(n)} is decreasing. Since ultimately the vector d consists of integers, we use d~(n)=z~(n), n=1,,m. In fact, this choice guarantees that n=1md~(n)=n=1mz~(n)n=1mz~(n)=N. 
Furthermore, setting d~(j)=z~(j) for each j{1,,m}, we obtain d~(j)+1(j(2H+1))12=jH+12(z~(j)+1)jH+12z~(j)=jH+12N1mjH+12{n=1m1n2H+1}12m=N1m{n=1m1n2H+1}12m. Ordering the terms, we have (d~(j)+1)2N2m(n=1mn(2H+1))1mj(2H+1), for each j{1,,m}. From this we deduce the following inequality (notice that the left-hand side term is defined only if d~(1),,d~(m)>0): (A4) j=1mj(2H+1)d~(j)2j=1m(d~(j)+1d~(j))2N2m(n=1mn(2H+1))1m=N2m(n=1mn(2H+1))1mj=1m(d~(j)+1d~(j))24mN2m(n=1mn(2H+1))1m.(A4) Hence, we are able to make a first error estimation, placing in the internal minimization of the right-hand side of (EquationA3) the result of inequality in (EquationA4). (A5) E[ZZˆN,L2[0,1]2] C~inf{km+11k2H+1+4mN2m(n=1mn(2H+1))1m,mI(N)} Cinf{km+11k2H+1+mN2m(n=1mn(2H+1))1m,mI(N)},(A5) where C=4C~ and the set (A6) I(N):={mN:N2mm(2H+1)(n=1mn(2H+1))1m1},(A6) which represents all m's such that all d~(1),,d~(m) are positive integers. This is to avoid the case where i=1md~(i)N holds only because one of the factors is zero. In fact, for all n{1,,m}, d~(n)=z~(n) is a positive integer if and only if z~(n)1. Thanks to the monotonicity of {z(n)}n=1,,m, we only need to check that z~(m)=N1mm(H+12)(n=1mn(2H+1))12m1. First, let us show that I(N), defined in (EquationA6) for each N1, is a non-empty finite set with maximum given by m(N) of order log(N). We can rewrite it as I(N)={m1:amlog(N)}, where an=12log(j=1nn2H+1j2H+1). We can now verify that the sequence an is increasing in nN: anan+1j=1nlog(j(2H+1))nlog(n(2H+1))j=1n+1log(j(2H+1))(n+1)log((n+1)(2H+1))nlog(n(2H+1))log((n+1)(2H+1))(n+1)log((n+1)(2H+1))log(n(2H+1))log((n+1)(2H+1)), which is obviously true. Furthermore the sequence (an)n diverges to infinity since j=1nn(2H+1)j(2H+1)=n(2H+1)nj=1n1j(2H+1)n(2H+1)nj=2n1j(2H+1)n(2H+1)n1n(2H+1)(n1)n(2H+1). and H(0,12). We immediately deduce that I(N) is finite and, since {1}I(N), it is also non-empty. Hence I(N)={1,,m(N)}. Moreover, for all N1, am(N)log(N)<am(N)+1, which implies that m(N)=O(log(N)).

Now, the error estimation in (EquationA5) can be further simplified exploiting the fact that, for each mI(N), mN2m(n=1mn(2H+1))1m=mm(2H+1)(m(2H+1)N2m(n=1mn(2H+1))1m)1m2H. The last inequality is a consequence of the fact that (n=1mn(2H+1))1m1 by definition. Hence, (A7) E[ZZˆN,L2[0,1]2]Cinf{km+11k2H+1+m2H,mI(N)},(A7) for some suitable constant C>0.

Consider now the sequence {b_n}_{n∈ℕ} given by b_n = Σ_{k≥n+1} k^{−(2H+1)} + n^{−2H}. For n ≥ 1, b_{n+1} − b_n = Σ_{k≥n+2} k^{−(2H+1)} + (n+1)^{−2H} − [Σ_{k≥n+1} k^{−(2H+1)} + n^{−2H}] = −(n+1)^{−(2H+1)} + (n+1)^{−2H} − n^{−2H} ≤ 0, so that the sequence is decreasing and the infimum in (EquationA7) is attained at m = m(N). Therefore, E[‖Z − Ẑ^{N,*}‖²_{L²[0,1]}] ≤ C inf{Σ_{k≥m+1} k^{−(2H+1)} + m^{−2H}, m ∈ I(N)} = C (Σ_{k≥m(N)+1} k^{−(2H+1)} + m(N)^{−2H}) ≤ C ((2H)^{−1} m(N)^{−2H} + m(N)^{−2H}) ≤ 2C′ m(N)^{−2H} ≤ C′′ log(N)^{−2H}.
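The construction of m(N) and d̃ in this proof can be turned into a few lines of code (a sketch under the conventions above, with a_m = (1/2) Σ_{j=1}^m log(m^{2H+1}/j^{2H+1}) and z̃(n) = γ_{N,m} n^{−(H+1/2)}; the function name is ours):

```python
import math

def rate_optimal(N, H):
    # m(N) = max{ m : a_m <= log N }, with
    # a_m = (1/2) * (2H+1) * sum_{j=1}^m log(m / j).
    def a(m):
        return 0.5 * (2 * H + 1) * sum(math.log(m / j) for j in range(1, m + 1))
    m = 1
    while a(m + 1) <= math.log(N):
        m += 1
    # gamma chosen so that prod_n z_tilde(n) = N, i.e.
    # log(gamma) = ( log N + (H + 1/2) * sum_j log j ) / m.
    log_gamma = (math.log(N)
                 + 0.5 * (2 * H + 1) * sum(math.log(j) for j in range(1, m + 1))) / m
    d = [max(1, math.floor(math.exp(log_gamma) * n ** (-(H + 0.5))))
         for n in range(1, m + 1)]
    return m, d

m_star, d_star = rate_optimal(N=1000, H=0.1)
```

The returned d̃ is nonincreasing and satisfies ∏ d̃(n) ≤ N by construction, while m(N) grows logarithmically in N.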

A.2. Proof of Remark 2.5

This can be proved specializing the computations done in Luschgy and Pagès (Citation2007, page 656). Consider an arbitrary index n1. For all t,s[0,1], exploiting assumption 2.3, we have that, for any ρ[0,1], |K[ψn](t)K[ψn](s)|=|K[ψn](t)K[ψn](s)|ρ|K[ψn](t)K[ψn](s)|1ρ(supu,v[0,1],uv|K[ψn](u)K[ψn](v)||uv|H+12|ts|H+12)ρ×(2supt[0,1]K[ψn](t))1ρ(C1n)ρ(2C2n(H+12))1ρ|ts|ρ(H+12)=Cρnρ(H+32)(H+12)|ts|ρ(H+12), where Cρ:=C1ρ(2C2)1ρ<. Therefore (A8) [K[ψn]]ρ(H+12)=supts[0,1]|K[ψn](t)K[ψn](s)||ts|ρ(H+12)Cρnρ(H+32)(H+12).(A8) Notice that ρ(H+32)(H+12)<12 when ρ[0,HH+3/2] so that (EquationA8) implies n=1[K[ψn]]ρ(H+12)2Cρ2n=1n2ρ(H+32)2(H+12)Cρ2n=1n(1+ϵ)=K<. In particular, E[|YtYs|2]=n=1|K[ψn](t)K[ψn](s)|2n=1[K[ψn]]ρ(H+12)2|ts|2ρ(H+12)K|ts|2ρ(H+12). As noticed in Remark 2.2 the process Y is centered Gaussian. Hence, for each t,s[0,1] so is YtYs. Proposition A.2 therefore implies that, for any rN, E[|YtYs|2r]=E[|YtYs|2]r(2r1)!!K|ts|2rρ(H+12), where K=Kr(2r1)!!, yielding existence of a continuous version of Y since choosing rN such that 2rρ(H+12)>1, Kolmogorov continuity theorem (Kallenberg Citation2002, theorem 3.23) applies directly.

A.3. Proof of Lemma 4.5

Let H+:=H+12. Using Karp (Citation2015, corollary 1, equation (12)) (with ψ=b2+b1a>1/2), the identity 1F2(a,b1,b2,r)=Γ(b1)Γ(b2)Γ(a)π01G2,22,0([b1,b2],[a,12],u)cos(2ru)duu, holds for all r>0, where G denotes the Meijer-G function, generally defined through the so-called Mellin–Barnes type integral (Luke Citation1969, equation (1), section 5.2)) as Gp,qm,n([a1,,ap],[b1,,bq],z)=12πiLj=1mΓ(bjs)j=1nΓ(1aj+s)j=m+1qΓ(1bj+s)j=n+1pΓ(ajs)zsds. This representation holds if z0, 0mq and 0np, for integers m, n, p, q, and akbj1,2,3,, for k=1,2,,n and j=1,2,,m. The last constraint is set to prevent any pole of any Γ(bjs),j=1,2,,m, from coinciding with any pole of any Γ(1ak+s),k=1,2,,n. With a>0, b2=1+a and b1=12, since G2,22,0([12,a+1],[a,12],u)=ua, we can therefore write (A9) 01ua1cos(2ru)du=1a1F2(a;12,a+1;r).(A9) Similarly, using integration by parts and properties of generalized Hypergeometric functions, (A10) 01ua1sin(2ru)du=sin(2r)ara01ua12cos(2ru)du=sin(2r)ara(a+12)1F2(a+12;12,a+32;r)=2ra+121F2(a+12;32,a+32;r),(A10) where the last step follows from the definition of generalized sine function sin(z)=z0F1(32,14z2). Indeed, exploiting (Equation6), we have sin(2r)ara(a+12)1F2(a+12;12,a+32;r)=2ra0F1(32,r)ra(a+12)1F2(a+12;12,a+32;r)=2ra(a+12)[(a+12)0F1(32;r)121F2(a+12;12,a+32;r)]=2ra(a+12)[(a+12)k=0(r)kk!(3/2)k12k=0(a+1/2)kk!(1/2)k(a+3/2)k(r)k]=2ra(a+12)k=01k![(a+1/2)(3/2)k1/2(a+1/2)k(1/2)k(a+3/2)k](r)k=2ra(a+12)k=01k![a(a+1/2)k(3/2)k(a+3/2)k](r)k=2r(a+12)k=01k!(a+1/2)k(3/2)k(a+3/2)k(r)k=2r(a+12)1F2(a+12;32,a+32;r). Letting α:=H12, τ:=tT, and mapping v: = tu, w:=vt and y:=w2, we write (A11) 0T(tu)αeiπudu=eiπt(tT)tvαeiπvdv=eiπt[0tvαeiπvdv0τvαeiπvdv]=eiπt[t1+α01wαeiπwtdwτ1+α01wαeiπwτdw]=eiπt2[t1+α01yα12eiπtydyτ1+α01yα12eiπyτydy]=eiπt2[I(t)I(τ)],(A11) where I(z):=z1+α01vα12eiπzvdv.

We therefore write, for $z\in\{t,\tau\}$, using (A9)–(A10) with $2\sqrt r=\pi z$ and identifying $a-1=\frac{\alpha-1}{2}$,
\begin{align*}
I(z) &= z^{1+\alpha}\int_0^1 v^{\frac{\alpha-1}{2}}\,e^{-i\pi z\sqrt v}\,\mathrm dv = z^{1+\alpha}\int_0^1 v^{\frac{\alpha-1}{2}}\cos(\pi z\sqrt v)\,\mathrm dv - i\,z^{1+\alpha}\int_0^1 v^{\frac{\alpha-1}{2}}\sin(\pi z\sqrt v)\,\mathrm dv\\
&= \frac{2z^{1+\alpha}}{H^+}\,{}_1F_2\Big(\frac{H^+}{2};\frac12,1+\frac{H^+}{2};-r\Big) - i\,\frac{4\sqrt r\,z^{H^+}}{1+H^+}\,{}_1F_2\Big(\frac12+\frac{H^+}{2};\frac32,\frac32+\frac{H^+}{2};-r\Big)\\
&= \frac{z^{H^+}}{h_1}\,{}_1F_2\Big(h_1;\frac12,1+h_1;-\frac{\pi^2z^2}{4}\Big) - i\,\frac{\pi z^{1+H^+}}{h_2}\,{}_1F_2\Big(h_2;\frac32,1+h_2;-\frac{\pi^2z^2}{4}\Big),
\end{align*}
since $\alpha=H-\frac12=H^+-1$, $h_1=\frac{H^+}{2}$ and $h_2=\frac12+h_1$. Plugging these into (A11), we obtain
\begin{align*}
\int_0^T (t-u)^{\alpha}\,e^{i\pi u}\,\mathrm du &= \frac{e^{i\pi t}}{2}\big[I(t)-I(\tau)\big]\\
&= \frac{e^{i\pi t}}{2h_1}\Big[t^{H^+}\,{}_1F_2\Big(h_1;\frac12,1+h_1;-\frac{\pi^2t^2}{4}\Big) - \tau^{H^+}\,{}_1F_2\Big(h_1;\frac12,1+h_1;-\frac{\pi^2\tau^2}{4}\Big)\Big]\\
&\quad - \frac{i\pi\,e^{i\pi t}}{2h_2}\Big[t^{1+H^+}\,{}_1F_2\Big(h_2;\frac32,1+h_2;-\frac{\pi^2t^2}{4}\Big) - \tau^{1+H^+}\,{}_1F_2\Big(h_2;\frac32,1+h_2;-\frac{\pi^2\tau^2}{4}\Big)\Big]\\
&= e^{i\pi t}\Big[\zeta_{\frac12}(t,h_1) - \zeta_{\frac12}(\tau,h_1) - i\pi\big(\zeta_{\frac32}(t,h_2) - \zeta_{\frac32}(\tau,h_2)\big)\Big],
\end{align*}
where $\chi(z) := -\frac{\pi^2z^2}{4}$, and $\zeta_{\frac12}$ and $\zeta_{\frac32}$ are as defined in the lemma.

A.4. Proof of Lemma 4.4

We first prove (A). For each $n\in\mathbb N$ and all $t\in[T,1]$, recall that
\[
K_H^T[\psi_n](t) = \sqrt2\int_0^T (t-u)^{H-\frac12}\cos(u\lambda_n)\,\mathrm du = \sqrt2\int_{t-T}^{t} v^{H-\frac12}\cos\big((t-v)\lambda_n\big)\,\mathrm dv,
\]
with the change of variables $v=t-u$. Assume $T\le s<t\le 1$. Two situations are possible:

  • If $0\le s-T<t-T\le s<t\le 1$, we have
\begin{align*}
\big|K_H^T[\psi_n](t)-K_H^T[\psi_n](s)\big| &= \sqrt2\,\Big|\int_{t-T}^{t} v^{H-\frac12}\cos\big((t-v)\lambda_n\big)\,\mathrm dv - \int_{s-T}^{s} v^{H-\frac12}\cos\big((s-v)\lambda_n\big)\,\mathrm dv\Big|\\
&\le \sqrt2\bigg(\Big|\int_{t-T}^{s} v^{H-\frac12}\big(\cos((t-v)\lambda_n)-\cos((s-v)\lambda_n)\big)\,\mathrm dv\Big| + \Big|\int_{s}^{t} v^{H-\frac12}\cos((t-v)\lambda_n)\,\mathrm dv\Big|\\
&\qquad\quad + \Big|\int_{s-T}^{t-T} v^{H-\frac12}\cos((s-v)\lambda_n)\,\mathrm dv\Big|\bigg)\\
&\le \sqrt2\bigg(\int_{t-T}^{s} v^{H-\frac12}\,\big|\cos((t-v)\lambda_n)-\cos((s-v)\lambda_n)\big|\,\mathrm dv + \int_{s}^{t} v^{H-\frac12}\,\mathrm dv + \int_{s-T}^{t-T} v^{H-\frac12}\,\mathrm dv\bigg)\\
&\le \sqrt2\bigg(\int_{t-T}^{s} v^{H-\frac12}\,|t-s|\,\lambda_n\,\mathrm dv + K|t-s|^{H+\frac12} + K|t-s|^{H+\frac12}\bigg)\\
&\le \sqrt2\Big(|t-s|\,\lambda_n\,\big\|(\cdot)^{H-\frac12}\big\|_{L^1[0,1]} + 2K|t-s|^{H+\frac12}\Big) \le \widetilde C_1^T\,|t-s|^{H+\frac12},
\end{align*}
with
\[
\widetilde C_1^T = \max\Big\{2\sqrt2\,K,\ \sqrt2\,\lambda_n\,\big\|(\cdot)^{H-\frac12}\big\|_{L^1[0,1]}\Big\} = \max\Big\{2\sqrt2\,K,\ \sqrt2\,(2n-1)\frac{\pi}{2}\,\big\|(\cdot)^{H-\frac12}\big\|_{L^1[0,1]}\Big\},
\]
since $\cos(\cdot)$ is Lipschitz on any compact set and $\int_0^{\cdot} v^{H-\frac12}\,\mathrm dv$ is $(H+\frac12)$-Hölder continuous.

  • If $0\le s-T\le s\le t-T\le t\le 1$,
\begin{align*}
\big|K_H^T[\psi_n](t)-K_H^T[\psi_n](s)\big| &= \sqrt2\,\Big|\int_{t-T}^{t} v^{H-\frac12}\cos((t-v)\lambda_n)\,\mathrm dv - \int_{s-T}^{s} v^{H-\frac12}\cos((s-v)\lambda_n)\,\mathrm dv\\
&\qquad + \int_{s}^{t-T} v^{H-\frac12}\cos((t-v)\lambda_n)\,\mathrm dv - \int_{s}^{t-T} v^{H-\frac12}\cos((t-v)\lambda_n)\,\mathrm dv\\
&\qquad + \int_{s}^{t-T} v^{H-\frac12}\cos((s-v)\lambda_n)\,\mathrm dv - \int_{s}^{t-T} v^{H-\frac12}\cos((s-v)\lambda_n)\,\mathrm dv\Big|\\
&\le \sqrt2\bigg(\Big|\int_{s}^{t-T} v^{H-\frac12}\big(\cos((t-v)\lambda_n)-\cos((s-v)\lambda_n)\big)\,\mathrm dv\Big| + \Big|\int_{s}^{t} v^{H-\frac12}\cos((t-v)\lambda_n)\,\mathrm dv\Big|\\
&\qquad\quad + \Big|\int_{s-T}^{t-T} v^{H-\frac12}\cos((s-v)\lambda_n)\,\mathrm dv\Big|\bigg)\\
&\le \cdots \le \widetilde C_1^T\,|t-s|^{H+\frac12},
\end{align*}
where the dots stand for the same computations as in the previous case, leading to the same estimate with the same constant $\widetilde C_1^T$.

This proves (A).

To prove (B), recall that, for $T\in[0,1]$ and $n\in\mathbb N$, the function $K_H^T[\psi_n]:[T,1]\to\mathbb R$ reads
\begin{equation}
K_H^T[\psi_n](t) = \sqrt2\int_0^T (t-s)^{H-\frac12}\cos\Big(\Big(n-\frac12\Big)\pi s\Big)\,\mathrm ds = \sqrt2\,m^{-(H+\frac12)}\int_0^{mT} (mt-u)^{H-\frac12}\cos(\pi u)\,\mathrm du =: \Phi_m(t), \tag{A12}
\end{equation}
with the change of variable $u=(n-\frac12)s=:ms$. Denote from now on $\widetilde{\mathbb N}:=\{m=n-\frac12,\ n\in\mathbb N\}$. From (A12), we deduce, for each $m\in\widetilde{\mathbb N}$ and $t\in[T,1]$,
\begin{equation}
m^{H+\frac12}\,\Phi_m(t) = \sqrt2\int_0^{mT} (mt-u)^{H-\frac12}\cos(\pi u)\,\mathrm du =: \sqrt2\,\phi_m(t). \tag{A13}
\end{equation}
To end the proof of (B), it therefore suffices to show that $(\phi_m(t))_{m\in\widetilde{\mathbb N},\,t\in[T,1]}$ is uniformly bounded, since in that case
\[
\big\|K_H^T[\psi_n]\big\|_{\infty} = \sup_{t\in[T,1]}\big|K_H^T[\psi_n](t)\big| = \sup_{t\in[T,1]}\big|\Phi_{n-\frac12}(t)\big| = \sqrt2\,\Big(n-\frac12\Big)^{-(H+\frac12)}\sup_{t\in[T,1]}\big|\phi_{n-\frac12}(t)\big| \le \sqrt2\,\Big(n-\frac12\Big)^{-(H+\frac12)}\sup_{t\in[T,1],\,m\in\widetilde{\mathbb N}}|\phi_m(t)| \le \sqrt2\,\Big(n-\frac12\Big)^{-(H+\frac12)} C \le C_2^T\,n^{-(H+\frac12)},
\]
for some $C_2^T>0$, proving (B). The following guarantees the uniform boundedness of $\phi_x$ in (A13).

Proposition A.1

For any $T\in[0,1]$, there exists $C>0$ such that $|\phi_x(t)|\le C$ for all $x\ge 0$ and $t\in[T,1]$.

Proof.

For $x>0$, we write
\[
\phi_x(t) = \int_0^{xT} (xt-u)^{H-\frac12}\cos(\pi u)\,\mathrm du = \Re\Big\{\int_0^{xT} (xt-u)^{H-\frac12}\,e^{i\pi u}\,\mathrm du\Big\}.
\]
Using the representation in Lemma 4.5, we are thus left to prove that the maps $\zeta_{\frac12}(\cdot,h_1)$ and $\zeta_{\frac32}(\cdot,h_2)$, defined in (Equation13), are bounded on $[0,\infty)$ by, say, $L_{\frac12}$ and $L_{\frac32}$. Indeed, in this case,
\begin{align*}
\sup_{x>0,\,t\in[T,1]}|\phi_x(t)| &= \sup_{x>0,\,t\in[T,1]}\Big|\int_0^{xT} (xt-u)^{H-\frac12}\,e^{i\pi u}\,\mathrm du\Big|\\
&\le \sup_{x>0,\,t\in[T,1]}\Big|\frac{e^{i\pi xt}}{2}\Big[\big(\zeta_{\frac12}(xt,h_1)-\zeta_{\frac12}(x(t-T),h_1)\big) - i\pi\big(\zeta_{\frac32}(xt,h_2)-\zeta_{\frac32}(x(t-T),h_2)\big)\Big]\Big|\\
&\le \frac12\sup_{y,z\in[0,\infty)}\Big|\big(\zeta_{\frac12}(y,h_1)-\zeta_{\frac12}(z,h_1)\big) - i\pi\big(\zeta_{\frac32}(y,h_2)-\zeta_{\frac32}(z,h_2)\big)\Big|\\
&\le \pi\Big\{\sup_{y\in[0,\infty)}\big|\zeta_{\frac12}(y,h_1)\big| + \sup_{y\in[0,\infty)}\big|\zeta_{\frac32}(y,h_2)\big|\Big\} \le \pi\big(L_{\frac12}+L_{\frac32}\big) =: C<+\infty.
\end{align*}
The maps $\zeta_{\frac12}(\cdot,h_1)$ and $\zeta_{\frac32}(\cdot,h_2)$ are both clearly continuous. Moreover, as $z$ tends to infinity, $\zeta_k(z,h)$ converges to a constant $c_k$, for $(k,h)\in\{(\frac12,h_1),(\frac32,h_2)\}$. The identities
\[
\frac{{}_1F_2(h;\frac12,1+h;-x)}{h} = \int_0^1 \frac{\cos(2\sqrt{xu})}{u^{1-h}}\,\mathrm du \qquad\text{and}\qquad \frac{{}_1F_2(h;\frac32,1+h;-x)}{h} = \frac{1}{2\sqrt x}\int_0^1 \frac{\sin(2\sqrt{xu})}{u^{3/2-h}}\,\mathrm du
\]
hold (this can be checked with Wolfram Mathematica, for example), and therefore
\begin{align*}
\zeta_{\frac12}(z,h_1) &= \frac{z^{2h_1}}{2h_1}\,{}_1F_2\Big(h_1;\frac12,1+h_1;-\frac{\pi^2z^2}{4}\Big) = \frac{z^{2h_1}}{2}\int_0^1 u^{h_1-1}\cos(\pi z\sqrt u)\,\mathrm du\\
&= \frac{z^{2h_1}}{2}\int_0^{\pi z} \frac{x^{2(h_1-1)}}{(\pi z)^{2(h_1-1)}}\,\cos(x)\,\frac{2x}{\pi^2z^2}\,\mathrm dx = \frac{1}{\pi^{2h_1}}\int_0^{\pi z} x^{2h_1-1}\cos(x)\,\mathrm dx,
\end{align*}
where, in the second line, we used the change of variables $x=\pi z\sqrt u$. In particular, as $z$ tends to infinity, this converges to
\[
\frac{1}{\pi^{2h_1}}\int_0^{+\infty} x^{2h_1-1}\cos(x)\,\mathrm dx = \frac{\cos(\pi h_1)}{\pi^{2h_1}}\,\Gamma(2h_1) =: c_{1/2} \approx 0.440433.
\]
Analogously, for $k=\frac32$,
\begin{align*}
\zeta_{\frac32}(z,h_2) &= \frac{z^{2h_2}}{2h_2}\,{}_1F_2\Big(h_2;\frac32,1+h_2;-\frac{\pi^2z^2}{4}\Big) = \frac{z^{2h_2}}{2\pi z}\int_0^1 u^{h_2-\frac32}\sin(\pi z\sqrt u)\,\mathrm du\\
&= \frac{z^{2h_2}}{2\pi z}\int_0^{\pi z} \frac{x^{2h_2-3}}{(\pi z)^{2h_2-3}}\,\sin(x)\,\frac{2x}{\pi^2z^2}\,\mathrm dx = \frac{1}{\pi^{2h_2}}\int_0^{\pi z} x^{2(h_2-1)}\sin(x)\,\mathrm dx,
\end{align*}
with the same change of variables as before. As $z$ tends to infinity, this converges to
\[
\frac{1}{\pi^{2h_2}}\int_0^{+\infty} x^{2h_2-2}\sin(x)\,\mathrm dx = -\frac{\cos(\pi h_2)}{\pi^{2h_2}}\,\Gamma(2h_2-1) =: c_{3/2} \approx 0.193.
\]
For $k>0$, $\zeta_k(z,h)=\frac{z^{2h}}{2h}\big(1+O(z^2)\big)$ as $z\downarrow 0$. Since $H\in(0,\frac12)$, the two functions are continuous and bounded, and the proposition follows.

A.5. Proof of Theorem 4.11

We only provide the proof of (Equation15) since, as already noticed, that of (Equation16) follows immediately. Suppose that $F:\mathbb R\to\mathbb R$ is Lipschitz continuous with constant $M$. By definitions (Equation11) and (Equation14), we have
\begin{align*}
\big|\mathbb E[F(\mathrm{VIX}_T)] - \mathbb E[F(\widehat{\mathrm{VIX}}{}_T^{d})]\big| = \bigg|&\mathbb E\bigg[F\bigg(\Big|\frac1\Delta\int_T^{T+\Delta} v_0(t)\exp\Big\{\gamma Z_t^{T,\Delta} + \frac{\gamma^2}{2}\Big(\int_0^{t-T}\mathcal K(s)^2\,\mathrm ds - \int_0^{t}\mathcal K(s)^2\,\mathrm ds\Big)\Big\}\,\mathrm dt\Big|^{\frac12}\bigg)\bigg]\\
&- \mathbb E\bigg[F\bigg(\Big|\frac1\Delta\int_T^{T+\Delta} v_0(t)\exp\Big\{\gamma \widehat Z_t^{T,\Delta,d} + \frac{\gamma^2}{2}\Big(\int_0^{t-T}\mathcal K(s)^2\,\mathrm ds - \int_0^{t}\mathcal K(s)^2\,\mathrm ds\Big)\Big\}\,\mathrm dt\Big|^{\frac12}\bigg)\bigg]\bigg|.
\end{align*}
For clarity, let $Z:=Z^{T,\Delta}$, $\widehat Z:=\widehat Z^{T,\Delta,d}$, $\mathcal H:=\int_T^{T+\Delta}h(t)\,e^{\gamma Z_t}\,\mathrm dt$ and $\widehat{\mathcal H}:=\int_T^{T+\Delta}h(t)\,e^{\gamma\widehat Z_t}\,\mathrm dt$, with
\[
h(t) := \frac{v_0(t)}{\Delta}\exp\Big\{\frac{\gamma^2}{2}\Big(\int_0^{t-T}\mathcal K(s)^2\,\mathrm ds - \int_0^{t}\mathcal K(s)^2\,\mathrm ds\Big)\Big\}, \qquad \text{for } t\in[T,T+\Delta].
\]
We can therefore write, using the Lipschitz property of $F$ (with constant $M$) and Lemma A.4,
\begin{align*}
\big|\mathbb E[F(\mathrm{VIX}_T)] - \mathbb E[F(\widehat{\mathrm{VIX}}{}_T^{d})]\big| &= \big|\mathbb E[F(\mathcal H^{\frac12})] - \mathbb E[F(\widehat{\mathcal H}^{\frac12})]\big| \le \mathbb E\big[\big|F(\mathcal H^{\frac12})-F(\widehat{\mathcal H}^{\frac12})\big|\big] \le M\,\mathbb E\big[\big|\mathcal H^{\frac12}-\widehat{\mathcal H}^{\frac12}\big|\big]\\
&\le M\,\mathbb E\Big[\Big(\frac{1}{\sqrt{\mathcal H}}+\frac{1}{\sqrt{\widehat{\mathcal H}}}\Big)\big|\mathcal H-\widehat{\mathcal H}\big|\Big] =: M\,\mathbb E\big[A\,\big|\mathcal H-\widehat{\mathcal H}\big|\big]\\
&\le M\,\mathbb E\Big[A\int_T^{T+\Delta} h(t)\,\big|e^{\gamma Z_t}-e^{\gamma\widehat Z_t}\big|\,\mathrm dt\Big] \le M\,\mathbb E\Big[A\int_T^{T+\Delta} h(t)\,\gamma\big(e^{\gamma Z_t}+e^{\gamma\widehat Z_t}\big)\big|Z_t-\widehat Z_t\big|\,\mathrm dt\Big].
\end{align*}
Now, an application of Hölder's inequality yields
\begin{align*}
\big|\mathbb E[F(\mathrm{VIX}_T)] - \mathbb E[F(\widehat{\mathrm{VIX}}{}_T^{d})]\big| &\le M\,\mathbb E\bigg[\gamma A\,\Big|\int_T^{T+\Delta} h(t)^2\big(e^{\gamma Z_t}+e^{\gamma\widehat Z_t}\big)^2\,\mathrm dt\Big|^{\frac12}\,\Big|\int_T^{T+\Delta}\big|Z_t-\widehat Z_t\big|^2\,\mathrm dt\Big|^{\frac12}\bigg]\\
&\le M\,\mathbb E\Big[\gamma^2A^2\int_T^{T+\Delta} h(t)^2\big(e^{\gamma Z_t}+e^{\gamma\widehat Z_t}\big)^2\,\mathrm dt\Big]^{\frac12}\,\mathbb E\Big[\int_T^{T+\Delta}\big|Z_t-\widehat Z_t\big|^2\,\mathrm dt\Big]^{\frac12} = K\,\mathbb E\Big[\int_T^{T+\Delta}\big|Z_t-\widehat Z_t\big|^2\,\mathrm dt\Big]^{\frac12},
\end{align*}
where $K := M\,\mathbb E\big[\gamma^2A^2\int_T^{T+\Delta} h(t)^2(e^{\gamma Z_t}+e^{\gamma\widehat Z_t})^2\,\mathrm dt\big]^{\frac12}$. It remains to show that $K$ is a strictly positive finite constant. This follows from the fact that $(Z_t)_{t\in[T,T+\Delta]}$ does not explode in finite time (and neither does its quantization $\widehat Z$). The inequality $(a+b)^2\le 2(a^2+b^2)$ and Hölder's inequality imply
\begin{align*}
K^2 &\le 4M^2\gamma^2\,\mathbb E\Big[\Big(\frac{1}{\mathcal H}+\frac{1}{\widehat{\mathcal H}}\Big)\int_T^{T+\Delta} h(t)^2\big(e^{2\gamma Z_t}+e^{2\gamma\widehat Z_t}\big)\,\mathrm dt\Big]\\
&\le 4M^2\gamma^2\,\mathbb E\Big[\Big|\frac{1}{\mathcal H}+\frac{1}{\widehat{\mathcal H}}\Big|^2\Big]^{\frac12}\,\mathbb E\Big[\Big|\int_T^{T+\Delta} h(t)^2\big(e^{2\gamma Z_t}+e^{2\gamma\widehat Z_t}\big)\,\mathrm dt\Big|^2\Big]^{\frac12}\\
&\le 16M^2\gamma^2\,\mathbb E\Big[\frac{1}{\mathcal H^2}+\frac{1}{\widehat{\mathcal H}^2}\Big]^{\frac12}\,\mathbb E\Big[\Big|\int_T^{T+\Delta} h(t)^2 e^{2\gamma Z_t}\,\mathrm dt\Big|^2 + \Big|\int_T^{T+\Delta} h(t)^2 e^{2\gamma\widehat Z_t}\,\mathrm dt\Big|^2\Big]^{\frac12} =: 16M^2\gamma^2\,(A_1+A_2)^{\frac12}(B_1+B_2)^{\frac12}.
\end{align*}
We only need to show that $A_1,A_2,B_1$ and $B_2$ are finite. Since $h$ is a positive continuous function on the compact interval $[T,T+\Delta]$, we have
\begin{equation}
\mathcal H \ge \int_T^{T+\Delta}\inf_{s\in[T,T+\Delta]}\big(h(s)\,e^{\gamma Z_s}\big)\,\mathrm dt \ge \Delta\inf_{t\in[T,T+\Delta]}h(t)\,\inf_{s\in[T,T+\Delta]}e^{\gamma Z_s} \ge \Delta\,\widetilde h\,\exp\Big\{\gamma\inf_{s\in[T,T+\Delta]}Z_s\Big\}, \tag{A14}
\end{equation}
with $\widetilde h := \inf_{t\in[T,T+\Delta]}h(t)>0$. The inequality (A14) implies
\[
A_1 = \mathbb E[\mathcal H^{-2}] \le \frac{\mathbb E\big[\exp\{-2\gamma\inf_{s\in[T,T+\Delta]}Z_s\}\big]}{\Delta^2\widetilde h^2} = \frac{\mathbb E\big[\exp\{2\gamma\sup_{s\in[T,T+\Delta]}(-Z_s)\}\big]}{\Delta^2\widetilde h^2} = \frac{1}{\Delta^2\widetilde h^2}\,\mathbb E\Big[\exp\Big\{2\gamma\sup_{s\in[T,T+\Delta]}Z_s\Big\}\Big],
\]
since $Z$ and $-Z$ have the same law.
The process $Z=(Z_t)_{t\in[T,T+\Delta]}$ is a continuous centered Gaussian process defined on a compact set; thus, by Theorem 1.5.4 in Adler and Taylor (Citation2007), it is almost surely bounded there. Furthermore, exploiting Lemma A.5 and the Borel–TIS inequality (Adler and Taylor Citation2007, Theorem 2.1.1), we have, with $\overline Z := \sup_{s\in[T,T+\Delta]}Z_s$ and $\sigma_T^2 := \sup_{t\in[T,T+\Delta]}\mathbb E[Z_t^2]$,
\begin{align}
\mathbb E\big[e^{2\gamma\sup_{s\in[T,T+\Delta]}Z_s}\big] = \mathbb E\big[e^{2\gamma\overline Z}\big] &= \int_0^{+\infty}\mathbb P\big(e^{2\gamma\overline Z}>u\big)\,\mathrm du = \int_0^{+\infty}\mathbb P\Big(\overline Z>\frac{\log u}{2\gamma}\Big)\,\mathrm du\notag\\
&= \int_0^{e^{2\gamma\mathbb E[\overline Z]}}\mathbb P\Big(\overline Z>\frac{\log u}{2\gamma}\Big)\,\mathrm du + \int_{e^{2\gamma\mathbb E[\overline Z]}}^{+\infty}\mathbb P\Big(\overline Z>\frac{\log u}{2\gamma}\Big)\,\mathrm du\notag\\
&\le e^{2\gamma\mathbb E[\overline Z]} + \int_{e^{2\gamma\mathbb E[\overline Z]}}^{+\infty}\exp\bigg\{-\frac12\Big(\frac{\frac{\log u}{2\gamma}-\mathbb E[\overline Z]}{\sigma_T}\Big)^2\bigg\}\,\mathrm du \le e^{2\gamma\mathbb E[\overline Z]} + \int_0^{+\infty}\exp\bigg\{-\frac12\Big(\frac{\frac{\log u}{2\gamma}-\mathbb E[\overline Z]}{\sigma_T}\Big)^2\bigg\}\,\mathrm du. \tag{A15}
\end{align}
The change of variable $\frac{\log u}{2\gamma}=v$ in the last term of (A15) yields
\[
\int_0^{+\infty}\exp\bigg\{-\frac12\Big(\frac{\frac{\log u}{2\gamma}-\mathbb E[\overline Z]}{\sigma_T}\Big)^2\bigg\}\,\mathrm du = 2\gamma\int_{\mathbb R}\exp\bigg\{-\frac12\Big(\frac{v-\mathbb E[\overline Z]}{\sigma_T}\Big)^2\bigg\}\,e^{2\gamma v}\,\mathrm dv = \sqrt{2\pi}\,2\gamma\,\sigma_T\,\mathbb E\big[e^{2\gamma Y}\big],
\]
since $Y\sim\mathcal N(\mathbb E[\overline Z],\sigma_T^2)$, and hence $A_1$ is finite. Now note that, in analogy with the last line of the proof of Proposition 3.12, for any $t\in[T,T+\Delta]$, we have
\begin{equation}
\mathbb E\big[Z_t\,\big|\,(\widehat Z_s)_{s\in[T,T+\Delta]}\big] = \mathbb E\Big[\mathbb E\big[Z_t\,\big|\,\{\widehat\xi_n^{d(n)}\}_{n=1,\dots,m}\big]\,\Big|\,(\widehat Z_s)_{s\in[T,T+\Delta]}\Big] = \mathbb E\big[\widehat Z_t\,\big|\,(\widehat Z_s)_{s\in[T,T+\Delta]}\big] = \widehat Z_t, \tag{A16}
\end{equation}
since the sigma-algebra generated by $(\widehat Z_s)_{s\in[T,T+\Delta]}$ is included in the sigma-algebra generated by $\{\widehat\xi_n^{d(n)}\}_{n=1,\dots,m}$. Exploiting, in sequence, (A16), the conditional version of $\sup_{t\in[T_1,T_2]}\mathbb E[f_t]\le\mathbb E[\sup_{t\in[T_1,T_2]}f_t]$, the conditional Jensen inequality together with the convexity of $x\mapsto e^{\gamma x}$ for $\gamma>0$, and the tower property, we obtain
\begin{align*}
\mathbb E\Big[\exp\Big\{\gamma\sup_{t\in[T,T+\Delta]}\widehat Z_t\Big\}\Big] &= \mathbb E\Big[\exp\Big\{\gamma\sup_{t\in[T,T+\Delta]}\mathbb E\big[Z_t\,\big|\,(\widehat Z_s)_{s\in[T,T+\Delta]}\big]\Big\}\Big] \le \mathbb E\Big[\exp\Big\{\gamma\,\mathbb E\Big[\sup_{t\in[T,T+\Delta]}Z_t\,\Big|\,(\widehat Z_s)_{s\in[T,T+\Delta]}\Big]\Big\}\Big]\\
&\le \mathbb E\Big[\mathbb E\Big[\exp\Big\{\gamma\sup_{t\in[T,T+\Delta]}Z_t\Big\}\,\Big|\,(\widehat Z_s)_{s\in[T,T+\Delta]}\Big]\Big] = \mathbb E\Big[\exp\Big\{\gamma\sup_{t\in[T,T+\Delta]}Z_t\Big\}\Big],
\end{align*}
for any $\gamma>0$. Thus
\[
A_2 = \mathbb E[\widehat{\mathcal H}^{-2}] \le \frac{1}{\Delta^2\widetilde h^2}\,\mathbb E\Big[\exp\Big\{2\gamma\sup_{t\in[T,T+\Delta]}(-\widehat Z_t)\Big\}\Big] \le \frac{1}{\Delta^2\widetilde h^2}\,\mathbb E\Big[\exp\Big\{2\gamma\sup_{t\in[T,T+\Delta]}Z_t\Big\}\Big],
\]
where the second inequality follows by applying the chain above, with $2\gamma$ in place of $\gamma$, to $-Z$ and its quantization $-\widehat Z$, and using that $-Z$ has the same law as $Z$. This is finite by the proof of the finiteness of $A_1$ above.

Exploiting Fubini's theorem, we rewrite $B_1$ as
\[
B_1 = \mathbb E\Big[\Big(\int_T^{T+\Delta}h(t)^2\,e^{2\gamma Z_t}\,\mathrm dt\Big)^2\Big] = \int_T^{T+\Delta}\!\!\int_T^{T+\Delta} h(t)^2\,h(s)^2\,\mathbb E\big[e^{2\gamma(Z_t+Z_s)}\big]\,\mathrm dt\,\mathrm ds.
\]
Since $(Z_t)_{t\in[T,T+\Delta]}$ is centered Gaussian with covariance $\mathbb E[Z_tZ_s]=\int_0^T\mathcal K(t-u)\,\mathcal K(s-u)\,\mathrm du$, we have $(Z_t+Z_s)\sim\mathcal N(0,g(t,s))$, with
\[
g(t,s) := \mathbb E\big[(Z_t+Z_s)^2\big] = \int_0^T\big(\mathcal K(t-u)+\mathcal K(s-u)\big)^2\,\mathrm du,
\]
and therefore
\[
B_1 = \int_T^{T+\Delta}\!\!\int_T^{T+\Delta} h(t)^2\,h(s)^2\,e^{2\gamma^2 g(t,s)}\,\mathrm dt\,\mathrm ds
\]
is finite since both $h$ and $g$ are continuous on compact intervals. Finally, for $B_2$ we have
\[
B_2 = \mathbb E\Big[\Big(\int_T^{T+\Delta}h(t)^2\,e^{2\gamma\widehat Z_t}\,\mathrm dt\Big)^2\Big] = \int_T^{T+\Delta}\!\!\int_T^{T+\Delta} h(t)^2\,h(s)^2\,\mathbb E\big[e^{2\gamma(\widehat Z_t+\widehat Z_s)}\big]\,\mathrm dt\,\mathrm ds \le \int_T^{T+\Delta}\!\!\int_T^{T+\Delta} h(t)^2\,h(s)^2\,\mathbb E\big[e^{2\gamma(Z_t+Z_s)}\big]\,\mathrm dt\,\mathrm ds = B_1,
\]
where we have used the fact that, for all $t,s\in[T,T+\Delta]$, $\widehat Z_t+\widehat Z_s$ is a stationary quantizer of $Z_t+Z_s$, so that $\mathbb E[e^{2\gamma(\widehat Z_t+\widehat Z_s)}]\le\mathbb E[e^{2\gamma(Z_t+Z_s)}]$ since $f(x)=e^{2\gamma x}$ is convex (see Remark 3.9 in Section 3.1). Therefore $B_2$ is finite, and the proof follows.
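The inequality $\mathbb E[f(\widehat Z)]\le\mathbb E[f(Z)]$ for convex $f$ and a stationary (conditional-mean) quantizer can be illustrated exactly in a scalar toy case: with $\widehat Z=\mathbb E[Z\mid \text{cell}]$ for $Z\sim\mathcal N(0,1)$, conditional Jensen gives the bound for any partition. The sketch below is ours, uses only the Python standard library, and the cell edges and exponent are arbitrary choices.

```python
# Toy illustration of E[f(Zhat)] <= E[f(Z)] for convex f, where Zhat is the
# conditional-mean (hence stationary) quantizer of Z ~ N(0,1) on a partition.
import math

def phi(x):
    # standard normal density (phi(+-inf) = 0.0 in float arithmetic)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(x):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def stationary_quantizer(edges):
    """(probability, conditional mean) of N(0,1) on each partition cell."""
    bounds = [-math.inf] + edges + [math.inf]
    cells = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        p = Phi(b) - Phi(a)
        m = (phi(a) - phi(b)) / p  # E[Z | a < Z <= b] for a truncated normal
        cells.append((p, m))
    return cells

def expected_exp(cells, lam):
    # E[exp(lam * Zhat)] for the quantized variable
    return sum(p * math.exp(lam * m) for p, m in cells)
```

Since $\mathbb E[e^{\lambda Z}]=e^{\lambda^2/2}$ for $Z\sim\mathcal N(0,1)$, the quantized expectation stays below this value for every partition, mirroring the comparison of $B_2$ with $B_1$.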

Appendix 2.

Some useful results

We recall some important results used throughout the text. Straightforward proofs are omitted.

Proposition A.2

For a Gaussian random variable $Z\sim\mathcal N(\mu,\sigma^2)$,
\[
\mathbb E\big[(Z-\mu)^p\big] = \begin{cases}(p-1)!!\,\sigma^p, & \text{if } p \text{ is even},\\[2pt] 0, & \text{if } p \text{ is odd}.\end{cases}
\]
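The moment formula can be checked by direct integration against the Gaussian density. The following sketch is ours and assumes mpmath (`mp.quad`, the double factorial `mp.fac2`).

```python
# Numerical check of the Gaussian central-moment formula.
import mpmath as mp

mp.mp.dps = 25

def central_moment(p, sigma):
    # E[(Z - mu)^p] for Z ~ N(mu, sigma^2), by integrating x^p * density
    pdf = lambda x: mp.exp(-x**2 / (2 * sigma**2)) / (sigma * mp.sqrt(2 * mp.pi))
    return mp.quad(lambda x: x**p * pdf(x), [-mp.inf, mp.inf])
```

Even moments match $(p-1)!!\,\sigma^p$ and odd moments vanish by symmetry.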

We recall Steele (Citation2004, Problem 8.5), correcting a small error; this result is used in the proof of Proposition 3.6:

Lemma A.3

Let $m,N\in\mathbb N$ and let $p_1,\dots,p_m$ be positive real numbers. Then
\[
\inf\Big\{\sum_{n=1}^{m}\frac{p_n}{x_n^{2}}:\ x_1,\dots,x_m\in(0,\infty),\ \prod_{n=1}^{m}x_n\le N\Big\} = \frac{m}{N^{2/m}}\Big(\prod_{j=1}^{m}p_j\Big)^{\frac1m},
\]
where the infimum is attained at $x_n = N^{\frac1m}\,p_n^{\frac12}\,\big(\prod_{j=1}^{m}p_j\big)^{-\frac{1}{2m}}$, for all $n\in\{1,\dots,m\}$.

Proof.

The arithmetic–geometric mean inequality implies
\[
\frac1m\sum_{n=1}^{m}\frac{p_n}{x_n^{2}} \ge \Big(\prod_{n=1}^{m}\frac{p_n}{x_n^{2}}\Big)^{\frac1m} = \Big(\prod_{n=1}^{m}p_n\Big)^{\frac1m}\Big(\prod_{n=1}^{m}x_n\Big)^{-\frac2m} \ge \Big(\prod_{n=1}^{m}p_n\Big)^{\frac1m}N^{-\frac2m},
\]
since $\prod_{n=1}^{m}x_n\le N$ by assumption. The right-hand side does not depend on $x_1,\dots,x_m$, so
\[
\inf\Big\{\sum_{n=1}^{m}\frac{p_n}{x_n^{2}}:\ x_1,\dots,x_m\in(0,\infty),\ \prod_{n=1}^{m}x_n\le N\Big\} \ge m\Big(\prod_{n=1}^{m}p_n\Big)^{\frac1m}N^{-\frac2m}.
\]
Choosing $\widetilde x_n = N^{\frac1m}\,p_n^{\frac12}\,(\prod_{j=1}^{m}p_j)^{-\frac{1}{2m}}$ for all $n\in\{1,\dots,m\}$, we obtain
\[
m\Big(\prod_{n=1}^{m}\frac{p_n}{N^{2}}\Big)^{\frac1m} = \sum_{n=1}^{m}\frac{p_n}{\widetilde x_n^{2}} \ge \inf\Big\{\sum_{n=1}^{m}\frac{p_n}{x_n^{2}}:\ x_1,\dots,x_m\in(0,\infty),\ \prod_{n=1}^{m}x_n\le N\Big\} \ge m\Big(\prod_{n=1}^{m}\frac{p_n}{N^{2}}\Big)^{\frac1m},
\]
which concludes the proof.
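Lemma A.3 can also be sanity-checked numerically: the claimed minimizer satisfies the constraint with equality and no randomly drawn feasible point beats its objective value. The sketch is ours and uses only the Python standard library.

```python
# Numerical sanity check of Lemma A.3 on randomly generated feasible points.
import math
import random

def claimed_inf(p, N):
    # m * N^{-2/m} * (prod p_j)^{1/m}
    m = len(p)
    return m * N**(-2.0 / m) * math.prod(p)**(1.0 / m)

def minimizer(p, N):
    # x_n = N^{1/m} * p_n^{1/2} * (prod p_j)^{-1/(2m)}
    m = len(p)
    prod = math.prod(p)
    return [N**(1.0 / m) * pn**0.5 * prod**(-1.0 / (2 * m)) for pn in p]

def objective(p, x):
    # sum_n p_n / x_n^2
    return sum(pn / xn**2 for pn, xn in zip(p, x))
```

Feasible competitors are generated by rescaling arbitrary positive vectors so that their product equals $N$ (the constraint is active at the optimum).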

Lemma A.4

The following hold:

  1. For any $x,y>0$, $\big|\sqrt x-\sqrt y\big| \le \big(\frac{1}{\sqrt x}+\frac{1}{\sqrt y}\big)|x-y|$.

  2. Set $C>0$. For any $x,y\in\mathbb R$, $\big|e^{Cx}-e^{Cy}\big| \le C\big(e^{Cx}+e^{Cy}\big)|x-y|$.

Lemma A.5

For a positive random variable $X$ on $(\Omega,\mathcal F,\mathbb P)$, $\mathbb E[X] = \int_0^{+\infty}\mathbb P(X>u)\,\mathrm du$.
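The layer-cake formula is easy to verify on a distribution with a closed-form tail; for an exponential variable with rate $\lambda$, integrating the tail $e^{-\lambda u}$ recovers the mean $1/\lambda$. The sketch is ours and assumes mpmath.

```python
# Layer-cake check: E[X] = integral_0^infty P(X > u) du,
# illustrated with an exponential random variable of rate lam.
import mpmath as mp

def layer_cake_mean(tail):
    # integrate the survival function P(X > u) over [0, infinity)
    return mp.quad(tail, [0, mp.inf])
```

The same function applies to any nonnegative variable whose survival function is available in closed form.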