Abstract
Lévy processes can be used to model the distributions of asset returns. Monte Carlo methods must frequently be used to value path-dependent options in these models, but they can be prone to considerable simulation bias when valuing options with continuous reset conditions. This paper shows how to correct for this bias for a range of options by generating a sample from the distribution of the extremes of the Lévy process on subintervals. The method is described for variance-gamma and normal inverse Gaussian processes. It gives considerable reductions in bias, so that it becomes feasible to apply variance reduction methods. The approach appears very fruitful in a framework in which many options do not have analytical solutions.
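To make the bias concrete: sampling a path only at grid points misses extremes reached inside each subinterval, so a discretely monitored minimum is biased high. The Python sketch below illustrates the correction in the simplest setting where the within-step minimum has a known conditional distribution, a driftless Brownian motion; it stands in for the subinterval-extremes sampling the paper develops for Lévy processes, and all function and parameter names are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def bridge_minimum(a, b, sigma, dt, u):
    # Sample the minimum of a Brownian bridge over a step of length dt
    # with endpoint values a and b (classical closed-form result).
    return 0.5 * (a + b - np.sqrt((b - a) ** 2 - 2.0 * sigma ** 2 * dt * np.log(u)))

T, n_steps, sigma, n_paths = 1.0, 16, 0.2, 20_000
dt = T / n_steps

naive = np.empty(n_paths)
corrected = np.empty(n_paths)
for i in range(n_paths):
    steps = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    x = np.concatenate(([0.0], np.cumsum(steps)))
    naive[i] = x.min()                 # grid-point minimum: biased high
    u = 1.0 - rng.random(n_steps)      # uniforms in (0, 1], keeps log(u) finite
    m = bridge_minimum(x[:-1], x[1:], sigma, dt, u)
    corrected[i] = m.min()             # includes within-step extremes

print("grid-only E[min]:", naive.mean())
print("corrected E[min]:", corrected.mean())
# The corrected estimate is close to the continuous-time value
# E[min] = -sigma * sqrt(2T / pi) ≈ -0.1596 for these parameters.
```

The corrected path extreme feeds directly into a barrier, lookback or similar continuously monitored payoff in place of the grid-point extreme.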
Acknowledgements
Claudia Ribeiro gratefully acknowledges the support of Fundação para a Ciência e a Tecnologia, CEMPRE ‐ Centro de Estudos Macroeconómicos e Previsão and Faculdade de Economia, Universidade do Porto. We are grateful for insightful comments from Grace Kuan and Gianluca Fusai.
Notes
1. In this paper we are not concerned with the change of measure to Q.
2. This procedure was described by Rydberg (1997); a sketch follows these notes.
3. The assumption is made for expositional simplicity only and can be relaxed at the cost of purely notational inconvenience.
4. Valuing a swing option, with payoff $(\max_t S_t - \min_t S_t - X)^+$, requires knowledge of the joint distribution of the maximum and minimum of $S_t$ (see the payoff sketch after these notes).
5. Each draw may require more than one uniform variate. For a small number of variates the draws may be fully stratified, leading to much greater speed-ups (a stratification sketch follows these notes).
6. See Dagpunar (1988) and Devroye (1986). Johnk's method fails at our level of machine precision in a small percentage of draws when both α and β are small. When this happens we resample (see the sketch after these notes). Experimentation leads us to believe that any bias introduced in our results by resampling is very small.
7. The computations were performed in VBA on a 900 MHz PC. All computation times are for a single replication of the Monte Carlo procedure.
8. Computed as the ratio of the squared standard deviation times the computation time for each method. It represents the fraction of the inferior method's time that the superior method needs to achieve the same standard error (a worked example follows these notes).
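Note 2: Rydberg's procedure simulates the normal inverse Gaussian process by subordinating Brownian motion with an inverse Gaussian clock. A minimal sketch might look as follows, assuming the usual (α, β, δ, μ) parameterization and the Michael, Schucany and Haas inverse Gaussian sampler; all function names are ours.

```python
import numpy as np

def sample_ig(mean, shape, rng):
    # Inverse Gaussian sampler (Michael, Schucany and Haas, 1976).
    y = rng.standard_normal() ** 2
    x = mean + mean ** 2 * y / (2 * shape) \
        - mean / (2 * shape) * np.sqrt(4 * mean * shape * y + mean ** 2 * y ** 2)
    if rng.random() <= mean / (mean + x):
        return x
    return mean ** 2 / x

def nig_increment(alpha, beta, delta, mu, dt, rng):
    # One NIG(alpha, beta, delta, mu) increment over dt via Brownian
    # subordination: X = mu*dt + beta*z + sqrt(z)*N(0,1), where z is
    # inverse Gaussian with mean delta*dt/gamma and shape (delta*dt)^2.
    gamma = np.sqrt(alpha ** 2 - beta ** 2)
    z = sample_ig(delta * dt / gamma, (delta * dt) ** 2, rng)
    return mu * dt + beta * z + np.sqrt(z) * rng.standard_normal()

# Example: one year of daily NIG log-returns with illustrative parameters.
rng = np.random.default_rng(0)
path = np.cumsum([nig_increment(15.0, -1.5, 0.5, 0.0, 1 / 252, rng)
                  for _ in range(252)])
```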
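Note 4: the swing payoff depends jointly on the path maximum and minimum, so an estimator must carry both extremes for each simulated path. A trivial sketch (names ours):

```python
def swing_payoff(path_max: float, path_min: float, strike: float) -> float:
    # (max_t S_t - min_t S_t - X)^+ : the payoff cannot be computed from
    # the marginal laws of the extremes alone; each path must deliver its
    # maximum and minimum together.
    return max(path_max - path_min - strike, 0.0)
```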
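Note 5: full stratification of a single uniform variate across n paths places exactly one draw in each of n equal subintervals of (0, 1). A sketch of this standard construction (names ours):

```python
import numpy as np

def stratified_uniforms(n, rng):
    # One draw from each of n equal strata of (0, 1): the i-th draw is
    # uniform on (i/n, (i+1)/n), so the sample is fully stratified.
    return (np.arange(n) + rng.random(n)) / n

rng = np.random.default_rng(1)
u = stratified_uniforms(10_000, rng)
rng.shuffle(u)   # break the ordering before pairing with the other variates
```

When each path consumes several variates, typically only one (or a low-dimensional set) is stratified; the shuffle prevents the stratified draws from being systematically paired with the remaining variates.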
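Note 6: Jöhnk's method draws uniforms U and V and accepts X = U^{1/α} / (U^{1/α} + V^{1/β}) when U^{1/α} + V^{1/β} ≤ 1. For small α and β both powers can underflow to zero in double precision, making the ratio 0/0; the resampling guard described in the note might look like this (names ours):

```python
import numpy as np

def johnk_beta(a, b, rng):
    # Johnk's acceptance method for Beta(a, b). For very small a and b,
    # u**(1/a) and v**(1/b) can both underflow to 0.0 at machine
    # precision; in that case we simply resample, as in the note.
    while True:
        x = rng.random() ** (1.0 / a)
        y = rng.random() ** (1.0 / b)
        s = x + y
        if 0.0 < s <= 1.0:        # s == 0.0 signals underflow: resample
            return x / s

rng = np.random.default_rng(2)
draws = [johnk_beta(0.05, 0.05, rng) for _ in range(5)]
```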
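Note 8: in symbols, the efficiency measure is $(s_A^2 t_A) / (s_B^2 t_B)$ for standard deviations $s$ and computation times $t$. A worked example with hypothetical numbers (names ours):

```python
def efficiency_ratio(sd_a, time_a, sd_b, time_b):
    # (sd_a^2 * time_a) / (sd_b^2 * time_b): the fraction of method B's
    # budget that method A needs to reach the same standard error, since
    # the cost of hitting a target standard error scales with sd^2 * time.
    return (sd_a ** 2 * time_a) / (sd_b ** 2 * time_b)

# Method A halves the standard error but takes twice as long per
# replication, so it needs only half of method B's budget.
print(efficiency_ratio(0.5, 2.0, 1.0, 1.0))   # 0.5
```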