### Chapter 6 - Statistical error in the simulation of stochastic differential equations - Exercises

Exercise 6.1 (central limit theorem for varying time step $h$ and number of simulations $M$)
We study the CLT-type convergence of $${\rm Error}_{h,M}=\frac 1M\sum_{m=1}^M{\cal E}(f,g,k,X^{(h,m)})-{\mathbb E}({\cal E}(f,g,k,X))$$ by varying both the number of simulations $M$ and the time step $h$ of the Euler scheme $X^{(h)}$, where \begin{align*} {\cal E}(f,g,k,X)&=f(X_T)e^{-\int_0^T k(r,X_r){\rm d}r}+\int_0^T g(s,X_s) e^{-\int_0^s k(r,X_r){\rm d}r}{\rm d}s,\\ {\cal E}(f,g,k,X^{(h)})&=f(X^{(h)}_T)e^{-h\sum_{j=0}^{N-1} k(jh,X^{(h)}_{jh})}+h\sum_{i=0}^{N-1}   g(ih,X^{(h)}_{ih}) e^{-h\sum_{j=0}^{i-1} k(jh,X^{(h)}_{jh})}.
\end{align*} We consider the  asymptotics $M\to +\infty$ and $h\to 0$, with different regimes on $Mh^2$.
1. Assume $Mh^2\to 0$, that $f,g,k$ are bounded continuous functions, and that the weak error is of order 1 w.r.t. $h$. Show a central limit theorem for $\sqrt M \,{\rm Error}_{h,M}$, with a centered Gaussian limit of variance ${\bf Var}({\cal E}(f,g,k,X))$.
Deduce that \begin{align*} \frac 1M\sum_{m=1}^M&{\cal E}(f,g,k,X^{(h,m)})-1.96 \frac{ \sigma_{h,M} }{\sqrt M }\\&\leq {\mathbb E}({\cal E}(f,g,k,X)) \leq \frac 1M\sum_{m=1}^M{\cal E}(f,g,k,X^{(h,m)})+1.96 \frac{ \sigma_{h,M} }{\sqrt M },\end{align*} is an asymptotic confidence interval at level 95%. Here $\sigma_{h,M}$ denotes the empirical standard deviation of the sample $({\cal E}(f,g,k,X^{(h,m)}))_{1\leq m\leq M}$.
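As a numerical companion to this confidence interval, here is a minimal sketch with the Euler scheme. The SDE ${\rm d}X_t=-X_t\,{\rm d}t+{\rm d}W_t$ and the bounded choices of $f,g,k$ below are illustrative assumptions, not part of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative test case (all choices are assumptions):
# dX_t = -X_t dt + dW_t, X_0 = 0, with bounded f, g and constant killing rate k.
T, h, M = 1.0, 0.01, 50_000
N = int(T / h)

f = lambda x: np.cos(x)
g = lambda s, x: np.sin(x)
k = lambda r, x: 1.0 + 0.0 * x            # constant rate, kept as a function of (r, x)

X = np.zeros(M)                            # M independent Euler paths, vectorized
disc = np.ones(M)                          # e^{-h sum_{j<i} k(jh, X_{jh})}
E = np.zeros(M)                            # samples of the discrete functional
for i in range(N):
    E += h * g(i * h, X) * disc            # integral term: g(ih, X_{ih}) discounted up to i-1
    disc *= np.exp(-h * k(i * h, X))       # extend the discount product to index i
    X += -X * h + np.sqrt(h) * rng.standard_normal(M)   # Euler step
E += f(X) * disc                           # terminal term f(X_T) e^{-h sum_{j<N} k}

est = E.mean()
sigma_hM = E.std(ddof=1)                   # empirical standard deviation
half = 1.96 * sigma_hM / np.sqrt(M)
ci = (est - half, est + half)              # asymptotic 95% confidence interval
```

The loop keeps the running discount one step behind the integrand, matching the indices $e^{-h\sum_{j=0}^{i-1}k(jh,X^{(h)}_{jh})}$ in the definition of ${\cal E}(f,g,k,X^{(h)})$.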
2. Assume $Mh^2={\rm Cst}\neq 0$, and prove a central limit theorem, now with a non-centered Gaussian random variable in the limit. For this, assume that the weak error admits a first-order expansion w.r.t. $h$.

Exercise 6.2 (multi-level method with various strong convergence orders)
Assume that the strong convergence of the Euler scheme is of order 1 w.r.t. $h$ (as in the case of constant $\sigma$; see Exercise 5.1), and that the weak convergence order is still 1 w.r.t. $h$.
1. By an analysis similar to that of Theorem 6.3.1, determine the optimal allocation of computational effort across the different levels (as a function of the number of simulations per level).
2. What is the global complexity ${\cal C}_{ost}$ as a function of the tolerance error $\varepsilon$?
3. More generally, assume that the Euler scheme converges strongly at order $\alpha \in(0,1]$, and weakly at order $\beta\in (0,1]$ (observe that $\alpha\leq \beta$). Derive the complexity/accuracy analysis associated with a multi-level method. What are the configurations of $(\alpha,\beta)$ for which (after optimizing the effort within levels) $${\cal C}_{ost} \sim_c \varepsilon^{-2},$$ i.e. we retrieve the standard Monte Carlo convergence rate?
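To make the multilevel construction of question 1 concrete, here is a hedged sketch of a multilevel estimator with coupled fine/coarse Euler paths. The test SDE (a geometric Brownian motion), the payoff, and the geometric allocation $M_l\propto 2^{-3l/2}$ (one possible choice suggested by an order-1 strong rate, where the level variances decay like $h_l^2$) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative test problem: E[f(X_T)] for dX = mu X dt + sig X dW (GBM).
mu, sig, T, X0 = 0.05, 0.2, 1.0, 1.0
f = lambda x: np.maximum(x - 1.0, 0.0)

def level_samples(l, M, N0=8):
    """Samples of f(X^fine_T) - f(X^coarse_T) on level l, with coupled paths."""
    Nf = N0 * 2**l
    hf = T / Nf
    Xf = np.full(M, X0)
    if l == 0:                                # coarsest level: plain Euler estimate
        for _ in range(Nf):
            dW = np.sqrt(hf) * rng.standard_normal(M)
            Xf += mu * Xf * hf + sig * Xf * dW
        return f(Xf)
    hc = 2 * hf
    Xc = np.full(M, X0)
    for _ in range(Nf // 2):
        dW1 = np.sqrt(hf) * rng.standard_normal(M)
        dW2 = np.sqrt(hf) * rng.standard_normal(M)
        Xf += mu * Xf * hf + sig * Xf * dW1   # two fine steps ...
        Xf += mu * Xf * hf + sig * Xf * dW2
        Xc += mu * Xc * hc + sig * Xc * (dW1 + dW2)  # ... one coarse step, same increments
    return f(Xf) - f(Xc)

L_max = 4
# Illustrative geometric allocation M_l ~ 2^{-3l/2} M_0 (order-1 strong rate case).
Ms = [int(40_000 * 2 ** (-1.5 * l)) + 1 for l in range(L_max + 1)]
mlmc_est = sum(level_samples(l, Ms[l]).mean() for l in range(L_max + 1))
```

The key point is the coupling: the coarse path at each level reuses the Brownian increments of the fine path, so the level corrections telescope to the finest-level estimator while keeping their variances small.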

Exercise 6.3 (control variate for arithmetic mean and geometric mean)
The following example is inspired by the valuation of Asian options in financial engineering [Kemna and Vorst 1990]. Let $S$ be a geometric Brownian motion of the form $S_t=e^{\sigma W_t+(\mu-\frac{\sigma^2}{2})t}$ with $\mu\in \mathbb{R}$, $\sigma>0$. We aim to compute by Monte Carlo methods the expectation $$\mathbb{E}(A^{\rm arith.})\quad \text{with}\quad A^{\rm arith.}:=\Big(\int_0^1 S_t \,{\rm d}t - K\Big)_+$$ for some given $K>0$.
1. Justify why $A^{\rm geom.}:=(\exp(\int_0^1 \log(S_t)\, {\rm d}t ) - K)_+$ is a possible control variate for the computation of $\mathbb{E}(A^{\rm arith.})$. We recall that if $Z\overset d= {\cal N}(m-\frac V2, V)$ with $V>0$, then \begin{align*}  \mathbb{E}(e^Z-K)_+=e^m&{\cal N}\left[\frac1{\sqrt V}\ln(e^m/K)  +\frac{\sqrt V}2\right]-K{\cal N}\left[\frac1{\sqrt V}\ln(e^m/K)-\frac{\sqrt V}2\right].\end{align*}
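The recalled formula is easy to wrap in a small helper (the function name `call_on_exp` is ours, and ${\cal N}$ is the standard Gaussian cdf, computed here via the error function):

```python
from math import log, sqrt, exp, erf

def Phi(x):
    """Standard Gaussian cdf N(x), via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_on_exp(m, V, K):
    """E(e^Z - K)_+ for Z ~ N(m - V/2, V), per the recalled formula."""
    d = log(exp(m) / K) / sqrt(V)
    return exp(m) * Phi(d + sqrt(V) / 2) - K * Phi(d - sqrt(V) / 2)
```

Note the parametrization: the mean of $Z$ is $m-\frac V2$, so that $\mathbb{E}(e^Z)=e^m$; when applying the formula to a Gaussian with mean $m_Z$ and variance $V$, one should take $m=m_Z+\frac V2$.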
2. For which range of parameters $(\mu,\sigma,T)$ will this control variate be the most efficient?
3. Now we approximate $A^{\rm arith.}$ by $$A_n^{\rm arith.}:=(\frac 1n\sum_{i=0}^{n-1} S_{\frac in} - K)_+.$$ Follow the arguments of Exercise 4.2 and show that $$\mathbb{E}(A_n^{\rm arith.})-\mathbb{E}(A^{\rm arith.})=O(n^{-1}).$$
4. What is the natural control variate $A^{\rm geom.}_n$ associated with $A_n^{\rm arith.}$?
5. Write a simulation program for illustrating the previous variance reduction.
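One possible program is sketched below (the parameters $\mu,\sigma,K,n,M$ are illustrative assumptions). It uses the control variate $A_n^{\rm geom.}:=(\exp(\frac 1n\sum_{i=0}^{n-1}\log S_{\frac in})-K)_+$, whose expectation is available in closed form because $Z:=\frac 1n\sum_{i=0}^{n-1}\log S_{\frac in}$ is Gaussian, so the recalled formula of question 1 applies:

```python
import numpy as np
from math import log, sqrt, exp, erf

rng = np.random.default_rng(2)
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard Gaussian cdf

# Illustrative parameters (assumptions)
mu, sigma, K, n, M = 0.1, 0.3, 1.0, 50, 20_000

# Simulate S_{i/n} = exp(sigma W_{i/n} + (mu - sigma^2/2) i/n), i = 0..n-1
t = np.arange(n) / n
dW = sqrt(1.0 / n) * rng.standard_normal((M, n - 1))
W = np.hstack([np.zeros((M, 1)), np.cumsum(dW, axis=1)])
S = np.exp(sigma * W + (mu - sigma**2 / 2) * t)

A_arith = np.maximum(S.mean(axis=1) - K, 0.0)                    # A_n^arith
A_geom = np.maximum(np.exp(np.log(S).mean(axis=1)) - K, 0.0)     # A_n^geom

# Closed-form E[A_n^geom]: Z = (1/n) sum log S_{i/n} is Gaussian
mZ = (mu - sigma**2 / 2) * t.mean()                  # mean of Z
V = sigma**2 * np.minimum.outer(t, t).mean()         # Var(Z) = sigma^2/n^2 sum min(t_i,t_j)
m = mZ + V / 2                                       # so that Z ~ N(m - V/2, V)
d = log(exp(m) / K) / sqrt(V)
exact_geom = exp(m) * Phi(d + sqrt(V) / 2) - K * Phi(d - sqrt(V) / 2)

plain = A_arith.mean()                               # plain Monte Carlo estimator
cv = (A_arith - A_geom).mean() + exact_geom          # control-variate estimator
std_plain = A_arith.std(ddof=1)
std_cv = (A_arith - A_geom).std(ddof=1)              # much smaller in practice
```

Both `plain` and `cv` are unbiased estimators of $\mathbb{E}(A_n^{\rm arith.})$; the variance reduction shows up in the ratio `std_cv / std_plain`, since the arithmetic and geometric payoffs are highly correlated.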