Not directly related to option pricing, but I worked on a small Value at Risk (VaR) code. A VaR figure is always associated with a time horizon and a confidence level.
So the 95% daily VaR is a (negative) money amount: it separates our 5% worst daily outcomes from the rest. We know that 95% of our days will be better than that number and 5% will be worse. We don't know how much worse, only that they will be worse.
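To make that concrete, here is a minimal sketch using simulated, made-up P&L numbers (not real data): the 95% daily VaR is simply the 5% quantile of the daily P&L distribution.

```python
import numpy as np

# Hypothetical daily P&L in dollars, drawn from a Gaussian for illustration
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1000.0, size=10_000)

# The 95% daily VaR is the 5% quantile of the P&L distribution
var_95 = np.quantile(pnl, 0.05)  # a negative amount for a centred P&L

# By construction, about 5% of days fall below this threshold
breaches = np.mean(pnl < var_95)
print(var_95, breaches)
```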
Since the distribution of price changes varies over time, one cannot really know the VaR; one can only estimate it.
The parametric method assumes a distribution for the underlying, say Gaussian, and computes the 5% quantile analytically. A more refined variant uses the current market volatility to scale the Gaussian distribution.
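A minimal sketch of the parametric approach, assuming Gaussian returns and using scipy's normal quantile function; the return sample here is simulated, not the 3M data used below.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical daily return sample (stand-in for real market data)
rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0005, 0.012, size=2500)

# Fit a Gaussian and read the 5% quantile off it analytically
mu, sigma = daily_returns.mean(), daily_returns.std(ddof=1)
quantile = 0.05
parametric_var = norm.ppf(quantile, loc=mu, scale=sigma)

# The refined variant would replace sigma with a current (rolling or EWMA)
# volatility estimate and scale the same Gaussian quantile
print(parametric_var)
```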
The historical method estimates the quantiles from an empirical distribution of past returns; a refinement, again, is to scale it by volatility.
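The historical method in its plainest form is just an empirical quantile of past returns, with no distributional assumption. A small sketch on fat-tailed toy data (not real prices):

```python
import numpy as np

# Hypothetical fat-tailed return history (Student-t, scaled to daily size)
rng = np.random.default_rng(2)
hist_returns = rng.standard_t(df=4, size=2000) * 0.01

# The historical 95% VaR is the empirical 5% quantile of past returns
historical_var = np.quantile(hist_returns, 0.05)
print(historical_var)
```

The volatility-scaled refinement divides returns by a rolling vol first, takes the quantile of the normalised series, and multiplies back by the current vol.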
The following Python code uses pandas, a library that is very handy for manipulating financial data.
from pandas_datareader import data as web  # pandas.io.data moved into the pandas-datareader package
import matplotlib.pyplot as plt
import datetime
import pandas as pd
import numpy as np

###############################################################################
# Retrieve the data from the Internet

# Choose a time period
d1 = datetime.datetime(2001, 1, 1)
d2 = datetime.datetime(2012, 1, 1)

# Get the ticker (the "yahoo" source may need swapping for a currently maintained provider)
price = web.DataReader("MMM", "yahoo", d1, d2)['Adj Close']
price = price.asfreq('B').ffill()  # business-day frequency, pad missing prices forward
ret = price.pct_change()

# Choose the quantile
quantile = 0.05
# The vol window
volwindow = 50
# And the VaR window for rolling
varwindow = 250
I chose 3M for my stock, but the particular ticker doesn't matter. The volatility is estimated on a rolling window, and I also test estimating the quantiles on a rolling window.
Note that the window for quantiles needs to be large: since we are trying to estimate the 5% (or 1%) quantile, we must make sure we have enough observations in the tail.
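A quick toy experiment (Gaussian data, purely illustrative) shows why: the empirical 5% quantile estimated from 50 observations fluctuates much more from sample to sample than one estimated from 250.

```python
import numpy as np

rng = np.random.default_rng(4)

def quantile_spread(window, trials=2000):
    # Estimate the 5% quantile on 'trials' independent samples of the given
    # window size and report how much the estimate varies across samples
    samples = rng.normal(size=(trials, window))
    estimates = np.quantile(samples, 0.05, axis=1)
    return estimates.std()

spread_small = quantile_spread(50)
spread_large = quantile_spread(250)
print(spread_small, spread_large)  # the short window gives a noisier estimate
```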
The window for the vol, on the other hand, should not be too large, as volatility changes rather quickly.
The `asfreq('B')` line resamples the data to business-day frequency; missing prices are handled by carrying the previous price forward (a pad, or forward fill).
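A tiny illustration of that step on toy data: a price series with one missing business day gets that day filled with the previous close.

```python
import pandas as pd

# Toy price series: Mon, Tue, Thu -- Wednesday is missing
idx = pd.to_datetime(["2023-01-02", "2023-01-03", "2023-01-05"])
px = pd.Series([100.0, 101.0, 103.0], index=idx)

# Resample to business days, then pad the gap with the previous price
filled = px.asfreq("B").ffill()
print(filled)  # Wednesday appears, carrying Tuesday's 101.0
```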
Then the calculations themselves
# Simple VaR using all the data available so far (expanding window)
# (the old pd.expanding_quantile / pd.rolling_* helpers are methods in modern pandas)
unnormedquantile = ret.expanding().quantile(quantile)

# A similar one using a rolling window
unnormedquantileR = ret.rolling(varwindow).quantile(quantile)

# We can also normalise the returns by the vol (annualised with 256 trading days)
vol = ret.rolling(volwindow).std() * np.sqrt(256)
unitvol = ret / vol

# ... and get the expanding or rolling quantiles of the unit-vol returns
Var = unitvol.expanding().quantile(quantile)
VarR = unitvol.rolling(varwindow).quantile(quantile)
normedquantile = Var * vol
normedquantileR = VarR * vol
Using pandas' window functions makes this rather simple. The rolling quantile is computed over a fixed-size rolling window, while the expanding quantile uses all the data available up to the considered date.
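The difference is easy to see on a toy series: the expanding estimate always looks at everything so far, while the rolling one only looks at the last few points (the first windowful is NaN until it fills).

```python
import pandas as pd

s = pd.Series([0.01, -0.03, 0.02, -0.01, 0.04, -0.02])

expanding_q = s.expanding().quantile(0.5)      # median of everything so far
rolling_q = s.rolling(window=3).quantile(0.5)  # median of the last 3 points only
print(expanding_q)
print(rolling_q)
```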
Remember we got a series of prices from Yahoo; for each date in the series we then have a VaR estimate based only on past data, not on the whole sample, as that would be cheating (a look-ahead bias).
We can then plot it:
ret2 = ret.shift(-1)  # next day's return, to be compared with today's VaR estimate
courbe = pd.DataFrame({'returns': ret2,
                       'quantiles': unnormedquantile,
                       'Rolling quantiles': unnormedquantileR,
                       'Normed quantiles': normedquantile,
                       'Rolling Normed quantiles': normedquantileR,
                       })
courbe.plot()
plt.show()
It shows the returns of 3M and the different VaR estimates we computed.
It is a pretty dense graph, so I don't reproduce it here, but you won't have any trouble reproducing it yourself.
We see the normed VaR is much more variable, as it follows the volatility. Finally, we can judge the quality of our calculations: a 95% VaR should be broken 5% of the time. The following code tests this:
# sign(ret - VaR)/(-2) + 0.5 maps a VaR break to 1 and anything else to 0
courbe['nqBreak'] = np.sign(ret2 - normedquantile) / (-2) + 0.5
courbe['nqBreakR'] = np.sign(ret2 - normedquantileR) / (-2) + 0.5
courbe['UnqBreak'] = np.sign(ret2 - unnormedquantile) / (-2) + 0.5
courbe['UnqBreakR'] = np.sign(ret2 - unnormedquantileR) / (-2) + 0.5

nbdays = price.count()

print('Number of returns worse than the VaR')
print('Ideal VaR :             ', quantile * nbdays)
print('Simple VaR :            ', np.sum(courbe['UnqBreak']))
print('Normalized VaR :        ', np.sum(courbe['nqBreak']))
print('---------------------------')
print('Ideal Rolling VaR :     ', quantile * (nbdays - varwindow))
print('Rolling VaR :           ', np.sum(courbe['UnqBreakR']))
print('Rolling Normalized VaR :', np.sum(courbe['nqBreakR']))
The rolling VaR uses a rolling window for the quantile estimation, and it needs some time before that window is filled, so the number of days available for the test is different. From these numbers alone the simple VaR looks better, being closer to the ideal 5% of VaR breaks; but on the graph it looks very static and disconnected from the market, which is not something I would want from a risk measure.
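Beyond comparing raw break counts, one can ask whether an observed count is statistically compatible with the 5% target. This is not part of the original code; it is a small added sketch, with made-up numbers, of a two-sided binomial check in the spirit of the standard VaR backtests:

```python
from scipy.stats import binom

# Hypothetical backtest: 2500 trading days, 160 observed VaR breaks
n_days = 2500
breaks = 160
expected = 0.05 * n_days  # a correct 95% VaR should break ~125 times

# Two-sided tail probability of a count at least this extreme under
# Binomial(n_days, 0.05); a tiny p-value flags a mis-calibrated VaR
p_low = binom.cdf(breaks, n_days, 0.05)
p_high = binom.sf(breaks - 1, n_days, 0.05)
p_value = 2 * min(p_low, p_high)
print(expected, p_value)
```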
One could improve on this by changing the way the volatility is calculated: I use a simple estimator, but one based on open/high/low/close prices, or a GARCH framework, could be used instead.
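As one example of the open/high/low/close route, here is a sketch of the Parkinson estimator, which infers volatility from daily high/low ranges; the high/low arrays here are simulated stand-ins, not downloaded data, and the 256-day annualisation matches the convention used above.

```python
import numpy as np

# Hypothetical close/high/low paths for 50 days (toy data)
rng = np.random.default_rng(3)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 50)))
high = close * np.exp(np.abs(rng.normal(0, 0.005, 50)))
low = close * np.exp(-np.abs(rng.normal(0, 0.005, 50)))

# Parkinson: sigma^2 = E[ln(H/L)^2] / (4 ln 2), annualised with 256 days
log_hl = np.log(high / low)
parkinson_vol = np.sqrt(np.mean(log_hl ** 2) / (4 * np.log(2))) * np.sqrt(256)
print(parkinson_vol)
```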