In the last few posts I’ve been writing about deriving claims inflation using an ‘Nth largest loss’ method. After posting, it occurred to me that I’d used a normal approximation when constructing a 95% confidence interval, when in fact I already had the full Monte Carlo output, so I could have just read the interval directly off the percentiles of the estimated inflation values.
Below I amend the code slightly to just output this range directly.
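The idea of reading the interval straight off the simulation output can be sketched in isolation. The snippet below uses a hypothetical stand-in for the Monte Carlo output (a normal sample, which is roughly what the histogram further down looks like) and compares the two approaches:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-in for the Monte Carlo output:
# 50k simulated inflation estimates centred on 5%
estimates = rng.normal(loc=0.05, scale=0.025, size=50_000)

# Normal-approximation 95% CI: mean +/- 1.96 standard deviations
mean, sd = estimates.mean(), estimates.std()
ci_normal = (mean - 1.96 * sd, mean + 1.96 * sd)

# Direct empirical 95% CI: read off the 2.5th and 97.5th percentiles
ci_direct = tuple(np.percentile(estimates, [2.5, 97.5]))
```

For data that really is close to normal the two intervals should agree closely; the interesting question, answered below, is whether they agree for the actual simulated inflation estimates.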
Continuing my inflation theme, here is another cool balloon shot from João Marta Sanfins
The code below is very similar to what we saw previously, with two exceptions: I’ve bumped up the number of simulations to 50k to get a nicely filled-in histogram, and I’ve added a few percentiles at the bottom as an additional output. Here’s the code:
import numpy as np
import pandas as pd
from math import exp, log, sqrt
from scipy.stats import lognorm, poisson, linregress
import matplotlib.pyplot as plt
Distmean = 1000000.0
DistStdDev = Distmean * 1.5
AverageFreq = 100
years = 10
ExposureGrowth = 0.02

# Moment-matched lognormal parameters from the target mean and standard deviation
Mu = log(Distmean / sqrt(1 + DistStdDev**2 / Distmean**2))
Sigma = sqrt(log(1 + DistStdDev**2 / Distmean**2))

LLThreshold = 1e6
Inflation = 0.05
s = Sigma
scale = exp(Mu)
results = ["Distmean", "DistStdDev", "AverageFreq", "years", "LLThreshold", "Exposure Growth", "Inflation"]
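As a quick sanity check on the moment-matching step (my addition, not part of the original post), scipy’s lognormal, parameterised with shape `Sigma` and `scale = exp(Mu)`, should reproduce the target mean and standard deviation exactly:

```python
from math import exp, log, sqrt
from scipy.stats import lognorm

Distmean = 1_000_000.0
DistStdDev = 1.5 * Distmean

# Moment-matching: solve for the lognormal parameters from the target mean/sd
Mu = log(Distmean / sqrt(1 + DistStdDev**2 / Distmean**2))
Sigma = sqrt(log(1 + DistStdDev**2 / Distmean**2))

# scipy's parameterisation: s is the shape (sigma), scale = exp(mu)
m, v = lognorm.stats(Sigma, scale=exp(Mu), moments='mv')
# float(m) recovers Distmean and sqrt(v) recovers DistStdDev
```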
MedianTop10Method = []
AllLnOutput = []

for sim in range(50000):
    SimOutputFGU = []
    SimOutputLL = []
    Frequency = []
    for year in range(years):
        # Simulate the claim count for the year, allowing for exposure growth
        FrequencyInc = poisson.rvs(AverageFreq * (1 + ExposureGrowth)**year, size=1)
        Frequency.append(FrequencyInc)
        # Simulate severities and apply inflation
        r = lognorm.rvs(s, scale=scale, size=FrequencyInc[0])
        r = np.multiply(r, (1 + Inflation)**year)
        # r = np.sort(r)[::-1]
        # Keep only the large losses, sorted descending
        r_LLOnly = r[r >= LLThreshold]
        r_LLOnly = np.sort(r_LLOnly)[::-1]
        # SimOutputFGU.append(np.transpose(r))
        SimOutputLL.append(np.transpose(r_LLOnly))
    SimOutputFGU = pd.DataFrame(SimOutputFGU).transpose()
    SimOutputLL = pd.DataFrame(SimOutputLL).transpose()
    # For each year, pick the loss whose rank scales with that year's simulated
    # frequency, so that exposure growth does not bias the selection
    SimOutputLLRowtoUse = []
    for iColumn in range(len(Frequency)):
        iRow = round(5 * Frequency[iColumn][0] / AverageFreq)
        SimOutputLLRowtoUse.append(SimOutputLL[iColumn].iloc[iRow])
    SimOutputLLRowtoUse = pd.DataFrame(SimOutputLLRowtoUse)
    # Regress log(selected loss) on year; the slope estimates log(1 + inflation)
    a = np.log(SimOutputLLRowtoUse)
    AllLnOutput.append(a[0])
    b = linregress(a.index, a[0]).slope
    MedianTop10Method.append(b)

AllLnOutputdf = pd.DataFrame(AllLnOutput)
dfMedianTop10Method = pd.DataFrame(MedianTop10Method)
dfMedianTop10Method['Exp1'] = np.exp(dfMedianTop10Method[0]) - 1

print(np.mean(dfMedianTop10Method['Exp1']))
print(np.std(dfMedianTop10Method['Exp1']))

plt.hist(dfMedianTop10Method['Exp1'], bins=500)
plt.show()

print(np.percentile(dfMedianTop10Method['Exp1'], 50))
print(np.percentile(dfMedianTop10Method['Exp1'], 2.5))
print(np.percentile(dfMedianTop10Method['Exp1'], 97.5))
0.05132008448661077
0.024698588857988785
0.05105661369515313
0.0035685650028953945
0.1005266199404593
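The `exp(slope) - 1` step above rests on the fact that if a loss statistic grows by a constant factor each year, the slope of its log against year is the log of that factor. A toy deterministic check (the 2m starting loss is an arbitrary illustrative value):

```python
import numpy as np
from scipy.stats import linregress

# If the Nth largest loss grows by a constant 5% a year, the regression of
# log(loss) on year has slope log(1.05), so exp(slope) - 1 recovers 5%
inflation = 0.05
years = np.arange(10)
nth_largest = 2_000_000.0 * (1 + inflation) ** years

slope = linregress(years, np.log(nth_largest)).slope
recovered = np.exp(slope) - 1  # ≈ 0.05
```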
Results

The final two numbers output are the 2.5th and 97.5th percentiles, i.e. exactly what we need for a 95% confidence interval... and... drum roll... the range comes out at near enough exactly [0%, 10%], i.e. precisely what we predicted using the normal approximation. So you could say it's not an interesting result, but at least we can be more comfortable relying on that approximation in future.
Author: I work as an actuary and underwriter at a global reinsurer in London.

February 2024