I wrote a quick script to backtest one particular method of deriving claims inflation from loss data. I first came across the method in 'Pricing in General Insurance' by Pietro Parodi [1], but I'm not sure whether the method pre-dates the book or not. To run the method, all we require is a large loss bordereau, which is useful from a data perspective. Unlike many methods which focus on fitting a curve through attritional loss ratios, or looking at ultimate attritional losses per unit of exposure over time, this method can easily produce a *large loss* inflation pick, which is important as the two can often be materially different.
The code works by simulating 10 years of individual losses from a Poisson-Lognormal model and then applying 5% inflation per annum. We then throw away all losses below the large loss threshold, putting ourselves in the position of only having been supplied with a large loss claims listing. Finally, we analyse the change over time in the 'median of the top 10 claims'. We select this slightly funky-looking statistic because it should increase over time in the presence of inflation, but by taking the median rather than the mean we strip out some of the spikiness. Since we hardcoded 5% inflation into the simulated data, we are looking to arrive back at this value when we apply the method to the synthetic data.
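To see why the statistic works: if every loss in year t is scaled up by (1+i)^t, then any fixed order statistic of that year's losses scales by the same factor, so ln(stat in year t) is approximately ln(stat in year 0) + t * ln(1+i). Regressing the log of the statistic against year therefore gives a slope of roughly ln(1+i), and exp(slope) - 1 takes us back to the inflation rate, which is exactly what the final lines of the code below do.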
I've pasted the code below, but jumping straight to the conclusion: on average the method does recover the 5% we put in, but with a standard deviation of around 2.6% across simulations, so on any single dataset the estimate can be some way off.
In [1]:
import numpy as np
import pandas as pd
from math import exp, log, sqrt
from scipy.stats import lognorm, poisson, linregress
In [2]:
Distmean = 1000000.0          # mean of the ground-up severity distribution
DistStdDev = Distmean * 1.5   # standard deviation of severity
AverageFreq = 100             # expected claim count per year
years = 10
ExposureGrowth = 0.0          # no exposure trend in this test
LLThreshold = 1e6             # large loss reporting threshold
Inflation = 0.05              # inflation hardcoded into the simulated data

# Moment-match the lognormal parameters to the target mean and std dev
Mu = log(Distmean / sqrt(1 + DistStdDev**2 / Distmean**2))
Sigma = sqrt(log(1 + DistStdDev**2 / Distmean**2))
s = Sigma                     # scipy's shape parameter
scale = exp(Mu)               # scipy's scale parameter
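As a quick aside (this check isn't part of the original script), Mu and Sigma above come from the standard lognormal moment-matching formulas, and it's cheap to confirm that scipy agrees. A minimal sanity check, assuming the cell above has been run:

# Sanity check: the fitted lognormal should reproduce the target moments
check_mean, check_var = lognorm.stats(s, scale=scale, moments='mv')
print(check_mean, sqrt(check_var))  # roughly 1000000.0 and 1500000.0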
In [3]:
MedianTop10Method = []
AllLnOutput = []
for sim in range(5000):
    SimOutputFGU = []
    SimOutputLL = []
    Frequency = []
    for year in range(years):
        # Poisson claim count for the year, allowing for exposure growth
        FrequencyInc = poisson.rvs(AverageFreq*(1+ExposureGrowth)**year, size=1)
        Frequency.append(FrequencyInc)
        # Simulate ground-up severities and inflate them to the current year
        r = lognorm.rvs(s, scale=scale, size=FrequencyInc[0])
        r = np.multiply(r, (1+Inflation)**year)
        r = np.sort(r)[::-1]
        # Keep only the losses at or above the large loss threshold
        r_LLOnly = r[r >= LLThreshold]
        SimOutputFGU.append(np.transpose(r))
        SimOutputLL.append(np.transpose(r_LLOnly))
    SimOutputFGU = pd.DataFrame(SimOutputFGU).transpose()
    SimOutputLL = pd.DataFrame(SimOutputLL).transpose()
    # Row 5 is the sixth-largest loss in each year - our proxy for the
    # median of the top 10 claims
    a = np.log(SimOutputLL.iloc[5])
    AllLnOutput.append(a)
    # Slope of log(statistic) against year estimates log(1 + inflation)
    b = linregress(a.index, a).slope
    MedianTop10Method.append(b)
AllLnOutputdf = pd.DataFrame(AllLnOutput)
dfMedianTop10Method = pd.DataFrame(MedianTop10Method)
dfMedianTop10Method['Exp-1'] = np.exp(dfMedianTop10Method[0]) - 1
print(np.mean(dfMedianTop10Method['Exp-1']))
print(np.std(dfMedianTop10Method['Exp-1']))
0.050423461401442896
0.02631028930074786
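For completeness, here's a rough sketch of how the same statistic could be wired up to a real large loss listing rather than simulated data. This isn't from the book; the bordereau DataFrame and its 'year' and 'loss' column names are placeholders for whatever your claims listing actually looks like:

def median_top10_inflation(bordereau):
    # Sixth-largest loss in each year, mirroring SimOutputLL.iloc[5] above.
    # Note: nlargest(6).iloc[-1] silently returns the smallest available
    # value if a year has fewer than six large losses, so check counts first.
    stat = bordereau.groupby('year')['loss'].apply(lambda x: x.nlargest(6).iloc[-1])
    stat = np.log(stat.sort_index())
    # Regress log(statistic) on year index (assumes consecutive years),
    # then back out the inflation rate from the slope
    slope = linregress(np.arange(len(stat)), stat.values).slope
    return np.exp(slope) - 1

Years with too few losses above the threshold, or with missing data, would need handling before the regression step.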
[1] Parodi, Pietro, Pricing in General Insurance, Chapman and Hall/CRC, ISBN 9781466581449.