THE REINSURANCE ACTUARY

Exposure inflation vs Exposure inflation

18/2/2021

 
The term exposure inflation can refer to a couple of different phenomena within insurance. A friend mentioned a couple of weeks ago that he was looking up the term in the context of pricing a property cat layer and he stumbled on one of my blog posts where I use the term. Apparently my blog post was one of the top search results, and there wasn’t really much other useful info, but I was actually talking about a different type of exposure inflation, so it wasn’t really helpful for him.
​
So as a public service announcement, for all those people Googling the term in the future, here are my thoughts on two types of exposure inflation:

Read More

FAQs about Lloyd’s of London

16/11/2020

 
I sometimes get emails from individuals who have stumbled across my website and have questions about Lloyd's of London which they can't find the answers to online. Below I've collated some of these questions and my responses, plus some extra questions chucked in which I thought might be helpful.

A brief caveat - while I've had a fair amount of interaction with Lloyd's syndicates over the years, I have never actually worked within Lloyd's for a syndicate, and the answers below just represent my understanding and my personal view - other views do exist! If you disagree with anything, or if you think anything below is incorrect, please let me know!


Are Lloyd’s of London and Lloyds bank related at all?
They are not - they just happen to have a similar name. Lloyd’s of London is an insurance market, whereas Lloyds Bank is a bank. They were both set up by people with the surname Lloyd - Lloyds Bank was formed by John Taylor and Sampson Lloyd, and Lloyd’s of London by Edward Lloyd. Perhaps in the mists of time those two were distantly related, but that’s about it for a link.
​


Read More

Two black swans? Or No Black Swans?

24/5/2020

 
Dan Glaser, CEO of Guy Carpenter's parent company Marsh & McLennan, stated last week that he believes that the current fallout from Coronavirus represents two simultaneous black swans.

Nassim Taleb meanwhile, the very guy who brought the term ‘black swan’ into popular consciousness, has stated that what we are dealing with at the moment isn’t even a black swan!

So what’s going on here? And who is right?

Picture

Read More

Motor triangles just got more difficult to analyse… again.

10/5/2020

 
Picture
If you are an actuary, you'll probably have done a fair bit of triangle analysis, and you'll know that triangle analysis tends to work pretty well if you have what I'd call 'nice, smooth, consistent' data - that is, data without sharp corners, no large one-off events, and no substantial growth. Unfortunately, over the last few years, motor triangles have been anything but nice, smooth or consistent. These days, using them often seems to require more assumptions than there are data points in the entire triangle.

Read More

Back of the envelope on Aon’s salary cuts

3/5/2020

 
Picture
​In case you missed it, Aon announced [1] last week that in response to the Covid19 outbreak, and the subsequent expected loss of revenue stemming from the fallout, they would be taking a series of preemptive actions. The message was that no one would lose their job, but that a majority of staff would be asked to accept a 20% salary cut.

​The cuts would be made to:
  • Approx. 70% of the 50,000 employees
  • The cuts would be approx. 20% of the relevant employees' salaries
  • The cuts would be temporary
  • Executives would take a 50% pay cut
  • Share buy-backs would be suspended
  • Dividends would continue to be paid

​So how significant will the cost savings be here? And is it fair that Aon is continuing with their dividend? I did a couple of back of the envelope calcs to investigate.

Read More

The problem with Floating Deductibles

4/1/2020

 
Picture
Photo by David Preston
What is a floating deductible?

Excess of Loss contracts for Aviation books, specifically those covering airline risks (planes with more than 50 seats), often use a special type of deductible called a floating deductible. Instead of applying a fixed amount to the loss in order to calculate recoveries, the deductible varies based on the size of the market loss and the line written by the insurer. These types of deductibles are reasonably common - I'd estimate something like 25% of airline accounts I've seen have had one.

As an aside, these policy features are almost always referred to as deductibles, but technically they are not actually deductibles from a legal perspective - they should probably be referred to as floating attachments instead. The definition of a deductible requires that it be deducted from the policy limit, rather than specifying the point above which the policy limit sits. That's a discussion for another day though!

The idea is that the floating deductible should be lower for an airline on which the insurer takes a smaller line, and higher for an airline on which the insurer takes a bigger line. In this sense they operate somewhat like a surplus share treaty in property reinsurance.

Before I get into my issues with them, let’s quickly review how they work in the first place.

An example

When binding an Excess of Loss contract with a floating deductible, we need to specify the following values upfront:
  • Limit = USD18.5m
  • Fixed attachment = USD1.5m
  • Original Market Loss = USD150m

And we need to know the following  additional information about a given loss in order to calculate recoveries from said loss:
  • The insurer takes a 0.75% line on the risk
  • The insurer’s limit is USD 1bn
  • The risk suffers a USD 200m market loss.

A standard XoL recovery calculation with the fixed attachment given above, would first calculate the UNL (200m*0.75%=1.5m), and then deduct the fixed attachment from this (1.5m-1.5m=0). Meaning in this case, for this loss and this line size, nothing would be recovered from the XoL.

To calculate the recovery from the XoL with a floating deductible, we would once again calculate the insurer's UNL of 1.5m. However we now need to calculate the applicable deductible. This is the lesser of 1.5m (the fixed attachment), and the insurer's effective line (defined as their UNL divided by the market loss = 1.5m/200m) multiplied by the Original Market Loss as defined in the contract. In this case, the effective line is 0.75%, and the Original Market Loss is 150m, hence 0.75%*150m = 1.125m. Since this is less than the 1.5m fixed attachment, the attachment we should use is 1.125m. The limit is always just 18.5m, and doesn't change if the attachment drops down. We would therefore calculate recoveries to this contract, for this loss size and risk, as if the layer was 18.5m xs 1.125m. This means the ceded loss would be 0.375m, and the net position would be 1.125m.
​
Here’s the same calculation in an easier to follow format:

Picture
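And for anyone who prefers code, here is a minimal sketch of the same recovery calculation in R (the function and variable names are my own; the inputs are just the example figures above):

floating_xol_recovery <- function(market_loss, line, limit = 18.5e6,
                                  fixed_attach = 1.5e6, oml = 150e6) {
  unl <- market_loss * line                          # insurer's UNL
  effective_line <- unl / market_loss                # UNL divided by the market loss
  attach <- min(fixed_attach, effective_line * oml)  # the floating deductible
  ceded <- min(limit, max(0, unl - attach))          # recovery from the layer
  c(unl = unl, attachment = attach, ceded = ceded, net = unl - ceded)
}

floating_xol_recovery(market_loss = 200e6, line = 0.0075)
# unl = 1.5m, attachment = 1.125m, ceded = 0.375m, net = 1.125m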
So…. what’s the issue?

This may seem quite sensible so far, however the issue is with the wording. The following is an example of a fairly standard London Market wording, taken from an anonymised slip which I came across a few years ago.

Priority: USD 10,000,000 each and every loss or an amount equal to the “Reinsured’s Portion’  of the total Original Insured Loss sustained by the original insured(s) of USD 200,000,000 each and every loss, whichever the lesser.
…
Reinsuring Clause
Reinsurers shall only be liable if and when the ultimate net loss paid by the Reinsured in respect of the interest as defined herein exceeds USD 10,000,000 each and every loss or an amount equal to the Reinsured’s Proportion of the total Original Insured Loss sustained by the original insured(s) of USD 200,000,000 or currency equivalent, each and every loss, whichever the lesser (herein referred to as the “Priority”)
For the purpose herein, the Reinsured’s Proportion shall be deemed to be a percentage calculated as follows, irrespective of the attachment dates of the policies giving rise to the Reinsured’s ultimate net loss and the Original Insured Loss:
Reinsured Ultimate Net Loss
/
Original Insured Loss
…
The Original Insured Loss shall be defined as the total amount incurred by the insurance industry including any proportional co-insurance and or self-insurance of the original insured(s), net of any recovery from any other source
What’s going on here is that we’ve defined the effective line to be the Reinsured’s UNL divided by the 100% market loss.


First problem

From a legal perspective, how would an insurer (or reinsurer for that matter) prove what the 100% insured market loss is? The insurer obviously knows their share of the loss, but what if this is a split placement with 70% placed in London on the same slip, 15% placed in a local market (let's say Indonesia), and a shortfall cover (15%) placed in Bermuda? Due to the different jurisdictions, let's say the Bermudian cover has a number of exclusions and subjectivities, and the Indonesian cover operates under the Indonesian legal system, which does not publicly disclose private contract details.

Even if the insurer is able to find out through a friendly broker what the other markets are paying, and therefore has a good sense of what the 100% market loss is, they may not have a legal right to this information. The airline does have a legal right to the information, but the reinsurance contract is a contract between the reinsured and the reinsurer - the airline is not a party to it. The point is whether the reinsured and the reinsurer have a legal right to the information.

The above issues may sound quite theoretical, and in practice there are normally no issues with collecting on these types of contracts. But to my mind, legal language should bear up to scrutiny even when stretched – that’s precisely when you are going to rely on it.  My contention is that as a general rule, it is a bad idea to rely on information in a contract which you do not have an automatic legal right to obtain.

The second problem

The intention with this wording, and with contracts of this form, is that the effective line should basically be the same as the insurer's signed line. Assuming everything is straightforward, if the insurer takes an x% line with a limit of Y bn, then for any loss less than Y bn the insurer's effective line will simply be x% * Size of Loss / Size of Loss, i.e. x%.

My guess as to why it is worded this way, rather than just taking the actual signed line, is that we don't want to open ourselves up to issues around what exactly we mean by 'the signed line' - what if the insurer has exposure through two contracts, both of which have different signed lines? What if there is an inuring Risk Excess which effectively nets down the gross signed line - should we then use the gross or net line? By couching the contract in terms of UNLs and market losses we attempt to avoid these ambiguities.

Let me give you a scenario though where this wording does fall down:

Scenario 1 – clash loss

Let’s suppose there is a mid-air collision between two planes, each resulting in an insured market loss of USD 1bn, so the Original Insured Loss is USD 2bn. If our insurer takes a 10% line on the first airline, but does not write the second airline, then their effective line is 10% * 1bn / 2bn = 5%... which is definitely not equal to their signed line of 10%.

You may think this is a pretty remote possibility, after all in the history of modern commercial aviation such an event has not occurred. What about the following scenario which does occur fairly regularly?

Scenario 2 – airline/manufacturer split liability

Suppose now there is a loss involving a single plane, and the size of the loss is once again USD 1bn, and that our insurer once again has a 10% line. In this case though, what if the manufacturer is found 50% responsible? Now the insurer only has a UNL of USD 500m, and yet once again, in the calculation of their floating deductible, we do the following:  10% * 500m/1bn = 5%.

Hmmm, once again our effective line is below our signed line, and the floating deductible will drop down even further than intended.

Suggested alternative wording

My suggested wording - and I'm not a lawyer so this is categorically not legal advice - is to retain the basic definition of the effective line as UNL divided by some version of the 100% market loss (by doing so we still neatly sidestep the issues mentioned above around gross vs net lines, or exposure from multiple slips), but to replace the definition of Original Insured Loss with some variation of the following: 'the proportion of the Original Insured Loss for which the insured derives a UNL through their involvement in some contract of insurance, or otherwise'.

Basically the intention is to restrict the market loss to only those contracts through which the insurer has an involvement. This deals with both issues: the insurer would not be able to net down their line further through reference to insured losses which are nothing to do with them, as in scenarios 1 and 2 above; and secondly it restricts the information requirements to contracts which the insurer has an automatic legal right to have knowledge of, since by definition they will be a party to them.

I did run this idea past a few reinsurance brokers a couple of years ago, and they thought it made sense. The only downside from their perspective is that it makes the client's reinsurance slightly less responsive - i.e. they knew about the strange quirk whereby the floating deductible drops in the event of manufacturer involvement, and saw it as a bonus for their client, one which was often not fully priced in by the reinsurer. They therefore had little incentive to attempt to drive through such a change. The only people who would have an incentive to push through this change are the larger reinsurers, though I suspect they will not do so until they've been burnt - having attempted to rely on the wording in a court case, at which point they may find it does not quite operate in the way they intended.



Should I inflate my loss ratios?

14/12/2019

 

​I remember being told as a relatively new actuarial analyst that you "shouldn't inflate loss ratios" when experience rating. This must have been sitting at the back of my mind ever since, because last week, when a colleague asked me basically the same question about adjusting loss ratios for claims inflation, I remembered the conversation I'd had with my old boss and it finally clicked.

Let's go back a few years - it's 2016 - Justin Bieber has a song out in which he keeps apologising, and to all of us in the UK, Donald Trump (if you've even heard of him) is still just America's version of Alan Sugar. I was working on the pricing for a Quota Share - I can't remember the class of business, but I'd been given an aggregate loss triangle, ultimate premium estimates, and rate change information. I had carefully and meticulously projected my losses to ultimate, applied rate changes, and then set the trended and developed losses against ultimate premiums. I ended up with a table that looked something like this:

(Note these numbers are completely made up but should give you a gist of what I'm talking about.)
Picture

I then thought to myself 'okay, this is a property class, I should probably inflate losses by about $3\%$ pa; the definition of a loss ratio is just losses divided by premium, therefore the correct way to adjust is to just inflate the ULR by $3\%$ pa'. I did this, sent the analysis to my boss at the time to review, and was told 'you shouldn't inflate loss ratios for claims inflation, otherwise you'd need to inflate the premium as well'. In my head I was thinking 'hmmm, I don't really get that... we've accounted for the change in premium by applying the rate change, and claims certainly do increase each year, but I don't see how premiums also inflate beyond rate movements?!' But since he was the kind of actuary who is basically never wrong, and we were short on time, I just took his word for it.

I didn’t really think of it again, other than to remember that ‘you shouldn’t inflate loss ratios’, until last week one of my colleagues asked me if I knew what exactly this ‘Exposure trend’ adjustment in the experience rating modelling he’d been sent was. The actuaries who had prepared the work had taken the loss ratios, inflated them in line with claims inflation (what you're not supposed to do), but then applied an ‘exposure inflation’ to the premium. Ah-ha I thought to myself, this must be what my old boss meant by inflating premium.

I'm not sure why it took me so long to get to the bottom of what is, when you get down to it, a fairly simple adjustment. In my defence, you really don't see this approach in 'London Market' style actuarial modelling - it's not covered in the IFoA exams for example. Having investigated a little, it does seem to be an approach which is used more by US actuaries - possibly it's in the CAS exams?

When I googled the term 'Exposure Trend', not a huge amount of useful info came up – there are a few threads on Actuarial Outpost which kinda mention it, but after mulling it over for a while I think I understand what is going on. I thought I’d write up my understanding in case anyone else is curious and stumbles across this post.

Proof by Example

I thought it would be best to explain through an example, let’s suppose we are analysing a single risk over the course of one renewal. To keep things simple, we’ll assume it’s some form of property risk, which is covering Total Loss Only (TLO), i.e. we only pay out if the entire property is destroyed.

Let’s suppose for $2018$, the TIV is $1m$ USD, we are getting a net premium rate of $1\%$ of TIV, and we think there is a $0.5\%$ chance of a total loss. For $2019$, the value of the property has increased by $5\%$, we are still getting a net rate of $1\%$, and we think the underlying probability of a total loss is the same.

In this case we would say the rate change is $0\%$. That is:

$$ \frac{\text{Net rate}_{19}}{\text{Net rate}_{18}} = \frac{1\%}{1\%} = 1 $$

However we would say that claim inflation is $5\%$, which is the increase in expected claims. This follows from:

$$ \text{Claim Inflation} = \frac{ \text{Expected Claims}_{19}}{ \text{Expected Claims}_{18}} = \frac{0.5\%*1.05m}{0.5\%*1m} = 1.05$$

From first principles, our expected gross loss ratio (GLR) for $2018$ is:
$$\frac{0.5 \% *(TIV_{18})}{1 \% *(TIV_{18})} = 50 \%$$ And for $2019$ is: $$\frac{0.5\%*(TIV_{19})}{1\%*(TIV_{19})} = 50\%$$
i.e. they are the same!

The correct adjustment when on-levelling $2018$ to $2019$ should therefore result in a flat GLR – this follows as we’ve got the same GLR in each year when we calculated above from first principles. If we’d taken the $18$ GLR, applied the claims inflation $1.05$ and applied the rate change $1.0$, then we might erroneously think the Gross Loss Ratio would be $50\%*1.05 = 52.5\%$. This would be equivalent to what I did in the opening paragraph of this post, the issue being, that we haven’t accounted for trend in exposure and our rate change is a measure of the change in net rate. If we include this exposure trend as an additional explicit adjustment this gives $50\%*1.05*1/1.05 = 50\%$. Which is the correct answer, as we can see by comparing to our first principles calculation.
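Here is the same arithmetic as a minimal sketch in R, using the made-up figures from the example (the variable names are my own):

glr_18         <- 0.50   # 2018 gross loss ratio from first principles
rate_change    <- 1.00   # movement in net rate on TIV
claim_infl     <- 1.05   # movement in aggregate expected claims
exposure_trend <- 1.05   # movement in TIV

glr_18 * claim_infl * rate_change                  # 0.525 - the erroneous answer
glr_18 * claim_infl * rate_change / exposure_trend # 0.500 - matches first principles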

So the fundamental problem is that our measure of rate change is a measure of the movement in rate on TIV, whereas our claim inflation is a measure of the movement in aggregate claims. These two are misaligned. If our rate change was instead a measure of the movement in overall premium, then the two measures would be consistent and we would not need the additional adjustment. However, it's much more common in this type of situation to be given rate change as a measure of the change in rate on TIV.
​
An advantage of making an explicit adjustment for exposure trend and claims inflation is that it allows us to apply a different rate to each - which is probably more accurate. There's no a priori justification as to why the two should always be the same. Claim inflation will be affected by additional factors beyond changes in the value of the assets being insured; this may include changes in frequency, changes in court award inflation, etc.

It’s also interesting to note that the claim inflation here is of a different nature to what we would expect to see in a standard Collective Risk Model. In that case we inflate individual losses by the average change in severity, i.e. ignoring any change in frequency. When adjusting the loss ratio above, we are adjusting for the change in frequency and severity together, i.e. in the aggregate loss.

The above discussion also shows the importance of understanding exactly what someone means by ‘rate change’. It may sound obvious but there are actually a number of subtle differences in what exactly we are attempting to measure when using this concept. Is it change in premium per unit of exposure, is it change in rate per dollar of exposure, or is it even change in rate adequacy? At various points I’ve seen all of these referred to as ‘rate change’.

Poisson Distribution for small Lambda

23/4/2019

 
I was asked an interesting question a couple of weeks ago when talking through some modelling with a client.
​
We were modelling an airline account, and for various reasons we had decided to base our large loss modelling on a very basic top-down allocation method. We would take a view of the market losses at a few different return periods, and then using a scenario approach,  would allocate losses to our client proportionately. Using this method, the frequency of losses is then scaled down by the % of major policies written, and the severity of losses is scaled down by the average line size.

To give some concrete numbers (which I’ve made up as I probably shouldn’t go into exactly what the client’s numbers were), let's say the company was planning on taking a line on around 10% of the Major Airline Risks, and their average line was around 1%. We came up with a table of return periods for market level losses. The table looked something like following (the actual one was also different to the table below, but not miles off):
Picture
​Then applying the 10% hit factor if there is a loss, and the 1% line written, we get the following table of return periods for our client:
Picture
Hopefully all quite straightforward so far. As an aside, it is quite interesting to sometimes pare back all the assumptions to come up with something transparent and simple like the above. For airline risks, the largest single policy limit is around USD 2.5bn, so we are saying our worst case scenario is a single full limit loss, and that each year this has around a 1 in 50 chance of occurring. We can then directly translate that into an expected loss, in this case it equates to 50m (i.e. 2.5bn *0.02) of pure loss cost. If we don't think the market is paying this level of premium for this type of risk, then we better have a good reason for why we are writing the policy!

So all of this is interesting (I hope), but what was the original question the client asked me?

We can see from the chart that for the market level the highest return period we have listed is 1 in 50. Clearly this does translate to a much longer return period at the client level, but in the meeting where I was asked the original question, we were just talking about the market level. The client was interested in what the 1 in 200 at the market level was and what was driving this in the modelling.

The way I had structured the model was to use four separate risk sources, each with a Poisson frequency (lambda set to one over the relevant return period), and a fixed severity. So what this question translates to is: for small lambdas $(\lambda \ll 1)$, what is the probability that $n=2$, $n=3$, etc.? And at what return period is the $n=2$ term driving the $1$ in $200$?

Let’s start with the definition of the Poisson distribution:

Let $N \sim Poi(\lambda)$, then:

$$P(N=n) = e^{-\lambda} \frac{ \lambda ^ n}{ n !} $$

We are interested in small $\lambda$ - note that for large $\lambda$ we can use a different approach and apply Stirling's approximation instead. Which, if you are interested, I've written about here:
www.lewiswalsh.net/blog/poisson-distribution-what-is-the-probability-the-distribution-is-equal-to-the-mean
For small lambda, the insight is to use a Taylor expansion of the $e^{-\lambda}$ term. The Taylor expansion of $e^{-\lambda}$ is:
$$ e^{-\lambda} = \sum_{i=0}^{\infty} \frac{(-\lambda)^i}{ i!} = 1  - \lambda + \frac{\lambda^2}{2} + o(\lambda^2) $$
​
We can then examine the pdf of the Poisson distribution using this approximation:

$$P(N=1) =\lambda e^{-\lambda} = \lambda \left( 1 - \lambda + \frac{\lambda^2}{2} + o(\lambda^2) \right) = \lambda - \lambda^2 + o(\lambda^2)$$
So in our example above, with $\lambda = \frac{1}{50}$, we have:
$$ P(N=1) \approx \frac{1}{50} - \left(\frac{1}{50}\right)^2$$

This means that, for small lambda, the probability that $N$ is equal to $1$ is always slightly less than lambda.

Now taking the case $N=2$:

$$P(N=2) = \frac{\lambda^2}{2} e^{-\lambda} = \frac{\lambda^2}{2} \left(1 - \lambda +\frac{\lambda^2}{2} + o(\lambda^2)\right) = \frac{\lambda^2}{2} -\frac{\lambda^3}{2} + o(\lambda^3) = \frac{\lambda^2}{2} + o(\lambda^2)$$

So once again, for $\lambda =\frac{1}{50}$ we have:

$$P(N=2) \approx \frac{1}{2}\left(\frac{1}{50}\right)^2 \approx P(N=1) \cdot \frac{\lambda}{2}$$

In this case, for our ‘1 in 50’ sized loss, we would expect to have two such losses in a year once every 5000 years! So this is definitely not driving our 1 in 200 result.
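We can sanity-check these approximations against the exact Poisson pmf in R (a quick sketch, using the 1-in-50 lambda from above):

lambda <- 1/50

dpois(1, lambda)        # ~0.0196, slightly below lambda as expected
lambda - lambda^2       # first order approximation

dpois(2, lambda)        # ~0.000196
lambda^2 / 2            # approximation - roughly a 1 in 5,000 year event

1 / dpois(2, lambda)    # implied return period of two such losses in one year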

We can add some extra columns to our market level return periods as follows:
Picture
So we see for the assumptions we made, around the 1 in 200 level our losses are still primarily being driven by the P(N=1) of the 2.5bn loss, but then in addition we will have some losses coming through corresponding to P(N=2) and P(N=3) of the 250m and 500m level, and also combinations of the other return periods.

So is this the answer I gave to the client in the meeting? …. Kinda. I waffled on a bit about this kind of thing, but it was only after getting back to the office that I thought about trying to break down analytically which loss levels we can expect to kick in at various return periods.

Of course, all of the above is nice, but there is an easier way to see the answer: since we'd already stochastically generated a YLT based on these assumptions, we could have just sorted the YLT by loss size, gone to the 99.5th percentile, and seen what sort of losses make up that level.

The above analysis would have been more complicated if we have also varied the loss size stochastically. You would normally do this for all but the most basic analysis. The reason we didn’t in this case was so as to keep the model as simple and transparent as possible. If we had varied the loss size stochastically then the 1 in 200 would have been made up of frequency picks of various return periods, combined with severity picks of various return periods. We would have had to arbitrarily fix one in order to say anything interesting about the other one, which would not have been as interesting.
 

Converting a Return Period to a RoL

15/3/2019

 

I came across a useful way of looking at Rate on Lines last week. I was talking to a broker about what return periods to use in a model for various levels of airline market loss (USD250m, USD500m, etc.). The model was intended to be just a very high level, transparent market level model which we could use as a framework for discussion with an underwriter. We were talking through the reasonableness of the assumptions when the broker came out with the following:
 
'Well, you’d pay about 12.5 on line in the retro market at that attachment level, so that’s a 1 in 7 break-even right?'
 
My response was:

'ummmm, come again?'

 
His reasoning was as follows:
 
Suppose the ILW pays $1$ @ $100$% reinstatements, and that it costs $12.5$​% on line.
Then if the layer suffers a loss, the insured will have a net position on the contract of $75$%. This is the $100$% limit which they receive due to the loss, minus the original $12.5$% premium, minus an additional $12.5$% reinstatement premium. The reinsurer will now need another $6$ clean years at $12.5$% RoL $(0.125 * 6 = 0.75)$ to recover the limit and be at break-even.
 
Here is a breakdown of the cashflow over the seven years for a $10m$ stretch at $12.5$% RoL:
 
Picture
So the loss year plus the six clean years, tells us that if a loss occurs once every 7 years, then the contract is at break-even for this level of RoL.

So this is kind of cool - any time we have a RoL for a retro layer, we can immediately convert it to a Return Period for a loss which would trigger the layer.
 
Generalisation 1 – various rates on line
 
We can then generalise this reasoning to apply to a layer with an arbitrary RoL. Using the same reasoning as above, the break-even return period ends up being:
 
$RP= 1 + \frac{(1-2*RoL)}{RoL}$
 
Inverting this gives:
 
$RoL = \frac{1}{(1 + RP)}$
 
So let's say we have an ILW costing $7.5$% on line, the break-even return period is:
 
$1 + \frac{(1-0.15)}{0.075} \approx 12.3$
 
Or let’s suppose we have a $1$ in $19$ return period, the RoL will be:
 
$0.05 = \frac{1}{(1 + 19)}$
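Here is a small R helper implementing the two formulas above (my own naming, and assuming one reinstatement at 100% as in the broker's example):

rol_to_return_period <- function(rol) 1 + (1 - 2 * rol) / rol
return_period_to_rol <- function(rp) 1 / (1 + rp)

rol_to_return_period(0.125)   # 7 - the 12.5% on line example
rol_to_return_period(0.075)   # ~12.3
return_period_to_rol(19)      # 0.05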
 
Generalisation 2 – other non-proportional layers
 
The formula we derived above was originally intended to apply to ILWs, but it also holds any time we think the loss to the layer, if it occurs, will be a total loss. This might be the case for a cat layer, a clash layer (a layer which attaches above the underwriting limit for a single risk), or any layer with a relatively high attachment point compared to the underwriting limit.
 
Adjustments to the formulas
 
There are a few adjustments we might need to make to these formulas before using them in practice.
 
Firstly, the RoL above has no allowance for profit or expense loading. We can account for this by converting the market RoL to a technical RoL, which is done by simply dividing the RoL by $120-130$% (or any other appropriate profit/expense loading). This has the effect of increasing the number of years before the loss is expected to occur.
 
Alternatively, if the layer does not have a paid reinstatement, or has a reinstatement factor other than $100$%, then we would need to amend the multiple of RoL in the formula above. For example, with nil paid reinstatements, the formula would be:
 
$RP = 1 + \frac{(1-RoL)}{RoL}$
 
Another refinement we might wish to make would be to weaken the total loss assumption. We would then need to reduce the RoL by an appropriate amount to account for the possibility of partial losses. It's quite hard to say exactly how much this adjustment should be - the lower the layer, the bigger it would need to be.

Correlations, Carl Friedrich Gauss, and Copulas

29/7/2018

 

It's the second week of your new Capital Modelling job. After days spent sorting IT issues, getting lost coming back from the toilets, and perfecting your new commute to work (probability of getting a seat + probability of delay * average journey temperature), your boss has finally given you your first real project to work on.

You've been asked to carry out an annual update of the Underwriting Risk Capital Charge for a minor part of the company's Motor book. Not the grandest of analysis you'll admit, this particular class only makes up about 0.2% of the company's Gross Written Premium, and the Actuaries who reserve the company's bigger classes would probably consider the number of decimal places used in the annual report more material than your entire analysis. But you know in your heart of hearts that this is just another stepping stone on your inevitable meteoric rise to Chief Actuary in the Merger and Acquisition department, where one day you will pass judgement on billion dollar deals in-between expensive lunches with CFOs, and drinks with journalists on glamorous rooftop bars.

The company uses in-house reserving software, but since you're not that familiar with it, and because you want to make a good impression, you decide to carry out extensive checking of the results in Excel. You fire up the Capital Modelling Software (which may or may not have a name that means a house made out of ice), put in your headphones and grind it out.

Hours later you emerge triumphant, and you've really nailed it, your choice of correlation (0.4), and correlation method (Gaussian Copula) is perfect. As planned you run extracts of all the outputs, and go about checking them in Excel. But what's this? You set the correlation to be 0.4 in the software, but when you check the correlation yourself in Excel, it's only coming out at 0.384?! What's going on?


Simulating using Copulas

The above is basically what happened to me (minus most of the actual details, but I did set up some modelling with correlated random variables, then checked it myself in Excel and was surprised to find that the actual correlation in the generated output was always lower than the input). I looked online but couldn't find anything explaining this phenomenon, so I did some investigating myself.

So just to restate the problem: when using Monte Carlo simulation and generating correlated random variables using the copula method, the generated sample always has a lower correlation than the correlation we specified when setting up the modelling.

My first thought for why this was happening was that we were not running enough simulations, and that the correlations would eventually converge if we just jacked up the number of simulations. This is the kind of behaviour you see when using Monte Carlo simulation and not getting the mean or standard deviation expected from the sample - if you just churn through more simulations, your output will eventually converge. When creating copulas using the Gaussian method, this is not the case though, and we can test this.

I generated the graph below in R to show the actual correlation we get when generating correlated random variables using the Copula method for a range of different numbers of simulations. There does seem to be some sort of loose limiting behaviour, as the number of simulations increases, but the limit appears to be around 0.384 rather than 0.4.

Picture

The actual explanation

First, we need to briefly review the algorithm for generating random variables with a given correlation using the normal copula.

Step 1 - Simulate from a multivariate normal distribution with the given covariance matrix.

Step 2 - Apply the Gaussian CDF transformation to generate random variables with uniform marginal distributions, but which still maintain a dependency structure

Step 3 - Apply the marginal distributions we want to the random variables generated in step 2

We can work through these three steps ourselves, and check at each step what the correlation is.

The first step is to generate a sample from the multivariate normal. I'll use a correlation of 0.4 throughout this example. Here is the R code to generate the sample:

library(MASS)
library(psych)

set.seed(100)

m <- 2
n <- 1000
sigma <- matrix(c(1, 0.4,
                  0.4, 1), 
                nrow=2)
z <- mvrnorm(n,mu=rep(0, m),Sigma=sigma,empirical=T)

​

And here is a Scatterplot of the generated sample from the multivariate normal distribution:
​
Picture

We now want to check the product moment correlation of our sample, which we can do using the following code:
​

cor(z,method='pearson')


Which gives us the following result:

> cor(z,method='pearson')
     [,1] [,2]
[1,]  1.0  0.4
[2,]  0.4  1.0


So we see that the correlation is 0.4 as expected. The psych package has a useful function which produces a summary showing a scatterplot, the two marginal distributions, and the correlation:

Picture

Let us also check Kendall's Tau and Spearman's rank at this point. This will be instructive later on. We can do this using the following code:


cor(z,method='spearman')

cor(z,method='kendall')


Which gives us the following results:

> cor(z,method='spearman')
          [,1]      [,2]
[1,] 1.0000000 0.3787886
[2,] 0.3787886 1.0000000


> cor(z,method='kendall')
          [,1]      [,2]
[1,] 1.0000000 0.2588952
[2,] 0.2588952 1.0000000


Note that this is less than 0.4 as well, but we will discuss this further later on.
We now need to apply step 2 of the algorithm, which is applying the Gaussian CDF transformation to our multivariate normal sample. We can do this using the following code:

u <- pnorm(z)


We now want to check the correlation again, which we can do using the following code:


cor(z,method='spearman')


Which gives the following result:

> cor(z,method='spearman')
          [,1]      [,2]
[1,] 1.0000000 0.3787886
[2,] 0.3787886 1.0000000


​Here is the Psych summary again:

Picture

u is now marginally uniform (hence the name). We can see this by looking at the Scatterplot and marginal pdfs above.

We also see that the correlation has dropped to 0.379, down from 0.4 at step 1. The Pearson correlation measures the linear correlation between two random variables. We generated normal random variables which had the required correlation, but then we applied a non-linear (Gaussian CDF) transformation. This non-linear step is the source of the dropped correlation in our algorithm.

We can also retest Kendall's Tau, and Spearman's at this point using the following code:

cor(u,method='spearman')

cor(u,method='kendall')


This gives us the following result:

> cor(u,method='spearman')
          [,1]      [,2]
[1,] 1.0000000 0.3781471
[2,] 0.3781471 1.0000000

> cor(u,method='kendall')
          [,1]      [,2]
[1,] 1.0000000 0.2587187
[2,] 0.2587187 1.0000000


Interestingly, these values are essentially unchanged from above - i.e. we have preserved these measures of correlation between step 1 and step 2. It's only the Pearson correlation (which is a measure of linear correlation) which has not been preserved.

Let's now apply the step 3, and once again retest our three correlations. The code to carry out step 3 is below:


x1 <- qgamma(u[,1],shape=2,scale=1)
x2 <- qbeta(u[,2],2,2)

df <- cbind(x1,x2)
pairs.panels(df)

The summary for step 3 looks like the following.
Picture

This is the end goal of our method. We see that our two marginal distributions have the required distribution, and we have a correlation between them of 0.37. Let's recheck our three measures of correlation.
​
cor(df,method='pearson')

cor(df,meth='spearman')

cor(df,method='kendall')

> cor(df,method='pearson')
          x1        x2
x1 1.0000000 0.3666192
x2 0.3666192 1.0000000

> cor(df,meth='spearman')
          x1        x2
x1 1.0000000 0.3781471
x2 0.3781471 1.0000000

> cor(df,method='kendall')
          x1        x2
x1 1.0000000 0.2587187
x2 0.2587187 1.0000000

So the Pearson has reduced again at this step, but the Spearman and Kendall's Tau are once again the same.

Does this matter?

This does matter. Let's suppose you are carrying out capital modelling and using this method to correlate your risk sources. You would then be underestimating the correlation between random variables, and therefore potentially underestimating the risk you are modelling.


Is this just because we are using a Gaussian Copula? No, this is the case for all Copulas. Is there anything you can do about it? Yes, one solution is to just increase the input correlation by a small amount, until we get the output we want.

A more elegant solution would be to build this scaling into the method. The amount of correlation lost at the second step is dependent just on the input value selected, so we could pre-compute a table of input and output correlations, and then based on the desired output, we would be able to look up the exact input value to use.   
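As a rough sketch of that last idea (this is my own illustration rather than anything from the original modelling), we can search numerically for the input correlation which delivers a target output Pearson correlation for the gamma/beta marginals used above:

library(MASS)

output_pearson <- function(rho_in, n = 100000) {
  set.seed(100)                                   # keep the search deterministic
  sigma <- matrix(c(1, rho_in, rho_in, 1), nrow = 2)
  z  <- mvrnorm(n, mu = c(0, 0), Sigma = sigma, empirical = TRUE)
  u  <- pnorm(z)                                  # step 2 - uniform marginals
  x1 <- qgamma(u[, 1], shape = 2, scale = 1)      # step 3 - apply the marginals
  x2 <- qbeta(u[, 2], 2, 2)
  cor(x1, x2)                                     # Pearson correlation of the output
}

# Input correlation needed for an output Pearson correlation of 0.4;
# it comes out somewhat above 0.4, reflecting the attenuation discussed above.
uniroot(function(r) output_pearson(r) - 0.4, interval = c(0.3, 0.7))$root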

Pricing a Reinstatement Premium Protection Cover (RPP)

19/2/2018

 

It is quite simple to calculate the Reinstatement Premium resulting from a loss to an Excess of Loss contract. Therefore, it seems reasonable that we should be able to come up with a simple formula relating the price charged for the Excess of Loss contract to the price charged for the Reinstatement Premium Protection (RPP) cover.

I was in a meeting last week with two brokers who were trying to do just this. We had come up with an indicative price for an XoL layer and we were trying to use this to price the equivalent RPP cover. At the time I didn't have an easy way to do it, and when I did a quick Google search nothing came up. Upon further reflection, there are a couple of easy approximate methods we can use. 

Below I discuss three different methods which can be used to price an RPP cover, two of which do not require any stochastic modelling.

Let's quickly review a few definitions, feel free to skip this section if you just want the formula.

What is a Reinstatement Premium?

A majority of Excess of Loss contracts will have some form of reinstatement premium. This is a payment from the Insurer to the Reinsurer to reinstate the protection in the event some of the limit is eroded. In the London market, most contracts will have either $1$, $2$, or $3$ reinstatements, and generally these will be payable at $100 \%$. From the point of view of the insurer, this additional payment comes at the worst possible time - the Reinsured is being asked to fork over another large premium to the Reinsurer just after having suffered a loss.

What is a Reinstatement Premium Protection (RPP)?

In response, Reinsurers developed a product called a Reinstatement Premium Protection cover (RPP cover). This cover pays the Reinsured's Reinstatement Premium for them, giving the insurer further indemnification in the event of a loss. Here's an example of how it works in practice:

Let's suppose we are considering a $5m$ xs $5m$ Excess of Loss contract, there is one reinstatement at $100 \%$ (written $1$ @ $100 \%$), and the Rate on Line is $25 \%$. The Rate on Line is just the Premium divided by the Limit. So the Premium can be found by multiplying the Limit and the RoL:

$$5m* 25 \% = 1.25m$$

So we see that the Insurer will have to pay the Reinsurer $1.25m$ at the start of the contract. Now let's suppose there is a loss of $7m$. The Insurer will recover $2m$ from the Reinsurer, but they will also have to make a payment of $\frac {2m}  {5m} * (5m * 25 \% ) = 2m * 25 \% = 0.5m$ to reinstate the cover. So the Insurer will actually have to pay out $5.5m$ in total. The RPP cover, if purchased by the insurer, would pay the additional $0.5m$ on behalf of the insurer, in exchange for a further upfront premium.
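As a minimal sketch, here is the same example in R (names are my own, figures as above):

limit  <- 5e6
excess <- 5e6
rol    <- 0.25
dp     <- limit * rol                           # deposit premium = 1.25m

loss     <- 7e6
recovery <- min(limit, max(0, loss - excess))   # 2m recovered from the layer
reinst   <- (recovery / limit) * dp             # 0.5m reinstatement premium
loss - recovery + reinst                        # 5.5m net outgo for the insurer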

Now that we know how it works, how would we price the RPP cover?

Three methods for pricing an RPP cover

Method 1 - Full stochastic model


If we have priced the original Excess of Loss layer ourselves using a Monte Carlo model, then it should be relatively straightforward to price the RPP cover. We can just look at the expected Reinstatement Premiums, and apply a suitable loading for profit and expenses. This loading will probably be broadly in line with the loading applied to the expected losses to the Excess of Loss layer, but should account for the fact that the writer of the RPP cover will not receive any form of Reinstatement Premium for their Reinsurance.
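To make that concrete, here is a rough sketch of what such a calculation might look like in R. This is not the model from the meeting - the frequency and severity assumptions are purely illustrative - it just shows the mechanics of reading the expected reinstatement premium off a set of simulated losses, assuming $1$ reinstatement at $100 \%$:

set.seed(1)
n_sims   <- 10000
limit    <- 5e6; excess <- 5e6; rol <- 0.25
dp       <- limit * rol
n_reinst <- 1

expected_reinst <- mean(replicate(n_sims, {
  n_losses <- rpois(1, 0.3)                              # illustrative frequency
  losses   <- rlnorm(n_losses, meanlog = 15, sdlog = 1)  # illustrative severity
  ceded    <- pmin(limit, pmax(0, losses - excess))
  total_ceded <- min(sum(ceded), limit * (1 + n_reinst)) # aggregate limit cap
  min(total_ceded, limit * n_reinst) / limit * dp        # pro-rata reinstatement premium
}))

expected_reinst * 1.075   # load for profit and expenses to get an indicative price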

What if we do not have a stochastic model set up to price the Excess of Loss layer? What if all we know is the price being charged for the Excess of Loss layer?

Method 2 - Simple formula

Here is a simple formula we can use which gives the price to charge for an RPP, based on just the deposit premium and the Rate on Line, full derivation below:

$$RPP = DP * ROL $$

When attempting to price the RPP last week, I did not have a stochastic model set up. We had come up with the pricing just based off the burning cost and a couple of 'commercial adjustments'. The brokers wanted to use this to come up with the price for the RPP cover. The two should be related, as they pay out dependent on the same underlying losses. So what can we say?

If we denote the Expected Losses to the layer by $EL$, then the Expected Reinstatement Premium should be:
$$EL * ROL $$
To see this is the case, I used the following reasoning; if we had losses in one year equal to the $EL$ (I'm talking about actual losses, not expected losses here), then the Reinstatement Premium for that year would be the proportion of the layer which had been exhausted $\frac {EL}  {Limit} $ multiplied by the Deposit Premium $Limit * ROL$ i.e.:

$$ RPP = \frac{EL}  {Limit} * Limit * ROL  = EL * ROL$$

Great! So we have our formula right? The issue now is that we don't know what the $EL$ is. We do however know the $ROL$, does this help?

If we let $DP$ denote the deposit premium, which is the amount we initially pay for the Excess of Loss layer and we assume that we are dealing with a working layer, then we can assume that:

$$DP = EL * (1 + \text{ Profit and Expense Loading } ) $$

Plugging this into our formula above, we can then conclude that the expected Reinstatement Premiums will be:

$$\frac {DP} { \text{ Profit and Expense Loading } } * ROL $$

In order to turn this into a price (which we will denote $RPP$) rather than an expected loss, we then need to load our formula for profit and expenses i.e.

$$RPP = \frac {DP} {\text{ Profit and Expense Loading }} * ROL * ( \text{ Profit and Expense Loading } ) $$

Which with cancellation gives us:

$$RPP = DP * ROL $$


Which is our first very simple formula for the price that should be charged for an RPP. Was there anything we missed out though in our analysis? 
​
Method 3 - A more complicated formula:

There is one subtlety we glossed over in order to get our simple formula. The writer of the Excess of Loss layer will also receive the Reinstatement Premiums during the course of the contract. The writer of the RPP cover on the other hand, will not receive any reinstatement premiums (or anything equivalent to a reinstatement premium). Therefore, when comparing the Premium charged for an Excess of Loss layer against the Premium charged for the equivalent RPP layer, we should actually consider the total expected Premium for the Excess of Loss Layer rather than just the Deposit Premium.

What will the additional premium be? We already have a formula for the expected Reinstatement premium:

$$EL * ROL $$

Therefore the total expected premium for the Excess of Loss Layer is the Deposit Premium plus the additional Premium:

$$ DP + EL * ROL $$

This total expected premium is charged in exchange for an expected loss of $EL$.

So at this point we know the Total Expected Premium for the Excess of Loss contract, and we can relate the expected loss to the Excess of Loss layer to the Expected Loss to the RPP contract. 

i.e. For an expected loss to the RPP of $EL * ROL$, we would actually expect an equivalent premium for the RPP to be:


$$ RPP =  (DP + EL * ROL) * ROL $$

This formula is already loaded for Profit and Expenses, as it is based on the total premium charged for the Excess of Loss contract. It does however still contain the $EL$ as one of its terms which we do not know.

We have two choices at this point. We can either come up with an assumption for the profit and expense loading (which in this hard market might be as little as $5 \% - 10 \%$), and then replace $EL$ with a scaled down $DP$:

$$RPP =  DP * \left( 1 + \frac{ROL}{1.075} \right) * ROL $$

Or we could simply replace the $EL$ with the $DP$, which is partially justified by the fact that the $EL$ is only used to multiply the $ROL$, and will therefore have a relatively small impact on the result. Giving us the following formula:

$$RPP =  DP ( 1 + ROL) * ROL $$
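As a quick sketch, here are the two formulas side by side in R (the deposit premium and RoL are the illustrative figures from the earlier $5m$ xs $5m$ example, and the $7.5 \%$ loading is the assumption used above):

rpp_simple <- function(dp, rol) dp * rol
rpp_full   <- function(dp, rol, loading = 0.075) {
  el <- dp / (1 + loading)       # back the expected loss out of the deposit premium
  (dp + el * rol) * rol          # total expected premium for the XoL, times RoL
}

dp  <- 1.25e6
rol <- 0.25
rpp_simple(dp, rol)   # 312,500
rpp_full(dp, rol)     # ~385,000 - slightly higher, as expected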

Which of the three methods is the best?

The full stochastic model is always going to be the most accurate in my opinion. If we do not have access to one though, then out of the two formulas, the more complicated formula we derived should be more accurate (by which I mean more actuarially correct). If I was doing this in practice, I would probably calculate both, to generate some sort of range, but tend towards the second formula.

That being said, when I compared the prices that the Brokers had come up with - based on what they thought they could actually place in the market - against my formulas, I found that the simple version of the formula was actually closer to the Brokers' estimate of how much these contracts could be placed for in the market. Since the simple formula always comes out with a lower price than the more complicated formula, this suggests that there is a tendency for RPPs to be under-priced in the market.


This systematic under-pricing may be driven by commercial considerations rather than faulty reasoning on the part of market participants. According to the Broker I was discussing these contracts with, a common reason for placing an RPP is to give a Reinsurer who does not currently have a line on the underlying Excess of Loss layer, but who would like to start writing it, a chance to have an involvement in the same risk without diminishing the signed lines of the existing markets. So let's say that Reinsurer A writes $100 \%$ of the Excess of Loss contract, and Reinsurer B would like to take a line on the contract. The only way to give them a line on the Excess of Loss contract is to reduce the line that Reinsurer A has. The insurer may not wish to do this if Reinsurer A is keen to maintain their line. So the Insurer may allow Reinsurer B to write the RPP cover instead, and leave Reinsurer A with $100 \%$ of the Excess of Loss contract. This commercial factor may be one of the reasons why writers of an RPP have traditionally been inclined to give favourable terms relative to the Excess of Loss layer - both to encourage the insurer to allow them onto the main programme, and to encourage them to allow them to write the RPP cover at all.

Moral Hazard

One point that is quite interesting to note about how these deals are structured is that RPP covers can have quite a significant moral hazard effect on the Insurer. The existence of Reinstatement Premiums is at least partially a mechanism to prevent moral hazard on the part of the Insurer. To see why this is the case, let's go back to our example of the $5m$ xs $5m$ layer. An insurer who purchases this layer is still exposed to the first $5m$ of any loss, but is indemnified for the portion of the loss above $5m$, up to a limit of $5m$. If the insurer is presented with two risks which are seeking insurance - one with a total sum insured of $10m$, and another with a total sum insured of $6m$ - the net retained exposure is the same for both risks from the point of view of the insurer. By including a reinstatement premium as part of the Excess of Loss layer, and therefore ensuring that the insurer has to make a payment any time a loss is ceded to the layer, the reinsurer ensures that the insurer keeps a financial incentive to avoid losses in this range.

By purchasing an RPP cover, the insurer is removing their financial interest in losses which are ceded to the layer. There is an interesting conflict of interest in that the RPP cover will almost always be written by a different reinsurer to the Excess of Loss layer. The Reinsurer writing the RPP cover is therefore increasing the moral hazard risk for whichever Reinsurer has written the Excess of Loss layer - which will almost always be one of their competitors!

Working Layers and unlimited Reinstatements

Another point to note is that this pricing analysis makes a couple of implicit assumptions. The first is that there is a sensible relationship between the expected loss to the layer and the premium charged for the layer. This will normally only be the case for 'working layers' - layers to which a reasonable amount of loss activity is expected. If we are dealing with clash or other higher layers, then the pricing of these layers will be more heavily driven by considerations beyond the expected loss to the layer. These might be capital considerations on the part of the Reinsurer, or other commercial considerations.

Another implicit assumption in this analysis is that the reinstatements offered are unlimited. If this is not the case, then the statement that the expected reinstatement premium is $EL * ROL$ no longer holds. If we have limited reinstatements (which is the case in practice most of the time) then we would expect the expected reinstatement premium to be less than or equal to this.
​

Compound Poisson Loss Model in VBA

13/12/2017

 

I was attempting to set up a Loss Model in VBA at work yesterday. The model was a Compound-Poisson Frequency-Severity model, where the number of events is simulated from a Poisson distribution, and the Severity of events is simulated from a Severity curve.

There are a couple of issues you naturally come across when writing this kind of model in VBA. Firstly, the inbuilt array methods are pretty useless, in particular dynamically resizing an array is not easy, and therefore when initialising each array it's easier to come up with an upper bound on the size of the array at the beginning of the program and then not have to amend the array size later on. Secondly, Excel has quite a low memory limit compared to the total available memory. This is made worse by the fact that we are still using 32-bit Office on most of our computers (for compatibility reasons) which has even lower limits. This memory limit is the reason we've all seen the annoying 'Out of Memory' error, forcing you to close Excel completely and reopen it in order to run a macro.

The output of the VBA model was going to be a YLT (Yearly Loss Table), which could then easily be pasted into another model. Here is an example of a YLT with some made up numbers to give you an idea:
Picture

It is much quicker in VBA to create the entire YLT in VBA and then paste it to Excel at the end, rather than pasting one row at a time to Excel. Especially since we would normally run between 10,000 and 50,000 simulations when carrying out a Monte Carlo Simulation. We therefore need to create and store an array when running the program with enough rows for the total number of losses across all simulations, but we won't know how many losses  we will have until we actually simulate them.
​
And this is where we come across our main problem. We need to come up with an upper bound for the size of this array due to the issues with dynamically resizing arrays, but since this is going to be a massive array, we want the upper bound to be as small as possible so as to reduce the chance of a memory overflow error.

Upper Bound

What we need then is an upper bound on the total number of losses across all the simulations years. Let us denote our Frequency Distribution by $N_i$, and the number of Simulations by $n$. We know that $N_i$ ~ $ Poi( \lambda ) \: \forall i$. 

Lets denote the total size of the YLT array by $T$. We know that $T$ is going to be:

$$T = \sum_{1}^{n} N_i$$
​We now use the result that the sum of two independent Poisson distributions is also a Poisson distribution with parameter equal to the sum of the two parameters. That is, if $X$ ~ $Poi( \lambda)$ , and $Y$ ~ $Poi( \mu)$, then $X + Y$ ~ $Poi( \lambda + \mu)$. By induction this result can then be extended to any finite sum of independent Poisson Distributions. Allowing us to rewrite $T$ as:

$$ T \sim Poi( n \lambda ) $$

We now use another result: a Poisson distribution approaches a Normal distribution as $ \lambda \to \infty $. In this case, $ n \lambda $ is certainly large, as $n$ is going to be set to at least $10,000$. We can therefore say that:


$$ T \sim N ( n \lambda , n \lambda ) $$
Remember that $T$ is the distribution of the total number of losses in the YLT, and that we are interested in coming up with an upper bound for $T$.

Let's say we are willing to accept a probabilistic upper bound. If our upper bound works 1 in 1,000,000 times, then we are happy to base our program on it. If this were the case, even if we had a team of 20 people, running the program 10 times a day each, the probability of the program failing even once in an entire year is only 4%.

I then calculated the $Z$ values for a range of probabilities, where $Z$ is the unit Normal Distribution, in particular, I included the 1 in 1,000,000 Z value.
Picture

We then need to convert our requirement on $T$ to an equivalent requirement on $Z$.

$$ P ( T \leq x ) = p $$ 

If we now adjust $T$ so that it can be replaced with  a standard Normal Distribution, we get:

$$P \left( \frac {T - n \lambda} { \sqrt{ n \lambda } } \leq \frac {x - n \lambda} { \sqrt{ n \lambda } } \right) = p $$

Now replacing the left hand side with $Z$ gives:

$$P \left( Z \leq \frac {x - n \lambda} { \sqrt{ n \lambda } } \right) = p $$

Hence, reading $Z$ as the value from the table corresponding to our chosen probability $p$, our upper bound is given by:

$$T \lessapprox Z \sqrt{n \lambda} + n \lambda $$

Dividing through by $n \lambda $ converts this to an upper bound on the factor above the mean of the distribution, giving us the following:

$$ \frac{T}{n \lambda} \lessapprox Z \frac {1} { \sqrt{n \lambda}} + 1 $$

We can see that given $n \lambda$ is expected to be very large and the $Z$ values relatively modest, this bound is actually very tight.

For example, if we assume that $n = 50,000$, and $\lambda = 3$, then we have the following bounds:
[Table: upper bound factors for n = 50,000 and λ = 3]


So we see that even at the 1 in 1,000,000 level, we only need to set the YLT array size to be 1.2% above the mean in order to not have any overflow errors on our array.
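Putting the bound back into VBA terms, a function along these lines could be used to size the array up front - the function name is mine, and the Z value of roughly 4.75 for the 1 in 1,000,000 level is the figure from the table above:

Function YLTUpperBound(nSims As Long, lambda As Double, z As Double) As Long
    ' Mean of the total loss count plus z standard deviations, using the Normal
    ' approximation T ~ N(n * lambda, n * lambda), rounded up to a whole number.
    Dim mean As Double
    mean = nSims * lambda
    YLTUpperBound = -Int(-(mean + z * Sqr(mean)))   ' -Int(-x) rounds up to the next integer
End Function

With $n = 50,000$ and $\lambda = 3$, YLTUpperBound(50000, 3, 4.75) gives roughly 151,840 rows, about 1.2% above the mean of 150,000 - consistent with the figure above.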


References
(1) Proof that the sum of two independent Poisson Distributions is another Poisson Distribution
math.stackexchange.com/questions/221078/poisson-distribution-of-sum-of-two-random-independent-variables-x-y
(2) Normal Approximation to the Poisson Distribution.
stats.stackexchange.com/questions/83283/normal-approximation-to-the-poisson-distribution

Combining two Rate Change Indices

12/12/2017

 

I came across this problem at work last week, and I don't think there's anything in the notes on how to deal with it, so I have written up an explanation of how I solved it.

Suppose we are carrying out a rating exercise on a number of lines of business. For two of the lines of business, Marine Hull and Marine Cargo, we have very few claims so we have decided to group these two lines together for the purpose of the Experience Rating exercise. 

We have been supplied with separate rate change indices for all the lines of business. How do we combine the Hull and Cargo Rate Change indices?

Firstly, let's review a few basics:

What is a rate change?

It is a measure of the change in price of insurance from one year to the next when all else is constant.

Why do we need a rate change?

We will often use historic premium as an exposure measure when experience rating a book of business. If we do not adjust for the change in the price of insurance then we may under- or over-estimate the historic loss rates.

What do we mean when we say the rate change 'for 2018'?
​

This answer is debatable, and if there is any uncertainty it is always better to check with whoever compiled the data, but generally the '2018' rate change means the 2017 - 2018 rate change.

How do we combine the rate changes from two lines of business?

Let's work through an example to show how to do this. I am going to be using some data I made up. These figures were generated using the random number generator in Excel, so please don't use them for any actual work!

Suppose we are given the following Premium estimates:
[Table: made-up Premium estimates for Marine Hull and Marine Cargo by year]

And that we are also given the following rate changes:

[Table: made-up Rate Changes for Marine Hull and Marine Cargo by year]

Then first we need to adjust the rate changes so that they are in an index. We do this by setting the index for 2018 to be equal to 100%, and then recursively calculating the previous years using:

$${Index}_{n-1} = {Index}_n \, (1 + {(Rate \: Change )}_n ) $$
[Table: Rate Change Indices for Marine Hull and Marine Cargo]

We can then calculate our On-Levelled Premiums, by simply multiplying our Premium figures by the Rate Change Index.

$${ ( On \: Levelled \: Premium ) }_n = ({Premium}_n) \: ({Index}_n) $$
[Table: On-Levelled Premiums for Marine Hull and Marine Cargo]

Using the combined on-levelled premiums, we can then calculate our combined Rate Index using the following formula:

$${Index}_n = \frac{ { ( On \: Levelled \: Premium ) }_n } { {Premium}_n }  $$
And our combined Rate Change using the following formula:

$${(Rate \: Change)}_n = \frac{ {Index}_{n-1} } { {Index}_n } - 1  $$
[Table: combined Rate Change Index and combined Rate Changes]
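To make the mechanics concrete, here is a small VBA sketch of the whole calculation - the premiums and rate changes below are made up (and are not the figures used in the tables above), and it assumes the default zero-based Array():

Sub CombineRateIndicesSketch()
    ' Two classes over 2016-2018; rate changes are the change *into* each year.
    Dim premHull As Variant, premCargo As Variant
    Dim rcHull As Variant, rcCargo As Variant
    premHull = Array(1000#, 1100#, 1200#)     ' 2016, 2017, 2018 premium
    premCargo = Array(800#, 850#, 900#)
    rcHull = Array(0#, 0.05, -0.02)           ' rate changes for 2017 and 2018 (2016 entry unused)
    rcCargo = Array(0#, 0.03, 0.04)

    Dim n As Long, i As Long
    n = UBound(premHull)
    Dim idxHull() As Double, idxCargo() As Double, idxComb() As Double
    ReDim idxHull(0 To n): ReDim idxCargo(0 To n): ReDim idxComb(0 To n)

    ' Build each index backwards from the latest year (= 100%)
    idxHull(n) = 1: idxCargo(n) = 1
    For i = n - 1 To 0 Step -1
        idxHull(i) = idxHull(i + 1) * (1 + rcHull(i + 1))
        idxCargo(i) = idxCargo(i + 1) * (1 + rcCargo(i + 1))
    Next i

    ' On-level each class, combine, and back out the combined index and rate changes
    Dim olp As Double
    For i = 0 To n
        olp = premHull(i) * idxHull(i) + premCargo(i) * idxCargo(i)
        idxComb(i) = olp / (premHull(i) + premCargo(i))
    Next i
    For i = 1 To n
        Debug.Print 2016 + i; "combined rate change:"; Format(idxComb(i - 1) / idxComb(i) - 1, "0.00%")
    Next i
End Sub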

Most of the time we will only be interested in the combined On-Levelled Premium. The combined Rate Change is only a means to an end to obtain the combined On-Levelled Premium. If we have a model though where we need to input a combined Rate Change in order to group the two classes of business, then the method above can be used to obtain the required Rate Changes.

Types of Excess of Loss Reinsurance

24/10/2017

 
One thing that I got slightly confused about when I started to work at my current job was the difference between the various types of Excess of Loss Reinsurance. The descriptions given in the IFoA notes, those given on Wikipedia, and the use of the terms in the London market are all different.


The underlying contracts are all the same, but different groups have different names for them. I thought I would make a post explaining the differences.

Here are the names of the sub-types of Excess of Loss Reinsurance that are used in the London Market:
  • Risk Excess
  • Excess of Loss
  • Cat XL
  • Agg XL

(The descriptions given below just describe the basic functionality of the contracts. There will be a lot more detail in the contracts. It's always a good idea to read the slip if possible to properly understand the contract. Also, bear in mind that some people in the London Market might use these terms differently. This just represents what I would understand if someone said one of these terms to me in everyday work.)

Risk Excess (RXS)

The limit and attachment for this contract apply individually per risk, rather than in aggregate per loss (hence the name Risk Excess). So if our RXS is 5m xs 1m and we have a loss involving two risks, each of which is individually a 3m loss, then the total recovery will be 4m = (3m - 1m) + (3m - 1m).

Excess of Loss (XoL)

The limit and attachment for this contract apply in aggregate per loss, rather than individually per risk. So if our XoL is 5m xs 1m and we have a loss involving two risks, each of which is individually a 3m loss, then the total recovery will be 5m = (6m - 1m).

Catastrophe XL (Cat XL)

The limit and attachment for this contract apply in aggregate across losses to all policies covered by the contract over the duration of a Catastrophe. So if our Cat XL is 500m xs 100m and there is a Hurricane which causes insured losses of 300m, then the total recovery will be 200m = (300m - 100m).

Aggregate XL (Agg XL)

The limit and attachment for this contract apply in aggregate across losses to all policies covered by the contract over a period - normally all policies in a single class over a year. So if our Agg XL is 50m xs 10m, covers an Insurer's Aviation account, and the total Aviation losses for the year are 30m, then the total recovery will be 20m = (30m - 10m).
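All four contract types apply the same basic layer calculation - they just apply it to different quantities (per risk, per loss, per event, or per year). A minimal sketch of that calculation in VBA:

Function LayerRecovery(loss As Double, attachment As Double, limit As Double) As Double
    ' Recovery to a 'limit xs attachment' layer: the part of the loss above
    ' the attachment, floored at zero and capped at the limit.
    LayerRecovery = loss - attachment
    If LayerRecovery < 0 Then LayerRecovery = 0
    If LayerRecovery > limit Then LayerRecovery = limit
End Function

For the 5m xs 1m examples above, applying this per risk gives LayerRecovery(3000000, 1000000, 5000000) = 2m for each risk, so 4m in total for the Risk Excess, whereas applying it to the aggregated 6m loss gives LayerRecovery(6000000, 1000000, 5000000) = 5m for the XoL.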





​The IFoA notes

The IFoA notes distinguish between three types of Excess of Loss contract.
  • Risk Excess of Loss
  • Aggregate Excess of Loss
  • Catastrophe Excess of Loss

The definitions for Risk Excess of Loss and Catastrophe Excess of Loss are basically the same as those commonly used in the London Market. The IFoA definition of Aggregate Excess of Loss is different though. The Institute defines an Aggregate Excess of Loss Contract to be an Excess of Loss contract which aggregates losses across multiple risks in some way. This could be across all risks involved in one event, across all policies for a given year, across all losses in a given sub-class, etc. So our standard Excess of Loss contract which we defined above, which aggregates across all risks involved in a single loss, would be considered an example of an aggregate contract according to the IFoA definition! Don't go around Lloyd's calling it an Aggregate Excess of Loss though, people will get very confused.

The IFoA definitions are more logical than the ones used in the London Market, where there is an arbitrary distinction between two types of aggregation. Our standard XoL contract does aggregate losses, so why not call it an Agg XoL? The reason we call it an XoL rather than an Agg XoL is simply that everyone else does - which, when talking about definitions, is a pretty compelling reason, even if it is not the most logical name.


Wikipedia

The Wikipedia page for Reinsurance (link below) distinguishes between three types of Excess of Loss contract.
en.wikipedia.org/wiki/Reinsurance
  • Per Risk Excess of Loss
  • Catastrophe Excess of Loss
  • Aggregate Excess of Loss

Per Risk Excess of Loss is once again defined consistently, and the Aggregate Excess of Loss is also consistent with the common usage in the London Market. However, in this case, our standard Excess of Loss contract now falls under the definition of a Catastrophe Excess of Loss Layer. The Wiki article defines a Cat Excess of Loss contract to be one that aggregates across an Occurrence or Event - where event can either be a catastrophe, such as a Hurricane, or a single loss involving multiple risks.

Summary

You shouldn't get caught up in who is right or wrong, as long as you are clear which definitions you are using. Fundamentally we are talking about the same underlying contracts; it's all just semantics. The definitions that are commonly used in the London Market are not written down online anywhere that I could see, which caused me some confusion when I first noticed the inconsistency, did some googling, and nothing came up. Hopefully this helps clarify the situation for anyone else who gets confused in the future.

Ogden Rates - Why have they gone up so much and why is everyone up in arms about it?

4/3/2017

 


What even are Ogden Rates anyway?

The Ogden tables are tables of annuity factors, published by the Government Actuary's Department, which are used to calculate court awards for claimants who have suffered life-changing injuries or fatal accidents and are eligible for a payout from an insurance policy.

For example, consider a 50 year old, male, primary school teacher who suffers a car accident which means that they will not be able to work for the rest of their life. The Ogden Tables will be used to calculate how much they should be paid now to compensate them for their loss of earnings. Suppose the teacher is earning a salary of £33,000 when they have the accident; then, under the Ogden Rates prior to March 2017, the teacher would be paid a lump sum of £33,000 × 20.53 = £677,490, where 20.53 is the relevant factor from the tables.

How did the Government Actuary's Department come up with these factors?

The factors in the table are based on two main pieces of information: how long the person is expected to live, and how much money they can earn by investing the lump sum once they are given it (the discount rate). It's this second part which has caused all the problems between the Ministry of Justice and the Insurance Industry.
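To get a feel for how sensitive these factors are to the discount rate, here is a heavily simplified sketch in VBA: it treats the award as a level annuity-certain over the period of lost earnings and ignores the mortality, contingency and tax adjustments that go into the real Ogden tables, so the numbers are illustrative only.

Function SimpleMultiplier(realRate As Double, years As Double) As Double
    ' Present value of 1 per year, paid for 'years' years, at a real discount rate
    If realRate = 0 Then
        SimpleMultiplier = years
    Else
        SimpleMultiplier = (1 - (1 + realRate) ^ -years) / realRate
    End If
End Function

For example, over a 17 year period of lost earnings, SimpleMultiplier(0.025, 17) gives roughly 13.7, whereas SimpleMultiplier(-0.0075, 17) gives roughly 18.2 - which gives a sense of why moving the discount rate from +2.5% to -0.75% has such a large effect on the awards.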

The discount rate should be selected to match the return the claimant can generate on their assets. For example, if the claimant puts all their money in shares then, on average, they will generate much more income than if they put the lump sum in a savings account. So what should we assume our school teacher will invest their lump sum in? Since the school teacher will not be able to work again, and will therefore need to live off this money for the rest of their life, they will not want to risk losing it all by investing in something too risky. In technical terms, we would say that the claimant is a risk averse investor.

In order to mimic the investment style of this risk averse investor, when the Ogden tables were first set it was decided to assume that the investor would put all their money in index-linked bonds. There are a couple of reasons to assume this: many risk averse institutional investors do purchase a lot of index-linked bonds, and the average yield on these bonds is readily available, as it is already published by the UK DMO.

At the time the tables were set up, this seemed like a great idea, but recently it has made a lot of people very angry and been widely regarded as a bad move.

What are Index-linked bonds again?

In 1981 the UK government started issuing a series of gilts which, instead of paying a fixed coupon, paid a floating coupon set at a fixed percentage above the rate of inflation.

The UK Debt Management Office is responsible for issuing these bonds, and the following website has details of the bonds that are currently in issue. It's quite interesting to see how it all works:

www.dmo.gov.uk/reportView.aspx?rptCode=D1D&rptName=50545854&reportpage=D1D

The basic principle is that if you purchase a bond that pays a 2% coupon, then when inflation is 3% it would pay 3% + 2%, and when inflation is 5% it would pay 5% + 2%. Because these bonds always give a fixed real return (2% in this case), institutional investors really like them, and because there is no inflation risk, index-linked bonds on average cost more than fixed coupon bonds once you account for the effects of inflation. Pension Schemes in particular purchase a lot of these bonds.


Why do Pension Schemes like these bonds so much?

Most pensions are increased annually in line with inflation, so Pension Schemes like to hold assets that also go up in line with inflation every year. In order to get real returns on their investments, Pension Schemes traditionally held a mix of shares and index-linked bonds: the shares gave better returns, but the bonds were safer.

This all started to go very wrong after the financial crisis. A huge drop in interest rates and investment returns, combined with soaring life expectancy, led to more and more pension schemes winding up, and left many of the remaining ones with funding issues. As schemes started winding up they became more and more risk averse, and started to move away from more volatile assets like shares and towards index-linked bonds instead.

This table from the PPF's Purple Book shows the move away from shares into bonds.
[Table: PPF Purple Book - pension scheme asset allocations by year]
We can see that back in 2006, prior to the financial crisis, Pension Schemes were on average holding around 61% of their assets in equities. By 2014 this percentage had dropped to 33%, and the slack had largely been taken up by bonds.

Pension Schemes like these assets so much, in fact, that Schroders estimated that 80% of the long-term index-linked gilt market is held by private sector pension schemes, as the following chart shows.

Source:

www.schroders.co.uk/en/SysGlobalAssets/schroders/sites/ukpensions/pdfs/2016-06-pension-schemes-and-index-linked-gilts.pdf​
​
[Chart: Schroders - share of the index-linked gilt market held by pension schemes]

Does it matter that Pension Schemes own such a high proportion of these gilts?

The problem with the index-linked gilt market being dominated by Pension Schemes is one of supply and demand. The demand for these bonds from Pension Schemes far outweighs the supply. Another chart from Schroders estimates that the demand for the bonds is almost 5 times the supply.

Source:
www.schroders.co.uk/en/SysGlobalAssets/schroders/sites/ukpensions/pdfs/2016-06-pension-schemes-and-index-linked-gilts.pdf


[Chart: Schroders - estimated demand vs supply for index-linked gilts]
As you might expect with such a disparity between supply and demand, Pension funds have been chasing these assets so much that yields have actually become negative. This means that Pension Schemes are, on average, paying the government to hold their money for them, in return for it being protected against inflation.

Here is a chart showing the yield over the last 5 years for a 1.25% 2032 index-linked gilt.

Source:
www.fixedincomeinvestor.co.uk/x/bondchart.html?id=3473&stash=F67129F0&groupid=3530

[Chart: yield on the 1.25% 2032 index-linked gilt over the last 5 years]

So what does this have to do with Ogden Rates?

So now we are in a position to link this back to the recent change in the Ogden Rate.

Because the yield on index-linked bonds has traditionally been used as a proxy for a risk-free real return, the yield is still used to decide the discount rate that should be used to calculate court award payouts.

Because Pension Schemes have been driving up the price of these bonds so much, we have the bizarre situation that the amount insurance companies have to pay out to claimants has suddenly jumped up considerably. In the case of a 20 year old female, for example, the amount that would be paid out has almost tripled. As these payouts are already considerable, the financial impact of this change has been massive.


So what should the Government do?

There is no easy answer: if the Government doesn't use the yield on index-linked gilts to calculate the Ogden rate then there is no obvious alternative. I think the most reasonable alternative would be to use a weighted average of the returns on the types of assets that an average claimant would actually hold. For example, we might assume the claimant is going to hold 50% of their lump sum in cash, 30% in shares, and 20% in bonds, and then calculate the weighted return on this portfolio.
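As a purely illustrative example of that calculation (the real return assumptions here are made up):

$$ 50\% \times 0.5\% + 30\% \times 4\% + 20\% \times 1\% = 1.65\% $$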

The issue with doing nothing is that the additional cost from these increased payouts will inevitably be passed on to policyholders through higher premiums. So ultimately there is an issue of fairness, whereby people who receive payouts are being paid a disproportionate amount of money, and this is being subsidised by other policyholders.