THE REINSURANCE ACTUARY

Nuclear Verdicts and shenanigans with graphs part 2

26/10/2023

 

I was thinking more about the post I made last week, and I realised there’s another feature of the graphs that is kind of interesting. None of the graphs adequately isolates what we in insurance would term ‘severity’ inflation. That is, the increase in the average individual verdict over time.

You might think that the bottom graph of the three, tracking the ‘Median Corporate Nuclear Verdict’, does this. If verdicts are increasing on average year by year due to social inflation, then surely the median nuclear verdict should increase as well, right?!?

Actually, the answer to this is no. Let's see why.

[Figure: chart showing the ‘Median Corporate Nuclear Verdict’ over time]

Let’s start by looking at exactly what the chart is showing. In this context, a ‘nuclear verdict’ is defined as a US corporate civil verdict that exceeds a fixed dollar value of $10m. You should hopefully be aware (see, for example, the paper by Brazauskas et al. (2009) [1]) that, as a measure of trend, examining the increase in the average claim in excess of a fixed dollar threshold does not accurately reflect the underlying trend. Depending on the exact shape of the distribution, it can be geared so as to show a ‘greater than underlying’ trend, or it can come out below the underlying trend (or even at 0). For example, if our underlying trend is 8%, then the increase in the average claim excess of $10m may present itself in the data as 12%, as 4%, or even as 0%. And this is not a ‘lack of data’ issue where the real answer would converge over time given more data; there is an inherent bias in the methodology.

The intuition behind the phenomenon is that we might have a whole bunch of losses just under the threshold which, when inflated, are pushed to just above the threshold. This adds a whole load of new, relatively small claims to the excess layer, which drags down the average of the losses above the threshold.
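To make this concrete, here is a quick toy example (mine, not from the original chart or paper) with three made-up losses, one sitting just below the $10m threshold. Even though every loss is inflated by 8%, the average of the losses above the threshold actually falls, because the loss that crosses the threshold is small relative to the existing excess losses:

import numpy as np

threshold = 10e6
losses_year_0 = np.array([9.5e6, 11e6, 15e6])   # one loss sits just below the threshold
losses_year_1 = losses_year_0 * 1.08            # every loss inflates by 8%

mean_xs_0 = losses_year_0[losses_year_0 >= threshold].mean()  # (11m + 15m) / 2 = 13.0m
mean_xs_1 = losses_year_1[losses_year_1 >= threshold].mean()  # (10.26m + 11.88m + 16.2m) / 3 = 12.78m

print(mean_xs_0, mean_xs_1)   # the average excess loss falls despite 8% underlying inflation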

To be exact, the paper by Brazauskas et al. referenced above shows that analysing the *mean* excess value does not accurately reflect trend, but our chart uses the *median* excess value. The obvious follow-up question is: does the median suffer from the same issue as the mean? Intuitively, the same argument relating to verdicts just under the threshold seems to apply, but let’s run a quick Python script to check.

The following script runs 50k Monte Carlo simulations. Each simulation generates loss values from a lognormal distribution for each year, inflates the simulated values by 5% pa, and then restricts itself to just the 'nuclear' verdicts, i.e. those above the $10m threshold. Finally, we take the median of these nuclear verdicts in each year and assess the trend in the median over time. Here's our Python code:
In [1]:
import numpy as np
import pandas as pd
from math import exp, log, sqrt
from scipy.stats import lognorm, poisson, linregress
In [2]:
# Parameters of the underlying (ground-up) severity distribution and claim process
Distmean = 10000000.0          # mean individual verdict of $10m
DistStdDev = Distmean*1.5      # standard deviation of $15m
AverageFreq = 100              # average number of verdicts per year
years = 10
ExposureGrowth = 0.0           # no growth in claim frequency over time

# Convert the mean and standard deviation into lognormal mu/sigma parameters
Mu = log(Distmean/(sqrt(1+DistStdDev**2/Distmean**2)))
Sigma = sqrt(log(1+DistStdDev**2/Distmean**2))

LLThreshold = 1e7              # 'nuclear' verdict threshold of $10m
Inflation = 0.05               # underlying severity trend of 5% pa

# scipy's lognorm parameterisation: shape = sigma, scale = exp(mu)
s = Sigma
scale = exp(Mu)
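As a quick aside, and purely as a sanity check of my own (it isn't part of the original script), we can confirm that these mu/sigma values give back the intended mean and standard deviation, using scipy's convention that the lognormal shape parameter is sigma and the scale is exp(mu):

m, v = lognorm.stats(s, scale=scale, moments='mv')
print(m, v**0.5)   # should come out at roughly 10,000,000 and 15,000,000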
In [3]:
MedianLL = []
AllLnOutput = []

for sim in range(50000):

    SimOutputFGU = []
    SimOutputLL = []
    Frequency = []
    for year in range(years):
        # Simulate the number of verdicts this year, then their ground-up values
        FrequencyInc = poisson.rvs(AverageFreq*(1+ExposureGrowth)**year, size=1)
        Frequency.append(FrequencyInc)
        r = lognorm.rvs(s, scale=scale, size=FrequencyInc[0])
        # Apply the underlying severity trend, then keep only the 'nuclear' verdicts
        r = np.multiply(r, (1+Inflation)**year)
        r = np.sort(r)[::-1]
        r_LLOnly = r[(r >= LLThreshold)]
        SimOutputFGU.append(np.transpose(r))
        SimOutputLL.append(np.transpose(r_LLOnly))

    # One column per year: all simulated verdicts, and just the nuclear verdicts
    SimOutputFGU = pd.DataFrame(SimOutputFGU).transpose()
    SimOutputLL = pd.DataFrame(SimOutputLL).transpose()
    # Fit a log-linear trend to the median nuclear verdict by year
    a = np.log(SimOutputLL.median())
    AllLnOutput.append(a)
    b = linregress(a.index, a).slope
    MedianLL.append(b)

AllLnOutputdf = pd.DataFrame(AllLnOutput)

# Convert the fitted log-slope into an annual trend and average over simulations
dfMedianLL = pd.DataFrame(MedianLL)
dfMedianLL['Exp-1'] = np.exp(dfMedianLL[0]) - 1
print(np.mean(dfMedianLL['Exp-1']))
print(np.std(dfMedianLL['Exp-1']))
0.01414199139951309
0.013860637023388517

The value of 1.4% we have output at the bottom is the average trend in the median of the nuclear verdicts. We can see that, based on this analysis, even though we know the underlying data has a trend of 5%, the median of the values greater than $10m only increases by 1.4% pa.

What could we have done instead? We could have analysed the median of the top X verdicts, where X could, for example, be 10, 20, 50, or 100. For a write-up of this method, see the following:
www.lewiswalsh.net/blog/backtesting-inflation-modelling-median-of-top-x-losses
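For completeness, here is a rough sketch of how that might look with the simulation above (my own illustration using the same parameters, not code lifted from the linked post). Instead of conditioning on a fixed $10m threshold, we take the median of the 20 largest verdicts each year; the names TopX and MedianTopX are just for illustration. Because a purely multiplicative trend shifts every order statistic by the same factor, the fitted trend should come out much closer to the underlying 5%:

# Reuses s, scale, years, Inflation, AverageFreq, ExposureGrowth from the cells above
TopX = 20                      # illustrative choice - could equally be 10, 50 or 100
MedianTopX = []

for sim in range(10000):
    a = []
    for year in range(years):
        n = poisson.rvs(AverageFreq*(1+ExposureGrowth)**year)
        r = lognorm.rvs(s, scale=scale, size=n)*(1+Inflation)**year
        # Median of the TopX largest verdicts, with no dollar threshold applied
        a.append(np.log(np.median(np.sort(r)[::-1][:TopX])))
    MedianTopX.append(linregress(np.arange(years), a).slope)

print(np.exp(np.mean(MedianTopX)) - 1)   # should land close to the underlying 5% pa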
