Investopedia defines TBVPS as:
"Tangible book value per share (TBVPS) is the value of a company’s tangible assets divided by its current outstanding shares." 
I'm pretty sure this is incorrect, or at best misleading!
Canary Wharf. Source: https://commons.wikimedia.org/wiki/User:King_of_Hearts
Did you know about this cool tool, which allows you to download data from a Wikipedia table as a CSV:
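If you'd rather script it, pandas can do much the same job (this is my own sketch, not the tool from the post, and the URL is just an illustrative example):

```python
import pandas as pd

def wiki_tables_to_csv(url, path="table.csv"):
    """Save the first HTML table on a page as a CSV file."""
    # read_html returns a list of DataFrames, one per <table> element
    tables = pd.read_html(url)
    tables[0].to_csv(path, index=False)
    return tables[0]
```

For example, `wiki_tables_to_csv("https://en.wikipedia.org/wiki/List_of_largest_banks")` would write the first table on that page to `table.csv`.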
Here's a useful trick that you might not have seen before. Suppose we have some data which includes rows with missing values; we can then use the formula below to apply linear interpolation and fill in the missing datapoints, without laboriously typing out the interpolation formula longhand (which I used to do all the time).
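In Python the equivalent trick is a one-liner with pandas (my own sketch, not the Excel formula from the post):

```python
import numpy as np
import pandas as pd

# a series with two missing datapoints in the middle
s = pd.Series([1.0, np.nan, np.nan, 4.0, 5.0])

# linear interpolation fills each gap proportionally between known points
filled = s.interpolate(method="linear")
print(filled.tolist())  # [1.0, 2.0, 3.0, 4.0, 5.0]
```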
I wrote a Python script which uses Selenium to scrape the predictions for Fed rate movements from the CME FedWatch tool.
The tool works by converting the price of a 30-day Fed Funds future into an implied probability of a given range of yields.
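Roughly, the conversion works like this (a stylised sketch of my own; the real FedWatch methodology is more involved, e.g. it adjusts for where the FOMC meeting falls within the contract month):

```python
def implied_prob_of_hike(futures_price, current_rate, hike_bp=25):
    """Probability of a hike of `hike_bp` basis points implied by a
    30-day Fed Funds futures price (quoted as 100 minus the rate)."""
    implied_rate = 100.0 - futures_price          # e.g. 99.80 -> 0.20%
    excess_bp = (implied_rate - current_rate) * 100
    return max(0.0, min(1.0, excess_bp / hike_bp))

# e.g. futures at 99.80 with the current effective rate at 0.08%
print(implied_prob_of_hike(99.80, 0.08))  # ~0.48
```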
The CME website embeds the output in something called an 'iframe', which I had never heard of before; the iframe then contains a dashboard powered by something called Quikstrike. It took me a while to figure out how to switch focus to the iframe, as you can't simply reference elements inside an iframe without first switching focus to it.
The script below may not look too complicated, but believe me, it took a while to write.
Old Federal Reserve building Philadelphia, Source: https://commons.wikimedia.org/wiki/User:Beyond_My_Ken
The official Microsoft documentation for the Excel Forecast.ETS function is pretty weak. Below I’ve written a few notes on how the function works and the underlying formulas.
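As a starting point: Forecast.ETS fits the AAA (additive error, trend and seasonality) exponential smoothing model, i.e. the standard Holt-Winters recursions:

```latex
\begin{aligned}
\ell_t &= \alpha\,(y_t - s_{t-m}) + (1-\alpha)(\ell_{t-1} + b_{t-1}) && \text{(level)} \\
b_t &= \beta\,(\ell_t - \ell_{t-1}) + (1-\beta)\,b_{t-1} && \text{(trend)} \\
s_t &= \gamma\,(y_t - \ell_{t-1} - b_{t-1}) + (1-\gamma)\,s_{t-m} && \text{(seasonality)} \\
\hat{y}_{t+h \mid t} &= \ell_t + h\,b_t + s_{t+h-m(k+1)}, \qquad k = \lfloor (h-1)/m \rfloor
\end{aligned}
```

Here $m$ is the season length and $\alpha, \beta, \gamma$ are smoothing parameters chosen by minimising in-sample forecast error.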
Source: Microsoft office in Seattle, @Coolcaesar, https://en.wikipedia.org/wiki/File:Building92microsoft.jpg
It's still very early days in terms of understanding the true fallout from Russia's invasion of Ukraine, but I thought it would be interesting to tally a few of the estimates for the insured loss we've seen so far. All of the below come from the Insider.
Please note, I'm not endorsing any of these estimates, merely collating them for the interested reader!
Kyiv Pechersk Lavra Monastery, Kyiv. @Andriy155
There's some interesting literature from the world of forecasting and natural sciences on the best way to aggregate predictions from multiple models/sources.
For a well-written, moderately technical introduction, see the following by Jaime Sevilla:
Jaime’s article suggests a geometric mean of odds as the preferred method of aggregating predictions. I would argue, however, that when it comes to actuarial pricing the arithmetic mean is the better choice; I'll explain why below.
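To make the difference concrete, here is a quick toy illustration of the two aggregators (my own example, not from Jaime's article):

```python
import math

def arithmetic_mean(probs):
    return sum(probs) / len(probs)

def geometric_mean_of_odds(probs):
    """Average in odds space, then convert back to a probability."""
    odds = [p / (1 - p) for p in probs]
    gm_odds = math.prod(odds) ** (1 / len(odds))
    return gm_odds / (1 + gm_odds)

# Two experts disagree badly on the probability of a loss event
probs = [0.01, 0.5]
print(round(arithmetic_mean(probs), 4))         # 0.255
print(round(geometric_mean_of_odds(probs), 4))  # 0.0913
```

Note how far the geometric mean of odds sits below the arithmetic mean here: it is pulled towards the more extreme low estimate, whereas the arithmetic mean is the aggregate that preserves the expected claim frequency across the two views.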
I wrote a quick Python script to download the latest odds from PredictIt, and then output to an Excel file. I've pasted it below as an extract from a Jupyter notebook:
PredictIt is an online prediction website, mainly focused on Political events:
I think it's great that PredictIt allows access like this. Before I realised the API existed, I was using Selenium to scrape the info through Chrome, which was much slower to run and occasionally buggy.
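The API approach boils down to something like the sketch below (the field names are from memory, so check them against the live response):

```python
import requests
import pandas as pd

def contracts_to_frame(data):
    """Flatten the PredictIt marketdata JSON into one row per contract."""
    rows = []
    for market in data["markets"]:
        for contract in market["contracts"]:
            rows.append({
                "market": market["name"],
                "contract": contract["name"],
                "last_price": contract["lastTradePrice"],
            })
    return pd.DataFrame(rows)

def download_odds(path="predictit_odds.xlsx"):
    """Fetch the full market list and dump it to an Excel file."""
    resp = requests.get("https://www.predictit.org/api/marketdata/all/",
                        timeout=30)
    df = contracts_to_frame(resp.json())
    df.to_excel(path, index=False)
    return df
```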
The Cefor curves come with quite a lot of ancillary info. Interestingly (and hopefully you agree, since you're reading this blog), had we not been given the 'proportion of all losses which come from total losses', we could have derived it by analysing the difference between the two curves (the partial loss curve and the all claims curve).
Below I demonstrate how to go from the 'partial loss' curve and the share of total claims % to the 'all claims' curve, but you could solve for any one of the three pieces of info given two of them using the formulas below.
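As a sketch of the algebra (my own derivation, using an illustrative curve rather than a real Cefor parameterisation): a total loss burns exactly the first d% of the sum insured for every retention d, so the exposure curve for total losses alone is the diagonal G(d) = d, and the all-claims curve is the loss-weighted mixture of the two:

```python
def all_claims_curve(g_partial, total_loss_share):
    """Combine a partial-loss exposure curve with the share of losses
    coming from total losses, returning the all-claims curve."""
    p = total_loss_share
    # total losses contribute the diagonal curve G(d) = d
    return lambda d: (1 - p) * g_partial(d) + p * d

# toy partial-loss curve for illustration only (not a real Cefor curve)
g_partial = lambda d: d ** 0.5

g_all = all_claims_curve(g_partial, total_loss_share=0.3)
print(g_all(0.25))  # 0.7 * 0.5 + 0.3 * 0.25 = 0.425
```

Given any two of the three pieces, the third follows by rearranging; for example `g_partial(d) = (g_all(d) - p * d) / (1 - p)`.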
Source: Niels Johannes https://commons.wikimedia.org/wiki/File:Ocean_Countess_(2012).jpg
I hadn't seen this before, but Cefor (the Nordic Association of Marine Insurers) publishes Exposure Curves for Ocean Hull risks. Pretty useful if you are looking to price Marine RI. I've included a quick comparison to some London Market curves below, along with the source links.
In which we try out the best performing algorithm from our house price prediction problem - Gradient Boosted Regression - on the Titanic problem, but don't actually manage to improve on our old score...
This post is a follow up to two previous posts, which I would recommend reading first:
Since our last post, the loss creep from the July 2021 German flooding has continued, with sources now talking about an EUR 8bn (\$9.3bn) insured loss. This figure is in respect of Germany alone, not including Belgium, France, etc., and is up from \$8.3bn previously.
But interestingly (and bear with me, I promise there is something interesting about this), when we compare this \$9.3bn loss to the OEP table from our previous modelling, it puts the flooding at just past a 1-in-200 level.
Photo @ Jonathan Kemper - https://unsplash.com/@jupp
Here are two events that you might think were linked:
Every year around the month of May, the National Oceanic and Atmospheric Administration (NOAA) releases their predictions on the severity of the forthcoming Atlantic Hurricane season.
Around the same time, US insurers will be busy negotiating their upcoming 1st June or 1st July annual reinsurance renewals with their reinsurance panel. At the renewal (for a price to be negotiated) they will purchase reinsurance which will in effect offload a portion of their North American windstorm risk.
You might reasonably think: ‘if there is an expectation that windstorms will be particularly severe this year, then more risk is being transferred, and so the price should be higher’. And if the NOAA predicts an above-average season, shouldn’t we expect more windstorms? In which case, wouldn't it make sense for the pricing to zig-zag up and down in line with the NOAA predictions for the year?
Well in practice, no, it just doesn’t really happen like that.
Source: NASA - Hurricane Florence, from the International Space Station
This post is a follow up to a previous post, which I would recommend reading first if you haven't already:
In our previous modelling, in order to assess how extreme the 2021 German floods were, we compared the consensus estimate at the time for the floods (\$6bn insured loss) against a distribution parameterised using historic flood losses in Germany from 1994-2020. Since I posted that modelling, however, as often happens in these cases, the consensus estimate has changed. The insurance press is now reporting a value of around \$8.3bn. So what does that do for our modelling and our conclusions from last time?
As I’m sure you are aware, July 2021 saw some of the worst flooding in Germany in living memory. Die Welt currently has the death toll for Germany at 166.
Obviously this is a very sad time for Germany, but one aspect of the reporting that caught my attention was how much emphasis was placed on climate change. For example, the BBC, the Guardian, and even the Telegraph all bring up the role that climate change played in contributing to the severity of the flooding.
The question that came to my mind is: can we really infer the presence of climate change from just this one event? The flooding has been described as a ‘1-in-100 year event’, but does this bear out when we analyse the data, and how strong is this as evidence of climate change?
Image - https://unsplash.com/@kurokami04
I work as an actuary and underwriter at a global reinsurer in London.