I'm pretty sure this is incorrect, or at best misleading!

Canary Wharf, source: https://commons.wikimedia.org/wiki/User:King_of_Hearts

Tangible book value should be defined as tangible *net* assets, or tangible *book value*, but not simply tangible *assets*. The difference is that net assets (and likewise book value) have had liabilities subtracted. Simply using 'tangible assets' makes no such adjustment, and so overstates tangible book value.
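A toy balance sheet makes the difference clear. All the numbers below are made up for illustration:

```python
# Hypothetical balance sheet, figures in $m
assets = 100          # total assets, including intangibles
intangibles = 20      # goodwill, brands, etc.
liabilities = 60
shares = 10           # shares outstanding, m

# 'Tangible assets' only - liabilities never subtracted
overstated_tbvps = (assets - intangibles) / shares            # 8.0 per share

# Tangible *net* assets - liabilities subtracted first
tbvps = (assets - intangibles - liabilities) / shares         # 2.0 per share
```

On this toy example the unadjusted version overstates tangible book value per share by a factor of four.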

I emailed Investopedia in April to let them know, and got a polite response letting me know they'd look into it, but so far, no correction.

[1] https://www.investopedia.com/terms/t/tbvps.asp

Did you know about this cool tool, which allows you to download data from a Wikipedia table as a CSV:

wikitable2csv.ggor.de/

You can download a spreadsheet with an example at the following link:

github.com/Lewis-Walsh/Excel-Linear-Interpolate-Forecast-Function/blob/main/Excel%20function%20-%20follow%20up.xlsx

www.cmegroup.com/trading/interest-rates/countdown-to-fomc.html#resources

The tool works by converting the price of a 30-day Fed Fund future into an implied probability of a given range of yields.
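As a rough sketch of that conversion: CME's actual methodology averages daily fed funds rates over the contract month and anchors the calculation on FOMC meeting dates, so the function below is a simplification, and the numbers in the example are hypothetical:

```python
# Simplified sketch of the FedWatch-style conversion - not CME's exact
# methodology. Numbers in the example below are hypothetical.

def implied_hike_probability(futures_price, current_rate, hiked_rate):
    """30-day Fed Fund futures settle at 100 minus the average fed
    funds rate, so the market-implied rate is 100 - price. The hike
    probability then follows by linear interpolation between the
    'no change' rate and the 'hike' rate."""
    implied_rate = 100.0 - futures_price
    return (implied_rate - current_rate) / (hiked_rate - current_rate)

# e.g. futures at 99.80, current target 0.10%, a hike taking it to 0.35%:
p = implied_hike_probability(99.80, 0.10, 0.35)   # ~0.4, i.e. a 40% chance
```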

The CME website embeds the output in something called an 'iframe', which I had never heard of before, and the iframe then contains a dashboard powered by something called Quikstrike. It took me a while to figure out how to switch focus to the iframe, as you can't reference elements inside an iframe without first switching focus to it.

The script below may not look too complicated, but believe me, it took a while to write.

Old Federal Reserve building Philadelphia, Source: https://commons.wikimedia.org/wiki/User:Beyond_My_Ken

In [ ]:

```python
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import pandas as pd
import time
```

In [ ]:

```python
path = "C:/temp"  # output folder for the csv - set as needed

driver = webdriver.Chrome(executable_path='C:/temp/chromedriver.exe')
driver.set_page_load_timeout(10)
driver.get("https://www.cmegroup.com/trading/interest-rates/countdown-to-fomc.html")
time.sleep(2)

# The dashboard lives inside an iframe, so switch focus to it first
driver.switch_to.frame(driver.find_element_by_tag_name("iframe"))
driver.find_element_by_link_text('Probabilities').click()
time.sleep(2)

xpath = "//*[@id='MainContent_pnlContainer']/div[3]/div/div/table[3]/tbody"
table = driver.find_element_by_xpath(xpath)
rows = table.find_elements(By.TAG_NAME, "tr")  # get all of the rows in the table

MeetingDates = []
DaysToMeeting = []
Easeprob = []
NoChangeprob = []
Hikeprob = []

for i, row in enumerate(rows):
    if i > 1:  # skip the header rows
        col = row.find_elements(By.TAG_NAME, "td")
        MeetingDates.append(col[0].text)
        DaysToMeeting.append(col[1].text)
        Easeprob.append(col[2].text)
        NoChangeprob.append(col[3].text)
        Hikeprob.append(col[4].text)

CMEdf = pd.DataFrame({'MeetingDate': MeetingDates,
                      'DaysToMeeting': DaysToMeeting,
                      'Easeprob': Easeprob,
                      'NoChangeprob': NoChangeprob,
                      'Hikeprob': Hikeprob})
CMEdf.to_csv(path + "/CMEFedWatch.csv")
driver.close()
```

The official Microsoft documentation for the Excel Forecast.ETS function is pretty weak [1]. Below I’ve written a few notes on how the function works, and the underlying formulas.

Source: Microsoft office in Seattle, @Coolcaesar, https://en.wikipedia.org/wiki/File:Building92microsoft.jpg

The use case is when we have a time series, and we wish to forecast the values of the time series into the future.

For example, suppose we have the following index, which we wish to project into the future.

We can then either call the Forecast.ETS function directly, by typing =FORECAST.ETS into a cell, or use the 'Forecast Sheet' button in the 'Data' ribbon. The Forecast Sheet produces the following output:

Note that this chart provides us with a central estimate, plus a two-sided confidence interval.

**What formula is the Excel function using?**

The Microsoft documentation gives us the following fairly terse description:

*“… using the AAA version of the Exponential Smoothing (ETS) algorithm. “*

A bit of detective work reveals that 'AAA' in this context means the additive version of the Holt-Winters seasonal method. In the ETS taxonomy, the three letters denote the error, trend, and seasonal components, so AAA means all three are additive. From Rob Hyndman's excellent open-source online textbook [2], the formulas are as per the middle cell of the table below (the table is for the additive-error version, and we then need A for trend and A for seasonality, hence the three A's = AAA).


The following spreadsheet recreates the results of the forecast function.

The values for alpha, beta, gamma, etc. are taken from the Excel Forecast sheet, but can also be output directly using FORECAST.ETS.STATS. Note that Hyndman’s formulas use a different parameterisation of beta.
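For anyone who prefers code to formulas, here is a minimal sketch of the AAA recursions from Hyndman's textbook. Note this is not a reproduction of Excel's implementation: the initial level, trend, and seasonal states are passed in directly, whereas Excel estimates its own initial states and optimises the smoothing parameters.

```python
def holt_winters_additive(y, m, alpha, beta, gamma, h,
                          level0, trend0, seasonal0):
    """Additive ('AAA') Holt-Winters recursions. m is the seasonal
    period, h the forecast horizon; initial states are supplied by
    the caller (Excel's own initialisation is not reproduced here)."""
    level, trend = level0, trend0
    seasonals = list(seasonal0)           # one seasonal state per phase
    for t, y_t in enumerate(y):
        s_prev = seasonals[t % m]         # s_{t-m}
        prev_fit = level + trend          # l_{t-1} + b_{t-1}
        new_level = alpha * (y_t - s_prev) + (1 - alpha) * prev_fit
        new_trend = beta * (new_level - level) + (1 - beta) * trend
        seasonals[t % m] = gamma * (y_t - prev_fit) + (1 - gamma) * s_prev
        level, trend = new_level, new_trend
    # Point forecast: last level, plus h steps of trend, plus the
    # seasonal state for the matching phase
    return [level + k * trend + seasonals[(len(y) + k - 1) % m]
            for k in range(1, h + 1)]

# Worked check on a series that is exactly linear plus additive
# seasonality, with the states initialised to their true values:
m, seas = 4, [1, -1, 2, -2]
y = [10 + 2 * t + seas[(t - 1) % m] for t in range(1, 13)]
fcst = holt_winters_additive(y, m, alpha=0.5, beta=0.3, gamma=0.2,
                             h=4, level0=10, trend0=2, seasonal0=seas)
# fcst continues the pattern exactly: [37, 37, 42, 40]
```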

You can download the spreadsheet with the reconciliation from GitHub:

github.com/Lewis-Walsh/Excel_Forecast.ETS/blob/main/Forecast_ETS.xlsx

Source links:

[1] https://support.microsoft.com/en-us/office/forecast-ets-function-15389b8b-677e-4fbd-bd95-21d464333f41

[2] https://otexts.com/fpp2/holt-winters.html

It's still very early days for understanding the true fallout from Russia's invasion of Ukraine, but I thought it would be interesting to tally a few of the estimates for the insured loss we've seen so far; all of the figures below come from the Insider.

Kiv Perchersk Lavra Monastery, Kyiv. @Andriy155

These are all estimates of the gross London market exposure to the events in Ukraine.

Political Violence = $3-5bn [1]

Credit & Political Risk = $2-6bn (I'm assuming no double counting from PV) [2]

Renewables = $313m (counting Syvash and Primorskaya wind farms, but nothing further) [3] & [4]

Marine Hull war = $100m-$5bn (based on 200 vessels trapped, 4 reported damaged already @ $25m per vessel) [5]

Aviation = $12-15bn [6]

Low estimate = $17.4bn

High estimate = $31.3bn

Mid point = $24.4bn
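The tally above can be reproduced in a few lines (figures in $bn; the marine low/high are the $100m and $5bn bookends from [5]):

```python
# Gross London market exposure estimates, $bn, as tallied above
low  = {'PV': 3, 'Credit & PR': 2, 'Renewables': 0.313,
        'Marine hull war': 0.1, 'Aviation': 12}
high = {'PV': 5, 'Credit & PR': 6, 'Renewables': 0.313,
        'Marine hull war': 5, 'Aviation': 15}

low_total = sum(low.values())            # ~17.4
high_total = sum(high.values())          # ~31.3
mid = (low_total + high_total) / 2       # ~24.4
```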

The overall market loss of $24bn should, on the face of it, be quite a manageable number for the industry. In 2021, for example, the industry absorbed something like $30bn from Hurricane Ida without any major deterioration in capital. However, there are two important differences with the Ukraine losses. Firstly, these losses will primarily fall on specialty writers, who can often be smaller and less broadly capitalised than US property cat writers. Secondly, not everyone is going to have the cover they expected from their reinsurance.

Pulling some numbers out of the air - a Lloyd's syndicate may have purchased Aviation War Reinsurance up to a RDS limit based on one of the following:

- Largest single aircraft value
- Max average exposure in a single airport at a given time
- An as-if based on the Tripoli airport loss
- Clash loss of the two largest aircraft

This might only have suggested they need a limit of $50m-$100m, whereas now, based on the scenarios described in [6], they may be sitting on a $100m+ loss, which they will have to absorb net once they've used up their reinsurance. Based on a quick back-of-the-envelope calculation, just looking at aviation losses, I don't think this should completely wipe anyone out or hit the Central Fund, but it could easily eat into capital enough that some companies are unable to continue writing next year without fresh capital injections from their parents.

[1] https://www.insuranceinsider.com/article/29tqvca86rnnfhd6o1ekg/london-pv-market-faces-potential-3bn-5bn-ukraine-exposure

[2] https://www.insuranceinsider.com/article/29sc3o51cu5zdvmkseolc/credit-and-political-risk-insurers-brace-for-fallout-from-ukraine-conflict

[3] https://www.insuranceinsider.com/article/29u6br0uppsc1i627ka9s/london-pv-market-facing-potential-eur200mn-syvash-wind-farm-loss-in-ukraine

[4] https://www.insuranceinsider.com/article/29ugpkuv7xqchv0dlko3k/london-pv-market-braced-for-second-wind-farm-loss-in-ukraine

[5] https://www.insuranceinsider.com/article/29szllsmpaxfqrj7bez28/ukraine-premiums-soar-and-losses-loom-in-uncertain-marine-war-picture

[6] https://www.insuranceinsider.com/article/29t4bvwekmc38vgsyw7i8/russian-sanctions-likely-to-spawn-10bn-london-market-aviation-litigation

For a well-written, moderately technical introduction, see the following by Jaime Sevilla:

forum.effectivealtruism.org/posts/sMjcjnnpoAQCcedL2/when-pooling-forecasts-use-the-geometric-mean-of-odds

Jaime’s article suggests the geometric mean of odds as the preferred method of aggregating predictions. I would argue, however, that when it comes to actuarial pricing, the arithmetic mean has a lot going for it; I'll explain why below.

Let’s set up a mini problem so we’ve got some concrete numbers to discuss. Suppose we've been provided with two cat model outputs - the first gives a 0.1% chance of a 100m loss (1 in 1,000), the other a 2% chance of a 100m loss (1 in 50). How should we go about combining the models to come up with a single aggregate prediction?

The obvious first thing to try would be to just average them -> (0.1% + 2%) * 0.5 = 1.05%. This is known as the arithmetic average.

If we knew one of the models was more accurate (perhaps it's outperformed in the past), we could extend this to an arithmetic weighted average -> 0.1% * 0.6 + 2% * 0.4 = 0.86%

But Jaime argues (as supported by much literature) that a better method is to take the geometric average of the odds. [1]

The geometric average of odds works as follows:

The odds of the first model are -> 0.1%/(1-0.1%) = 0.1%

The odds of the second model are -> 2%/(1-2%) = 2.04%

The geometric average of odds = sqrt(0.1% * 2.04%) = 0.45%

So this method gives a lower estimate than above (0.45% vs 1.05%). Another way of thinking of this is to say that it gives much more weight to the first model, the equivalent of a weighted average which gives about an 80% weighting to the 0.1% pick.

If we play around some more and change the first model to a 0.01% chance, the geometric average of odds drops to 0.143% - quite a big move.
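Both calculations can be reproduced in a few lines of Python:

```python
import math

def pool_geo_odds(probs):
    """Pool probabilities via the geometric mean of their odds,
    mapped back to a probability at the end."""
    odds = [p / (1 - p) for p in probs]
    geo = math.prod(odds) ** (1 / len(odds))
    return geo / (1 + geo)

pool_geo_odds([0.001, 0.02])    # ~0.45%
pool_geo_odds([0.0001, 0.02])   # ~0.143%
```

Note how sensitive the pooled estimate is to the smaller input: dividing the first model's probability by 10 roughly divides the pooled answer by 3, whereas the arithmetic average barely moves.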

So why is the geometric mean of odds preferred? Firstly, it outperforms in some empirical studies (geopolitical forecasting assessed against a Brier score [3]). Secondly, it also outperforms in simulation studies [2]. And thirdly, it has desirable theoretical properties, such as ‘external Bayesianity’ [2].

The ‘downside’ of the arithmetic mean can be made clear by just a simple example. Consider the difference between the agg prediction of (0.1%, 2%), and (0.01%, 2%). Both models average to basically 1% (1.05%, and 1.005%), but the reasoning goes that model one is signalling a significantly lower probability (by a factor of 10), so shouldn’t this move our agg prediction by more than the tiny amount given by the arithmetic average?

The issue with applying this reasoning to actuarial modelling is that, often (but not always), predictions of 0.01% are essentially meaningless. In the context of a cat model, 0.01% is a one-in-ten-thousand-year event, something that has happened maybe once in recorded history. How would you ever know if this prediction was well calibrated? These are very complex models with many moving pieces; I’ve seen examples of cat models where the probability of a given level of loss moves by a factor of 10-20, just based on a few parameter selections and how the portfolio of risks is coded.

As well as tails being far from perfectly credible, in my experience, they tend to err in the direction of being chronically under-weight. An arithmetic average, precisely because of this lack of sensitivity to changes in the extreme tail, is going to be more robust against this type of tail under-estimation.

Let’s go back to our toy example. Our two models give a 0.01% chance of a 100m loss, and a 2% chance; it’s a pretty brave move to select 0.143% as your agg pick. If we do so, and then sell a cat XOL layer at 0.33% GROL, we might think we’ve written to a 43% GLR. But what if the actual loss cost is 0.75% (I would argue this is perfectly consistent with the two modelled outputs we’ve received, 0.01% and 2%)? Then we’d actually be writing to a 230% GLR.

[1] https://forum.effectivealtruism.org/posts/sMjcjnnpoAQCcedL2/when-pooling-forecasts-use-the-geometric-mean-of-odds

[2] https://link.springer.com/article/10.1007/s11004-012-9396-3

[3] https://www.sciencedirect.com/science/article/abs/pii/S0169207013001635?subid1=20210831-0347-5383-a803-d0425dbe2b3b&via%3Dihub

PredictIt is an online prediction website, mainly focused on political events:

www.predictit.org/

I think it's great that PredictIt allows access like this; before I realised the API existed, I was using Selenium to scrape the info through Chrome, which was much slower to run and also occasionally buggy.