The term exposure inflation can refer to a couple of different phenomena within insurance. A friend mentioned a couple of weeks ago that he had been looking up the term while pricing a property cat layer and stumbled on one of my blog posts that uses it. Apparently my post was one of the top search results, and there wasn't much other useful information out there, but I was talking about a different type of exposure inflation, so it wasn't much help to him.
So as a public service announcement, for all those people Googling the term in the future, here are my thoughts on two types of exposure inflation:
In which we correct our label encoding method from last time, try out a new algorithm - Gradient Boosted Regression - and finally manage to improve our score (by quite a lot, it turns out).
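For readers curious what the gradient boosting step looks like in practice, here is a minimal sketch using scikit-learn's `GradientBoostingRegressor` on synthetic data - the dataset, features, and parameter choices are illustrative assumptions, not the actual Kaggle code from the post.

```python
# Illustrative sketch: fit a Gradient Boosted Regression model with scikit-learn.
# Synthetic data stands in for the Kaggle dataset used in the series.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Generate a toy regression problem: 500 rows, 8 numeric features.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Sequentially fit shallow trees, each correcting the previous ensemble's errors.
model = GradientBoostingRegressor(
    n_estimators=200,   # number of boosting stages
    learning_rate=0.1,  # shrinkage applied to each tree's contribution
    max_depth=3,        # shallow trees are typical for boosting
    random_state=0,
)
model.fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
```

The key contrast with a random forest is that boosting builds trees sequentially on the residuals rather than independently on bootstrap samples, which is often why it pulls ahead on tabular problems like this one.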
An Actuary learns Machine Learning - Part 9 - Cross Validation / Label Encoding / Feature Engineering
In which we set up K-fold Cross Validation to assess model performance, spend quite a while tweaking our model, use hyper-parameter tuning, but then end up not actually improving our model.
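As a rough sketch of the two ideas named in that post's title, here is K-fold cross validation and label encoding with scikit-learn - the model choice, fold count, and toy data are assumptions for illustration, not the post's actual pipeline.

```python
# Illustrative sketch: K-fold cross validation and label encoding with scikit-learn.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.preprocessing import LabelEncoder

# Label encoding: map string categories to integer codes (sorted alphabetically).
le = LabelEncoder()
codes = le.fit_transform(["low", "high", "medium", "low"])

# K-fold CV: split the data into 5 folds, train on 4, score on the held-out fold,
# and repeat so every fold is used for validation once.
X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(
    RandomForestRegressor(n_estimators=50, random_state=0), X, y, cv=kf
)  # one R^2 score per fold
```

Averaging the per-fold scores gives a more stable estimate of out-of-sample performance than a single train/test split, which is what makes it useful for the kind of hyper-parameter tweaking described above.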
An Actuary learns Machine Learning - Part 8 - Data Cleaning / more Null Values / more Random Forests
In which we deal with those pesky null values, add additional variables to our Random Forest model, but only actually improve our score by a marginal amount.
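One common way to deal with null values before fitting a random forest is median imputation; the tiny pandas example below is a hypothetical stand-in (invented column names and values), since scikit-learn estimators will not accept NaNs directly.

```python
# Illustrative sketch: fill null values with column medians before fitting
# a random forest. The DataFrame here is made up for demonstration.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.DataFrame({
    "area":  [100.0, np.nan, 150.0, 120.0],
    "age":   [10.0, 5.0, np.nan, 8.0],
    "price": [200.0, 180.0, 260.0, 210.0],
})

# Replace each NaN with its column's median so the model can consume the data.
features = df[["area", "age"]].fillna(df[["area", "age"]].median())

model = RandomForestRegressor(n_estimators=20, random_state=0)
model.fit(features, df["price"])
```

Median imputation is a blunt instrument - it discards the information that a value was missing - but it is a common first pass, and a "was missing" indicator column can be added alongside it if that signal matters.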
In which we plot an excessive number of graphs, fix our problems with null values, re-run our algorithm, and significantly improve our accuracy.
In which we start a new Kaggle challenge, try out a new Python IDE, build our first regression model, but most importantly - make these blog posts look much cleaner.
I work as a pricing actuary at a reinsurer in London.