## Models are about what changes, and what doesn’t

Following on from last week’s post on Principled Bayesian Workflow, I want to reflect on how to motivate a model. The purpose of most models is to understand change, and yet considering what doesn’t change, and should be kept constant, can be equally important. I will go through a couple of models in this post to illustrate this idea. The purpose of the model I want to build today is to predict how much ice cream is sold at different temperatures \(t\).
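As a minimal sketch of one way to start on such a prediction problem (the data below are invented for illustration, not the post's dataset), a Poisson GLM with a log link models sales counts that grow multiplicatively with temperature:

```r
# Invented example data: units of ice cream sold at various temperatures (°C)
icecream <- data.frame(
  temp  = c(11.9, 14.2, 15.2, 16.4, 17.2, 18.1, 18.5, 19.4, 22.1, 22.6, 23.4, 25.1),
  units = c(185, 215, 332, 325, 408, 421, 406, 412, 522, 445, 544, 614)
)

# Poisson GLM with log link: E[units] = exp(a + b * temp)
fit <- glm(units ~ temp, data = icecream, family = poisson(link = "log"))

# Predicted sales at 20°C
predict(fit, newdata = data.frame(temp = 20), type = "response")
```

A log link keeps predictions positive at all temperatures, which a plain linear model would not guarantee.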

Original Post: Models are about what changes, and what doesn’t

# Posts by R on mages’ blog

## PK/PD reserving models

My updated model is not much different to the one presented in the earlier post, apart from the fact that I allow for the correlation between \(RLR\) and \(RRF\), and the mean function \(\tilde{f}\) is the integral of the ODEs above.

\[
\begin{aligned}
y(t) & \sim \mathcal{N}\left(\tilde{f}(t, \Pi, \beta_{er}, k_p, RLR_{[i]}, RRF_{[i]}),\; \sigma_{y[\delta]}^2\right) \\
\begin{pmatrix} RLR_{[i]} \\ RRF_{[i]} \end{pmatrix} & \sim
\mathcal{N}\left(
\begin{pmatrix} \mu_{RLR} \\ \mu_{RRF} \end{pmatrix},
\begin{pmatrix}
\sigma_{RLR}^2 & \rho\, \sigma_{RLR} \sigma_{RRF} \\
\rho\, \sigma_{RLR} \sigma_{RRF} & \sigma_{RRF}^2
\end{pmatrix}
\right)
\end{aligned}
\]

### Implementation with brms

Let’s load the data back into R’s memory:

```r
library(data.table)
lossData0 <- fread("https://raw.githubusercontent.com/mages/diesunddas/master/Data/WorkersComp337.csv")
```

Jake shows in the appendices of his paper how to implement this model in R with the nlmeODE package (Tornoe (2012)), together with more flexible models in OpenBUGS (Lunn et al. (2000)). However, I will continue with brms and Stan. Using the ODEs with brms requires a little extra coding, as I have to provide the integration…
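The excerpt cuts off before the ODEs it refers to. As an assumption (not a quote from the post), a compartmental system whose integral matches the closed-form mean function quoted in the hierarchical compartmental excerpt below, with exposure \(EX\), outstanding \(OS\), and paid \(PD\) compartments and initial condition \(EX(0) = \Pi\), would read:

\[
\begin{aligned}
\frac{dEX}{dt} & = -k_{er} \cdot EX(t) \\
\frac{dOS}{dt} & = k_{er} \cdot RLR \cdot EX(t) - k_p \cdot OS(t) \\
\frac{dPD}{dt} & = k_p \cdot RRF \cdot OS(t)
\end{aligned}
\]

Integrating this system recovers the outstanding and paid terms of \(f(t, \dots)\) shown below; the updated model replaces the constant rate \(k_{er}\) with \(\beta_{er}\).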

Original Post: PK/PD reserving models

## Hierarchical compartmental reserving models

### Non-linear Least Squares

Before I build a complex Bayesian model, I start with a simple non-linear least squares model. In other words, I believe the data are generated from a Normal distribution, with the mean described by an analytical function \(\mu(t)=f(t,\dots)\) and constant variance \(\sigma^2\).

\[
\begin{aligned}
y(t) & \sim \mathcal{N}(\mu(t), \sigma^2) \\
\mu(t) & = f(t, \Pi, k_{er}, k_p, RLR, RRF, \delta) \\
& = \Pi \cdot \Big[ (1 - \delta)\, \frac{RLR \cdot k_{er}}{k_{er} - k_p} \cdot
\left( \exp(-k_p t) - \exp(-k_{er} t) \right) \; + \\
& \qquad \delta\, \frac{RLR \cdot RRF}{k_{er} - k_p}
\left( k_{er} \left(1 - \exp(-k_p t)\right) - k_p \left(1 - \exp(-k_{er} t)\right) \right) \Big] \\
\delta & = \begin{cases}
0 & \mbox{if } y \mbox{ is outstanding claim} \\
1 & \mbox{if } y \mbox{ is paid claim}
\end{cases}
\end{aligned}
\]

To ensure all parameters stay positive I will use the same approach as Jake does in this paper, that is…
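The paid-claims branch of \(f\) above can be fitted with base R's `nls()`. A minimal sketch on simulated data (all numbers invented here; and since \(\Pi\), \(RLR\) and \(RRF\) enter the paid mean only through their product, the sketch fits that product as a single parameter \(\theta = \Pi \cdot RLR \cdot RRF\)):

```r
# Simulate paid development data from the analytical paid-claims curve
# (parameter values are made up for illustration)
set.seed(123)
k_er <- 1.7; k_p <- 0.5           # "true" rates
theta_true <- 100 * 0.8 * 0.9     # Pi * RLR * RRF = 72
t <- seq(0.25, 10, by = 0.25)
mu <- theta_true / (k_er - k_p) *
  (k_er * (1 - exp(-k_p * t)) - k_p * (1 - exp(-k_er * t)))
dat <- data.frame(t = t, paid = mu + rnorm(length(t), sd = 2))

# Non-linear least squares: Normal likelihood with constant variance
fit <- nls(
  paid ~ theta / (ker - kp) *
    (ker * (1 - exp(-kp * t)) - kp * (1 - exp(-ker * t))),
  data = dat,
  start = list(theta = 60, ker = 1.5, kp = 0.4)
)
coef(fit)
```

Note that paid data alone cannot identify \(\Pi\), \(RLR\) and \(RRF\) separately; it is fitting paid and outstanding claims jointly, as in the full model, that makes the individual parameters estimable.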

Original Post: Hierarchical compartmental reserving models

## Insurance Data Science Conference 2018

Following five R in Insurance conferences, we are organising the first Insurance Data Science conference at Cass Business School London, on 16 July 2018. In 2013 we started with the aim of bringing practitioners from industry and academia together to discuss and exchange ideas and needs from both sides. R was, and is, a perfect glue between the two groups: a tool which both sides embrace and which has fostered knowledge transfer between the two. However, R is just one example, and other languages serve this purpose equally well. Python is another popular language, and Julia and Stan have also gained momentum. For that reason we have rebranded our conference series to “Insurance Data Science”. We believe that by removing the explicit link to “R” we have more freedom to stay relevant and embrace whatever technology may evolve in the future.…

Original Post: Insurance Data Science Conference 2018

## Correlated log-normal chain-ladder model

On 23 November Glenn Meyers gave a fascinating talk about The Bayesian Revolution in Stochastic Loss Reserving at the 10th Bayesian Mixer Meetup in London.

*Glenn Meyers speaking at the Bayesian Mixer*

Glenn worked for many years as a research actuary at Verisk/ISO; he helped to set up the CAS Loss Reserve Database and published a monograph on stochastic loss reserving using Bayesian MCMC models. In this blog post I will go through the Correlated Log-normal Chain-Ladder Model from his presentation. It is discussed in more detail in his monograph. Glenn kindly shared his code as well, which I have used as a basis for this post. The CAS Loss Reserve Database is an excellent data source for testing reserving models. It is hosted on the CAS website and contains historical regulatory filings of US insurance companies. The following…
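For context, one common statement of the Correlated Chain-Ladder (CCL) model from Meyers' monograph, included here as a sketch rather than a quote from the talk, uses cumulative losses \(C_{w,d}\) for accident year \(w\) and development lag \(d\):

\[
\begin{aligned}
\log(C_{w,d}) & \sim \mathcal{N}(\mu_{w,d}, \sigma_d^2) \\
\mu_{1,d} & = \alpha_1 + \beta_d \\
\mu_{w,d} & = \alpha_w + \beta_d + \rho \left( \log(C_{w-1,d}) - \mu_{w-1,d} \right), \quad w > 1
\end{aligned}
\]

The parameter \(\rho\) carries forward the residual of the previous accident year, which is what induces the correlation between subsequent accident years that gives the model its name.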

Original Post: Correlated log-normal chain-ladder model