11.6 Bayesian Models (Cumulative)

Inputs

  1. A prior distribution is needed for each parameter (similar to Verrall)

    • Wide priors (diffuse)

    • Narrow priors: use expert knowledge to select the mean and variance of the parameters

  2. Parameters:

    • \(\alpha_w\): row parameters

    • \(\beta_d\): column parameters

    • \(\sigma_d\): standard deviation parameters (vary by column \(d\) only, not by AY)

    • \(\tau\): trend

    • \(\gamma\): change in claim settlement (closure) rate

  3. Data: Paid or incurred Triangle

Output

  • The posterior distribution of the parameters is expressed as simulated output (not a closed-form distribution)

11.6.1 Leveled Chain Ladder

Data: Cumulative incurred

Model Specification

\(C_{wd}\) has a lognormal distribution with log mean \(\mu_{wd}\) and log standard deviation \(\sigma_d\)

\[\begin{equation} C_{wd} \sim \ln \mathcal{N}(\mu_{wd} , \sigma_d) \tag{11.4} \end{equation}\] \[\begin{equation} \mu_{wd} = \alpha_w + \beta_d \tag{11.5} \end{equation}\]
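
As a minimal sketch of (11.4) and (11.5) in R (not Meyers' script; the parameter values below are purely illustrative), a single cell of the triangle would be simulated as:

```r
# Sketch of the LCL sampling step in (11.4)-(11.5).
# alpha, beta and sigma are illustrative assumptions, not fitted parameters.
set.seed(123)

alpha <- 8.0    # row (AY) level parameter alpha_w
beta  <- -0.3   # column (development) parameter beta_d
sigma <- 0.15   # log standard deviation sigma_d

mu   <- alpha + beta                              # log mean, equation (11.5)
C_wd <- rlnorm(1, meanlog = mu, sdlog = sigma)    # equation (11.4)
C_wd
```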

Remark. \(\alpha_w\)

  • Is a random variable

  • Is not the value on the diagonal (the incurred to date)

  • The model selects an \(\alpha_w\) for each instance of the simulation based on the wide priors

  • This is the model's main feature for adding variability

  • \(e^{\alpha_w}\) is roughly the ultimate loss for the AY

Remark. \(\beta_d\)

  • \(\beta_{10} = 0\) so losses are 100% developed at 10 years and the model is not overdetermined

  • \(e^{\beta_d}\) represents the % paid (incurred) to date and is below 100% most of the time (i.e. \(\beta_d < 0\))

Remark. \(\sigma_d\)

  • Subject to the following constraints:
\[\begin{equation} \sigma_1 > \sigma_2 > \cdots > \sigma_{10} \tag{11.6} \end{equation}\]
  • Variability is highest at early ages

  • The standard deviation varies only by column (\(d\)), not by AY

  • This is because, as \(d\) increases, fewer claims remain open and subject to random outcomes

Priors for \(\{\alpha_w\}\), \(\{\sigma_d\}\), and \(\{\beta_d\}\)

  • Wide prior distributions (all you need to know for the exam; the details below are FYI)

  • Each \(\alpha_w \sim \mathcal{N}(\ln(Premium_w) + logelr, \sqrt{10})\)

    • \(logelr \sim U(-1,0.5)\)

    • The JAGS parameterization of the normal distribution uses a precision parameter equal to the reciprocal of the variance \(\Rightarrow\) a standard deviation of \(\sqrt{10}\) corresponds to a low precision of 0.1

  • Each \(\beta_d \sim U(-5,5)\) for \(d<10\)

  • Each \(\sigma_d = \sum_{i=d}^{10} a_i\) where \(a_i \sim U(0,1)\), which enforces the decreasing constraint in (11.6)
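
A minimal R sketch of one draw from these wide priors (illustrative only; Meyers fits the model with MCMC in JAGS rather than sampling the priors directly, and the premium value here is assumed):

```r
# One draw from the wide LCL priors described above (illustrative sketch).
set.seed(42)

premium <- 5000                                       # assumed premium for one AY
logelr  <- runif(1, -1, 0.5)                          # logelr ~ U(-1, 0.5)
alpha_w <- rnorm(1, log(premium) + logelr, sqrt(10))  # sd sqrt(10), i.e. precision 0.1 in JAGS

beta  <- c(runif(9, -5, 5), 0)                        # beta_d ~ U(-5, 5) for d < 10; beta_10 = 0

a     <- runif(10, 0, 1)                              # a_i ~ U(0, 1)
sigma <- rev(cumsum(rev(a)))                          # sigma_d = sum_{i=d}^{10} a_i, decreasing in d
sigma
```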

Test Results

  • Can compare the variability (s.d.) with Mack by plotting the log of the s.d. from the two models

  • Model still does not capture the tail appropriately

11.6.2 Correlated Chain-Ladder

Builds upon the Leveled Chain-Ladder by adding \(\rho\) to create correlation between losses in one AY and the previous AY

Data: Cumulative incurred & paid

Model Specification

\(C_{wd}\) follows a lognormal distribution as in the LCL (11.4), but with log means:

\[\begin{equation} \mu_{wd} = \begin{cases} \alpha_1 + \beta_d & \text{if } w = 1\\ \alpha_w + \beta_d + \rho \cdot \left[ \mathrm{ln}\left(C_{w-1, d}\right) - \mu_{w-1,d} \right] & \text{if } w > 1\\ \end{cases} \tag{11.7} \end{equation}\]
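
A small R sketch of equation (11.7) for a cell with \(w > 1\) (all inputs are illustrative values, and the function name is mine, not Meyers'):

```r
# Log mean for the CCL model, equation (11.7), when w > 1.
ccl_mu <- function(alpha_w, beta_d, rho, C_prev, mu_prev) {
  alpha_w + beta_d + rho * (log(C_prev) - mu_prev)
}

# If the prior AY came in above its log mean, a positive rho pushes
# this AY's log mean up as well.
ccl_mu(alpha_w = 8.0, beta_d = -0.3, rho = 0.4, C_prev = 2500, mu_prev = 7.6)
```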

Remark. \(\rho \cdot \left[ \mathrm{ln}\left(C_{w-1, d}\right) - \mu_{w-1,d} \right]\)

  • If the parameters \(\{\alpha_w\}\), \(\{\beta_d\}\) and \(\rho\) are given:

    \(\rho\) is the correlation coefficient between \(\ln(C_{w-1,d})\) and \(\ln(C_{wd})\)

  • \(\rho\) is applied to the difference between the log of the actual losses and the expected log mean from the prior AY

  • Higher losses in one row \(\Rightarrow\) higher expected losses in the following row

  • The correlation \(\rho\) here is what drives the additional variability

  • Model reduces to LCL when \(\rho = 0\)

Priors for \(\{\alpha_w\}\), \(\{\sigma_d\}\), \(\{\beta_d\}\) and \(\rho\)

  • Priors are still wide

  • \(\{\alpha_w\}\), \(\{\sigma_d\}\) and \(\{\beta_d\}\) have the same distributions as in the LCL

  • \(\rho \sim U(-1 ,1)\), the full permissible range

Test Results

Incurred

Results and the K-S test show that this model is adequate

Paid

Worse than ODP and Mack; biased high for all lines

11.6.2.1 Predictive Distribution Simulation Process Example

This section is still a work in progress and is not important for the exam

The predictive distribution of outcomes is a mixture distribution

  • Mixing is specified by the posterior distribution of parameters

Below is a summary of the CCL R script provided by Meyers

Predictive distribution for \(\sum_{w=1}^{10} C_{w,10}\) (ultimate loss for all AYs @ age 10) is generated by a simulation

For each parameter set \(\{\alpha_w\}\), \(\{\beta_d\}\), \(\{\sigma_d\}\) and \(\rho\) from the posterior sample, start with the given \(C_{1,10}\) and calculate the log mean \(\mu_{2,10}\). Then simulate \(C_{2,10}\) from a lognormal distribution with log mean \(\mu_{2,10}\) and log standard deviation \(\sigma_{10}\)

Similarly, use each simulated value to simulate \(C_{3,10}, \ldots, C_{10,10}\) in turn. Then form the sum \(C_{1,10} + \sum_{w=2}^{10} C_{w,10}\)
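
A minimal R sketch of this simulation for a single parameter set (illustrative values only; Meyers' script repeats this for every posterior draw, and each repetition yields one point of the predictive distribution):

```r
# Predictive simulation of the age-10 values for one CCL parameter set.
# alpha, sigma10, rho and C[1] are illustrative assumptions.
set.seed(1)

alpha   <- seq(8.0, 8.5, length.out = 10)   # one alpha_w per AY
beta10  <- 0                                # beta_10 = 0 by construction
sigma10 <- 0.10                             # sigma_10
rho     <- 0.4

C  <- numeric(10)
mu <- numeric(10)
C[1]  <- 3000                               # given: C_{1,10} from the data
mu[1] <- alpha[1] + beta10

for (w in 2:10) {
  # Equation (11.7): the log mean depends on the previous AY's simulated value
  mu[w] <- alpha[w] + beta10 + rho * (log(C[w - 1]) - mu[w - 1])
  C[w]  <- rlnorm(1, meanlog = mu[w], sdlog = sigma10)
}

sum(C)   # C_{1,10} + sum_{w=2}^{10} C_{w,10}
```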

11.6.3 Changing Settlement Rate

Based on the LCL, with a \(\gamma\) parameter that allows for a speedup in claim settlement (likely from claims being reported and settled faster due to technology)

Data: Cumulative paid

Model Specification

\(C_{wd}\) follows a lognormal distribution as in the LCL (11.4), but with log means:

\[\begin{equation} \mu_{wd} = \alpha_w + \left[ \beta_d \cdot (1-\gamma)^{w-1}\right] \tag{11.8} \end{equation}\]
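
A small R sketch of equation (11.8), showing how a positive \(\gamma\) pulls the (negative) \(\beta_d\) term toward zero as \(w\) grows (illustrative values; the function name is mine):

```r
# CSR log mean, equation (11.8).
csr_mu <- function(alpha_w, beta_d, gamma, w) {
  alpha_w + beta_d * (1 - gamma)^(w - 1)
}

# With gamma > 0 and beta_d < 0, the beta term shrinks toward 0 as w increases,
# so the same development age carries a higher log mean in later AYs (speedup).
sapply(1:5, function(w) csr_mu(alpha_w = 8.0, beta_d = -0.5, gamma = 0.1, w = w))
```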

Remark. \(\gamma\)

  • \(\gamma > 0\) reflects an increase in payment speed since \((1-\gamma)^{w-1} < 1\) for \(w > 1\)

    • A positive \(\gamma\) causes \(\beta_d \cdot (1 - \gamma)^{w-1}\) to increase with \(w\) (recall \(\beta_d < 0\)) \(\Rightarrow\) indicates a speedup in claim settlement

    • A negative \(\gamma\) indicates a slowdown in the claim settlement rate

  • \(\gamma\) has less impact further out in the tail since fewer payments occur there

  • Model fits one \(\gamma\) for the whole triangle

Priors for \(\{\alpha_w\}\), \(\{\sigma_d\}\), \(\{\beta_d\}\) and \(\gamma\)

  • Wide priors similar to the LCL and CCL

  • \(\gamma \sim \mathcal{N}(0, 0.025)\)

Test Results

Overall the model fits well; it is slightly biased high on Personal Auto but is a big improvement over the other models