23.4 Estimation Risk

Estimation risk is due to the uncertainty in selecting parameters (frequency, severity, trend, variability) and, specifically, the downstream impact of that uncertainty on model results

Maximum Likelihood Estimate

For large data sets, the MLE has the lowest estimation error among unbiased estimators (asymptotically efficient)

Uncertainty depends on the slope of the likelihood surface

  • The steeper the slope near the MLE, the more confident we are in the estimate

  • Measured by taking the 2nd derivative of the log-likelihood w.r.t. each pair of parameters

  • Negative of the 2nd derivative of the log likelihood = information matrix (easier to work with)

\[\begin{equation} I = \dfrac{\partial^2 [\overbrace{ -LL }^{\text{Neg Log Likelihood}}]}{\partial \underbrace{\vec{\alpha}}_{\text{Parameters}}^{\,2}} \tag{23.3} \end{equation}\]
  • Inverse of the information matrix = covariance matrix
\[\begin{equation} \mathbf{\Sigma} = I^{ -1 } \tag{23.4} \end{equation}\]
  • If the slope of \(-LL\) is steep near the selected parameters \(\Rightarrow\) more confidence in the selection
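
The two equations above can be sketched numerically. This is a hypothetical example, not the source's method: it assumes a lognormal severity fit (chosen because its MLE has a closed form), approximates the information matrix (23.3) with a central-difference Hessian of the negative log-likelihood at the MLE, and inverts it to get the covariance matrix (23.4).

```python
import numpy as np

# Hypothetical data: synthetic lognormal claim sizes with parameters (mu, sigma)
rng = np.random.default_rng(42)
n = 5_000
x = rng.lognormal(mean=8.0, sigma=1.5, size=n)

def neg_log_lik(params):
    """Negative log-likelihood of a lognormal sample; params = (mu, sigma)."""
    mu, sigma = params
    z = np.log(x)
    return (np.sum(np.log(x)) + n * np.log(sigma)
            + 0.5 * n * np.log(2 * np.pi)
            + np.sum((z - mu) ** 2) / (2 * sigma ** 2))

# Closed-form lognormal MLE: sample mean and (biased) std of the log-losses
mle = np.array([np.log(x).mean(), np.log(x).std()])

def hessian(f, p, h=1e-4):
    """Central-difference Hessian of f at p: the observed information of -LL."""
    k = len(p)
    H = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            pp = p.copy(); pp[i] += h; pp[j] += h
            pm = p.copy(); pm[i] += h; pm[j] -= h
            mp = p.copy(); mp[i] -= h; mp[j] += h
            mm = p.copy(); mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * h * h)
    return H

I = hessian(neg_log_lik, mle)   # information matrix, eq. (23.3)
Sigma = np.linalg.inv(I)        # parameter covariance matrix, eq. (23.4)
```

The steeper the curvature of \(-LL\) at the MLE, the larger the entries of \(I\) and the smaller the parameter variances in \(\mathbf{\Sigma}\).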

Model parameter uncertainty by:

  • Assume a joint lognormal distribution for the parameters and use the correlation from the \(\mathbf{\Sigma}\) produced by the MLE

    • e.g. for a Gamma, we’ll have a mean and \(\sigma\) for each of \(\alpha\) and \(\theta\), and a correlation between \(\ln(\alpha)\) and \(\ln(\theta)\)
  • A new set of parameters is selected for each iteration of the simulation
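
A minimal sketch of that per-iteration draw, with made-up inputs: the log-scale means and covariance below are hypothetical stand-ins for values derived from \(\mathbf{\Sigma}\); drawing multivariate normals on the log scale and exponentiating yields joint lognormal parameter sets.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical inputs: means and covariance of the LOG-parameters
# ln(alpha), ln(theta) of a fitted severity curve
log_mean = np.log([2.0, 3.0])
log_cov = np.array([[0.04, -0.015],
                    [-0.015, 0.09]])

def draw_parameters(n_iter):
    """One joint-lognormal parameter draw per simulation iteration."""
    z = rng.multivariate_normal(log_mean, log_cov, size=n_iter)
    return np.exp(z)   # exponentiating guarantees strictly positive parameters

params = draw_parameters(10_000)   # column 0: alpha draws, column 1: theta draws
```

Each simulation iteration then generates losses with its own \((\alpha, \theta)\) pair, so parameter uncertainty flows through to the simulated loss distribution.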

A joint lognormal distribution is selected (over a joint normal) because:

  • Eliminates negative losses

  • Parameter estimates themselves are heavy tailed for heavy-tailed distributions like the Pareto

    • e.g. \(\alpha\) in the simple Pareto \(F(x) = 1 - \left( \frac{\theta}{x} \right)^{\alpha}\) follows an inverse gamma distribution, which is similar to the lognormal
  • Results from simulations on small data sets showed that the joint lognormal is a reasonable approximation