23.4 Estimation Risk
Estimation risk arises from uncertainty in the selected parameters (frequency, severity, trend, variability), and specifically from the downstream impact of that uncertainty on model results
Maximum Likelihood Estimate
For large data sets, the MLE has the lowest estimation error among unbiased estimators
Uncertainty depends on the slope of the likelihood surface
The steeper the slope near the MLE, the more confident we are in the estimate
Measured by taking the 2nd derivatives of the log-likelihood w.r.t. each pair of parameters
Negative of the matrix of 2nd derivatives of the log-likelihood = information matrix (the log-likelihood is easier to work with than the likelihood)
- Inverse of the information matrix = covariance matrix \(\mathbf{\Sigma}\) of the parameter estimates
- If the slope of \(-LL\) is steep near the selection \(\Rightarrow\) more confidence in the selection (see the sketch below)
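A minimal sketch of pulling \(\mathbf{\Sigma}\) out of the curvature of \(-LL\), assuming a gamma severity; the loss sample is made up, and the BFGS inverse Hessian is only a rough approximation to the inverse information matrix. Fitting in \((\ln\alpha, \ln\beta)\) means the resulting \(\mathbf{\Sigma}\) is already the covariance of the log-parameters used below:

```python
import numpy as np
from scipy import optimize, stats

# Made-up loss sample, only to make the sketch runnable.
losses = stats.gamma.rvs(a=2.0, scale=5000.0, size=250, random_state=42)

def neg_log_lik(log_params):
    # Parameterize in logs so the optimizer never proposes negative values.
    alpha, beta = np.exp(log_params)
    return -np.sum(stats.gamma.logpdf(losses, a=alpha, scale=beta))

# BFGS returns an approximate inverse Hessian of -LL at the optimum:
# the inverse information matrix, i.e. Sigma for (ln alpha, ln beta).
res = optimize.minimize(neg_log_lik, x0=np.log([1.0, losses.mean()]),
                        method="BFGS")
mu_hat = res.x        # MLE of (ln alpha, ln beta)
Sigma = res.hess_inv  # approximate covariance matrix of the log-parameters
print(np.exp(mu_hat), Sigma)
```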
Model parameter uncertainty by:
Assume a joint lognormal distribution for the parameters, using the MLE estimates and the correlations from \(\mathbf{\Sigma}\)
- e.g. for a Gamma, we'd have a mean and \(\sigma\) for each of \(\ln(\alpha)\) and \(\ln(\beta)\), plus a correlation between \(\ln(\alpha)\) and \(\ln(\beta)\)
A new set of parameters is selected for each iteration of the simulation (see the sketch below)
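A minimal sketch of that resampling step, using hard-coded illustrative values for \(\mu\) and \(\mathbf{\Sigma}\) (in practice, the MLE outputs from the previous sketch) and a fixed claim count per iteration for simplicity:

```python
import numpy as np
from scipy import stats

# Illustrative MLE outputs for (ln alpha, ln beta) of a gamma severity.
mu_hat = np.array([0.69, 8.52])
Sigma = np.array([[0.010, -0.006],
                  [-0.006, 0.012]])

rng = np.random.default_rng(7)
n_iter, n_claims = 10_000, 100
agg = np.empty(n_iter)
for i in range(n_iter):
    # New parameter set each iteration: (ln alpha, ln beta) ~ N(mu_hat, Sigma),
    # so (alpha, beta) is jointly lognormal.
    alpha_i, beta_i = np.exp(rng.multivariate_normal(mu_hat, Sigma))
    agg[i] = stats.gamma.rvs(a=alpha_i, scale=beta_i, size=n_claims,
                             random_state=rng).sum()

# Estimation risk now widens the tails of the aggregate distribution.
print(np.percentile(agg, [50, 95, 99]))
```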
The joint lognormal distribution is selected (over a joint normal) because:
Eliminates negative parameter values, and hence negative losses
Parameter estimates themselves are heavy tailed for heavy-tailed distributions like the Pareto
- e.g. \(\hat{\alpha}\) in the simple Pareto \(F(x) = 1 - \left( \frac{\theta}{x} \right)^{\alpha}\) follows an inverse gamma distribution, which is similar to a lognormal (see the derivation after this list)
Simulations on small data sets showed that the joint lognormal assumption is reasonable
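A quick derivation of the inverse gamma claim above, assuming \(\theta\) is known: \(\ln(X/\theta)\) is exponential with rate \(\alpha\), so

\[
\hat{\alpha}_{\text{MLE}} = \frac{n}{\sum_{i=1}^{n} \ln(x_i/\theta)}, \qquad \sum_{i=1}^{n} \ln(X_i/\theta) \sim \text{Gamma}(n,\ \text{rate}=\alpha),
\]

i.e. \(\hat{\alpha}\) is \(n\) divided by a gamma variate and therefore inverse gamma: positive and right-skewed, much like a lognormal.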