6.2 Distribution of Actual Loss Emergence and Maximum Likelihood

Estimate parameters with MLE

6.2.1 Process Variance

Key assumption:

\[\begin{equation} \mathrm{Var}(c_i) \propto \mathrm{E}[c_i], \quad \text{with } \mathrm{Var}(c_i) = \sigma^2 \, \mathrm{E}[c_i] \tag{6.3} \end{equation}\]
  • Assume the ratio of variance to mean is the same constant for every cell in the triangle

  • \(c_i\) is the incremental loss in cell \(i\)

  • \(\sigma^2\) is the same for the entire triangle

Remark. Compare variance assumption with Mack and Venter

  • Mack-1994:

    Proposition 4.3: \(\mathrm{Var}\left (c_{i,k+1} \mid c_{i,1} \cdots c_{i,k}\right ) = \alpha_k^2 \: c_{i,k}\)

    A separate constant \(\alpha_k^2\) applies to each column \(k\) (development period), rather than one constant for the whole triangle

    Does \(n\) include all points here?

  • Venter Factors:

    Two versions of the BF method (section 5.4)

    Variance is either constant or \(\propto f(d)h(w)\)

    Is \(n\) the number of predicted points?

Estimate \(\mathbf{\sigma^2}\) based on the entire triangle (a computational sketch follows the list below):

\[\begin{equation} \dfrac{\text{Variance}}{\text{Mean}} = \sigma^2 = \dfrac{1}{n-p}\sum\limits_{i \in \Delta}\dfrac{(c_i - \mu_i)^2}{\mu_i} \tag{6.4} \end{equation}\]
  • \(n =\) # of data points in triangle

  • \(p =\) # of parameters

    • Cape Cod: \(p=3\)

      (\(\omega, \theta, ELR\))

    • LDF: \(p=2 +\) # of AYs

      (\(\omega, \theta,\) row parameters)

  • \(c_i =\) actual incremental loss emergence

  • \(\mu_i =\) expected incremental loss emergence

  • \(\sigma^2\) for \(LDF\) will tend to be higher due to more parameters used

  • This calculation is similar to Shapland’s dispersion factor
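
A minimal computational sketch of equation (6.4), assuming hypothetical actual and fitted incremental losses and a Cape Cod parameter count:

```python
import numpy as np

# Hypothetical actual (c) and fitted (mu) incremental losses, one entry per
# cell in the triangle, and the parameter count p (3 for Cape Cod).
c  = np.array([300., 200., 100., 280., 190., 310.])
mu = np.array([310., 195., 105., 285., 185., 300.])
p  = 3

n = len(c)                                     # number of data points in the triangle
sigma2 = ((c - mu) ** 2 / mu).sum() / (n - p)  # equation (6.4)
print(sigma2)
```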

Assume incremental losses follow an over-dispersed Poisson (ODP) distribution (a simulation sketch follows the caveat below)

\[\begin{equation} C_i \sim ODP(\lambda_i, \sigma^2) \tag{6.5} \end{equation}\]
  • Use ODP so that the variance need not equal the mean

  • \(C_i = \sigma^2 X_i\) where \(X_i \sim Poi(\lambda_i)\)

  • \(\mathrm{E}[C_i] = \sigma^2 \lambda_i = \mu_i\)

  • \(\mathrm{Var}(C_i) = \sigma^2 \mu_i\)

  • Here \(C_i\) denotes the random variable and \(c_i\) the observed value

Caveat:

  • A potential issue with ODP is that some granularity is lost, since reserves are estimated in multiples of \(\sigma^2\)

  • However \(\sigma^2\) is generally small so little precision is lost
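
A minimal simulation sketch of the ODP assumption in (6.5), using a hypothetical cell mean \(\mu_i = 500\) and \(\sigma^2 = 25\):

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma2 = 500.0, 25.0     # hypothetical cell mean and dispersion factor
lam = mu / sigma2            # Poisson mean: lambda_i = mu_i / sigma^2

# ODP draw: C_i = sigma^2 * X_i with X_i ~ Poisson(lambda_i), so every outcome
# is a multiple of sigma^2 (the granularity caveat above).
C = sigma2 * rng.poisson(lam, size=100_000)

print(C.mean())             # ~ mu      (E[C_i] = sigma^2 * lambda_i)
print(C.var() / C.mean())   # ~ sigma2  (variance-to-mean ratio)
```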

6.2.2 MLE for Best Parameters

Not super testable

Given a set of observed incremental losses \(\{c_i\}\), we want to find the \(\omega\), \(\theta\), and \(ELR\) that best fit the actual losses

Proposition 6.1 Loglikelihood function

\[l = \sum \limits_{i \in \Delta} \left[ c_i \ln(\mu_i) - \mu_i \right]\]

  • Maximize \(l\) over the parameters \(\omega\), \(\theta\), and \(ELR\) by taking the derivative of \(l\) w.r.t. each parameter and setting it equal to zero (a numerical sketch follows the proof below)

Proof. For each cell \(i\) we have incremental losses \(C_i\) with mean:

\[\mu_i = \sigma^2 \lambda_i\]

The likelihood function is:

\[\prod \limits_{i \in \Delta} \Pr(C_i = c_i) = \prod \limits_{i \in \Delta} \dfrac{\lambda_i ^{c_i / \sigma^2} e^{-\lambda_i}}{(c_i / \sigma^2)!} = \prod \limits_{i \in \Delta} \dfrac{(\mu_i / \sigma^2) ^{c_i / \sigma^2} e^{-\lambda_i}}{(c_i / \sigma^2)!}\]

Take the log of the above

\[\sum_{i \in \Delta} \left[ \dfrac{c_i}{\sigma^2} \ln \left( \dfrac{\mu_i}{\sigma^2} \right) - \dfrac{\mu_i}{\sigma^2} - \ln \left[ \left(\dfrac{c_i}{\sigma^2} \right)! \right] \right]\]

If we assume \(\sigma^2\) is known and constant, then maximizing this is equivalent to maximizing the function in Proposition 6.1

  • The \(\sigma^2\) in the first 2 terms can be dropped: dividing by the positive constant \(\sigma^2\) does not change which parameters maximize the function, and \(\ln(\mu_i / \sigma^2) = \ln(\mu_i) - \ln(\sigma^2)\), where the \(\ln(\sigma^2)\) piece does not depend on the parameters

  • The 3rd term drops out because it involves only \(c_i\) and \(\sigma^2\), so it does not depend on the parameters
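
A minimal numerical sketch of the fit for Method 1 (Cape Cod with a loglogistic growth curve), assuming a hypothetical toy triangle laid out as one record per cell; the data, starting values, and optimizer choice are illustrative, not Clark's worked example:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy data: one record per triangle cell (AY premium, ages at the
# start/end of the development interval, actual incremental loss c_i).
premium  = np.array([1000., 1000., 1000., 1000., 1000., 1000.])
age_from = np.array([  0.,   12.,   24.,    0.,   12.,    0.])
age_to   = np.array([ 12.,   24.,   36.,   12.,   24.,   12.])
c        = np.array([300.,  200.,  100.,  280.,  190.,  310.])

def G(x, omega, theta):
    """Loglogistic growth function G(x) = x^omega / (x^omega + theta^omega)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = x[pos] ** omega / (x[pos] ** omega + theta ** omega)
    return out

def neg_loglik(params):
    """Negative of l = sum over cells of c_i*ln(mu_i) - mu_i (Proposition 6.1)."""
    elr, omega, theta = params
    mu = elr * premium * (G(age_to, omega, theta) - G(age_from, omega, theta))
    mu = np.maximum(mu, 1e-10)   # guard against log(0)
    return -np.sum(c * np.log(mu) - mu)

res = minimize(neg_loglik, x0=[0.8, 1.5, 20.0],
               bounds=[(1e-6, None)] * 3, method="L-BFGS-B")
elr_hat, omega_hat, theta_hat = res.x
print(elr_hat, omega_hat, theta_hat)
```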

6.2.2.1 MLE for Method 1

Maximizing the loglikelihood with respect to \(ELR\) gives:

\[ELR = \dfrac{\sum_{i \in \Delta} c_i}{\sum_{i \in \Delta} P_i \times [G(y) - G(x)]}\]

  • This is the sum of all incremental losses in the triangle \(\div\) the sum over the same cells of premium \(\times\) expected portion of claims emerging

  • This matches the Cape Cod \(ELR\) from Hurlimann (see the worked example below)
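
For example, with three AYs each carrying premium of 1,000, expected percentages of ultimate emerged to date of 80%, 60%, and 30%, and losses emerged to date of 700, 450, and 200 (hypothetical numbers; each row's incremental cells have been summed, so the bracketed terms telescope to the % emerged to date):

\[ELR = \dfrac{700 + 450 + 200}{1{,}000(0.80) + 1{,}000(0.60) + 1{,}000(0.30)} = \dfrac{1{,}350}{1{,}700} \approx 0.794\]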

6.2.2.2 MLE for Method 2

Maximizing the loglikelihood with respect to \(ULT_{AY}\) gives:

\[ULT_{AY} = \dfrac{\sum_{i \in AY} c_i}{\sum_{i \in AY}[G(y) - G(x)]}\]

  • Sum of claims reported to date \(\div\) % expected reported to date in a row

  • This is the LDF method of estimating the ultimate (see the worked example below)
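
For example, an AY with 450 reported to date and an expected 60% reported to date (hypothetical numbers) gives:

\[ULT_{AY} = \dfrac{450}{0.60} = 750\]

which is the same as applying an LDF of \(1/0.60 \approx 1.667\) to the 450 reported.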

6.2.3 Parameter Variance

Not super testable

Information matrix \(I\):
(matrix of 2nd partial derivatives of \(l\) with respect to each pair of parameters)

\[\begin{equation} \begin{bmatrix} \dfrac{\partial^2 l}{\partial^2 ELR} & \dfrac{\partial^2 l}{\partial ELR \: \partial \omega} & \dfrac{\partial^2 l}{\partial ELR \: \partial \theta}\\ \dfrac{\partial^2 l}{\partial \omega \: \partial ELR} & \dfrac{\partial^2 l}{\partial^2 \omega} & \dfrac{\partial^2 l}{\partial \omega \: \partial \theta}\\ \dfrac{\partial^2 l}{\partial \theta \: \partial ELR} & \dfrac{\partial^2 l}{\partial \theta \: \partial \omega} & \dfrac{\partial^2 l}{\partial^2 \theta}\\ \end{bmatrix} \tag{6.6} \end{equation}\]

Covariance matrix:

\[\begin{equation} \mathbf{\Sigma} = -\sigma^2 \times I^{-1} \tag{6.7} \end{equation}\]
  • \(3 \times 3\) matrix for Cape Cod

  • \((n+2) \times (n+2)\) for the LDF method, where \(n\) = # of AYs (a numerical sketch follows)
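
A minimal sketch of how (6.6) and (6.7) could be approximated numerically; the `hessian` helper and the references to a `loglik` function and fitted parameters are illustrative assumptions, not Clark's calculation:

```python
import numpy as np

def hessian(f, x, h=1e-5):
    """Central-difference approximation to the matrix of 2nd partial
    derivatives of a scalar function f at the point x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

# Assuming loglik(params) is the loglikelihood l from the fitting sketch above
# (e.g. lambda p: -neg_loglik(p)) and sigma2 is the estimate from (6.4):
# I_mat = hessian(loglik, res.x)           # information matrix, equation (6.6)
# Sigma = -sigma2 * np.linalg.inv(I_mat)   # covariance matrix, equation (6.7)
```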

6.2.4 Variance of the Reserves

Process Variance of \(R\)

\[\sigma^2 \sum\limits_{i \, \in \, \text{future}} \mu_i\]

  • Technically testable if given \(\sigma^2\)

  • Process variance is proportional to the mean, with constant of proportionality \(\sigma^2\)

Parameter Variance of \(R\)

\[\mathrm{Var}(\mathrm{E}[R]) = (\partial R)'\mathbf{\Sigma} (\partial R)\]

  • \(\mathbf{\Sigma}\) is from equation (6.7) above

  • \(\partial R\) = vector of partial derivatives of the reserve with respect to each parameter

  • Calculation heavy, not testable; a numerical sketch follows for illustration
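
A minimal sketch of the parameter-variance calculation, assuming a hypothetical `reserve(params)` function that returns the expected reserve \(R = \sum \mu_i\) over future cells, plus the `Sigma` and `sigma2` quantities above:

```python
import numpy as np

def gradient(f, x, h=1e-5):
    """Central-difference gradient of a scalar function f at the point x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Assuming reserve(params) sums the fitted future incrementals mu_i for the
# given parameters, with Sigma from (6.7) and sigma2 from (6.4):
# dR        = gradient(reserve, res.x)          # vector of dR/d(parameter)
# param_var = dR @ Sigma @ dR                   # parameter variance of R
# proc_var  = sigma2 * reserve(res.x)           # process variance of R
# total_sd  = np.sqrt(param_var + proc_var)     # standard deviation of reserves
```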