Discuss the assessment of operational, liquidity and insurance risks
Interest in the active management of op-risk has been kick-started in recent times by:
Advent of ERM
Introduction of new regulatory capital requirements
Increased emphasis on sophisticated quantitative models for other types of risks
Recognition that there is no inherent upside to taking op-risk (unlike market or credit risk)
High profile problems at other companies caused by op-risk failures
Reasons why a more formal approach is advantageous:
Op-risk has been the main driver behind many cases of major financial disaster
Op-risk is inter-linked with credit and market risk
Important to minimize the likelihood of op-risk failure during already stressed market conditions
Op-risk may otherwise be treated differently in different areas of the company
Can lead to key risks being overlooked and decisions being taken based on inaccurate information or an incorrect assessment of a business unit's risk-adjusted return
Benefits of consistent and effective op-risk management (distinct from the general benefits arising from management of other types of risks):
Minimize impact of reputational damage from incident linked to op-loss
These incidents are more likely than other risk events to give the company the appearance of being badly managed and ill-equipped to deal with errors
Minimizes day-to-day losses and reduces the potential for more extreme and costly incidents
Improves ability to meet business objectives
(less time spent on crisis management)
Strengthens overall ERM process and framework
Op-risk management is still very much a developing area, but it is widely accepted that all companies should be considering this issue
A comprehensive approach should be adopted
Focus should primarily be on the management of op-risk (later module)
Rather than attempting to measure the risk present with spurious precision
Organizations have only recently started gathering operational loss data
Based on initial analysis of publicly available data:
Distribution is skewed to the right
Severities have a heavy tailed distribution
Losses occur randomly in time
Loss frequency may vary considerably over time
2 types of losses to distinguish
Small day-to-day mistakes made in the course of business
May be modelled
e.g. using statistical distributions or high-claim-frequency non-life reserving techniques
Infrequent, large events
(e.g. major frauds or failed projects)
Require methods such as extreme value techniques (fitting GPD)
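As an illustration of the EVT route, the minimal Python sketch below fits a GPD to exceedances over a high threshold (peaks-over-threshold) and reads off a high severity quantile. The loss data, threshold choice and quantile level are illustrative assumptions, not a prescribed calibration.

```python
# Minimal sketch: peaks-over-threshold fit of a GPD to large operational losses.
# The loss data, threshold and quantile level below are illustrative assumptions.
import numpy as np
from scipy.stats import genpareto, lognorm

rng = np.random.default_rng(0)
losses = lognorm(s=1.2, scale=50_000).rvs(size=5_000, random_state=rng)  # placeholder loss history

threshold = np.quantile(losses, 0.95)                 # e.g. 95th percentile as tail threshold
exceedances = losses[losses > threshold] - threshold

# Fit the GPD to the exceedances (location fixed at zero)
shape, _, scale = genpareto.fit(exceedances, floc=0)

# Recover a high quantile of the overall severity distribution, e.g. the 99.9th percentile
p_exceed = len(exceedances) / len(losses)             # P(loss > threshold)
tail_prob = (1 - 0.999) / p_exceed                    # conditional tail probability
q_999 = threshold + genpareto.ppf(1 - tail_prob, shape, loc=0, scale=scale)
print(f"GPD shape={shape:.3f}, scale={scale:,.0f}, 99.9% severity quantile ≈ {q_999:,.0f}")
```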
Model approaches
Bottom-up models estimate the operational risk capital by starting the analysis at a low level of detail (i.e. looking at each category of op-risk in turn) and then aggregating the results
Top-down models use readily available data and fairly simple calculations to give a general picture of the operational risk of a company
Bottom-up models
Need a model that copes well with the outer tail of the loss distribution
EVT (Module 20) may be suitable for assessing infrequent, potentially catastrophic risks
Reasonable to assume that the inflation-adjusted loss amounts at least have a common severity distribution
Some suggest the GPD as one of the most useful tools for fitting the loss distribution in the extreme tails
However, although EVT may be appropriate, this approach is not widely used due to the lack of internal data
Given sufficient data to estimate a loss distribution, Monte Carlo simulation might be used to estimate op-risk capital at a given confidence interval (CI)
A useful technique given the potential linkage between op-risk and other risks
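A minimal sketch of such a Monte Carlo estimate is given below, assuming a Poisson loss frequency and a lognormal severity; the parameters, the 99.5% level and the "VaR minus mean" capital convention are illustrative assumptions only.

```python
# Minimal sketch: Monte Carlo estimate of op-risk capital at a chosen confidence level.
# Frequency/severity parameters and the capital convention are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 100_000
freq_mean = 20                    # assumed expected number of op-losses per year
sev_mu, sev_sigma = 9.0, 1.5      # assumed lognormal severity parameters

n_losses = rng.poisson(freq_mean, size=n_sims)
annual_loss = np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in n_losses])

confidence = 0.995
var = np.quantile(annual_loss, confidence)
capital = var - annual_loss.mean()   # one convention: unexpected loss above the mean
print(f"{confidence:.1%} VaR {var:,.0f}; op-risk capital ≈ {capital:,.0f}")
```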
Steps for applying scenario analysis to op-risk assessment
Group risk exposures into broad categories
e.g. risks involving financial fraud; system errors, etc.
Develop a plausible adverse scenario for each group of risks
Needs to be plausible enough to determine the consequences of the risk event
The scenario is deemed to be representative of all risks in the group
Calculate the consequence of the risk event occurring for each scenario
This will also involve senior staff input
Financial consequences could include:
Redress paid to those affected
Cost of correcting systems and records
Regulatory fees and fines
Opportunity costs while any changes are made, etc.
In practice the mid-point of a range of possible values is usually taken
Total costs calculated are taken as the financial cost of all risks represented by the chosen scenario
Assessment of likelihood and severity made by a scenario analysis can be displayed on a risk map
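A minimal sketch of these steps follows: scenario costs are taken as the mid-point of a range and each scenario is placed in a simple frequency/severity band for the risk map. All names, frequencies, cost ranges and band cut-offs are illustrative assumptions.

```python
# Minimal sketch: scenario costs (mid-point of a range) and a simple frequency/severity
# banding for a risk map. All figures and cut-offs are illustrative assumptions.
scenarios = {
    # name: (annual frequency estimate, (low cost, high cost))
    "Financial fraud": (0.05, (2_000_000, 8_000_000)),
    "System error":    (0.50, (100_000, 900_000)),
    "Process failure": (2.00, (10_000, 90_000)),
}

def band(value, low_cut, high_cut):
    """Classify a value as Low / Medium / High using two cut-offs."""
    return "Low" if value < low_cut else ("Medium" if value < high_cut else "High")

for name, (freq, (low, high)) in scenarios.items():
    cost = (low + high) / 2                        # mid-point of the range of possible values
    likelihood = band(freq, 0.1, 1.0)              # assumed likelihood cut-offs
    severity = band(cost, 500_000, 5_000_000)      # assumed severity cut-offs
    print(f"{name:16s} likelihood={likelihood:6s} severity={severity:6s} cost≈{cost:,.0f}")
```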
Benefits of scenario analysis
Captures the opinions, concerns and experience of risk managers
Does not rely heavily on the availability, accuracy or relevance of historical data
Provides an opportunity to identify hard-to-predict, high-impact events
Identifies and improves understanding of cause-and-effect relationships
Reduces risk-reward arbitrage opportunities
Advantage of bottom-up model
Limitations
Difficult to break down reported aggregate losses into their constituent components
Little robust internal historical data
Esp. for low probability and high impact events
Application of external data is difficult due to differences between companies
Basel advanced measurement approach (AMA)
Under the Basel AMA, op-risk is assessed using internal models (statistical analysis) and scenario analysis
(Subject to approval and continual checking by the supervisory authorities)
Standard is a 1-year holding period and a 99.9% CI
Consistent with the Basel standard for credit risk analysis (mod 30)
Op-risk loss categories:
8 business lines
(e.g. retail banking, agency service, etc)
Each further split into 7 loss-event types
(e.g. internal fraud, damage to physical assets etc)
3 specific areas that need credible data (on probability and expected size of potential losses)
Internal data on repetitive, high-frequency losses over a 3-5 year period
External data on non-repetitive, low frequency losses
Suitable stress scenarios to consider
Overall, statistical methods are difficult to apply due to the lack of data
(as banks have only recently started such information gathering)
A simpler approach (due to the lack of data) assumes that losses are related to the volume of transactions
Apply a weighting to the actual or expected volume of transactions
(e.g. Basel indicator and standardized approaches)
Disadvantage: op-risk exposure may not be proportional to business volumes (volume might not be a good proxy)
Does not account for the complexity of the risks the organization faces
International companies face additional difficulties and potential op-risks
Some companies have taken great steps to avoid op-risk by documenting procedures, having well-trained staff, etc.
Basel indicator approach (BIA)
Operational risk capital (\(K_{BIA}\))
\(K_{BIA} = \dfrac{\sum \limits_{t=1}^3 \alpha \: max(GI_t,0)}{\sum \limits_{t=1}^3 I(GI_t >0)}\)
\(GI_t\): Gross income in the prior year \(t\)
The Basel Committee suggests that \(\alpha\) should be 15%
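A minimal sketch of this calculation, with illustrative gross income figures:

```python
# Minimal sketch of the BIA formula above; gross income figures are illustrative.
alpha = 0.15
gross_income = [120.0, -15.0, 140.0]    # GI_t for the prior three years

numerator = sum(alpha * max(gi, 0) for gi in gross_income)
denominator = sum(1 for gi in gross_income if gi > 0)   # count of years with positive GI
k_bia = numerator / denominator
print(f"K_BIA = {k_bia:.2f}")            # 0.15 * (120 + 140) / 2 = 19.5
```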
Basel standardized approach (SA)
Similar to the BIA except that gross income is split and attributed to each of the 8 business lines, each with a different multiplier
\(K_{SA} = \dfrac{1}{3} \sum \limits_{t=1}^3 max \left(\sum \limits_{j=1}^8 \beta_j GI_{j,t}, 0\right)\)
\(\beta_j\) is between 12% and 18% depending on the business
\(GI_{j,t}\) is the gross income for business line \(j\) in the prior year \(t\)
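A minimal sketch of the SA calculation; the betas shown sit within the stated 12%-18% range and the gross income figures are illustrative.

```python
# Minimal sketch of the SA formula above; betas and gross income figures are illustrative.
betas = [0.18, 0.18, 0.12, 0.15, 0.18, 0.15, 0.12, 0.12]   # beta_j for the 8 business lines

# GI_{j,t}: one row per prior year t, one column per business line j
gross_income = [
    [10, 5, 20, 8, 3, 6, 12, 4],
    [12, 4, 18, 9, 2, 7, 11, 5],
    [ 9, 6, 22, 7, 4, 5, 13, 3],
]

k_sa = sum(
    max(sum(b * gi for b, gi in zip(betas, year)), 0)   # negative years floored at zero
    for year in gross_income
) / 3
print(f"K_SA = {k_sa:.2f}")
```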
Top-down models use readily available data and fairly simple calculations to give a general picture of the operational risk of a company
Four top-down models are considered below, but they all fail to capture low-probability, high-consequence risk events successfully
Method 1: Operational risk capital = Total risk capital - Non-op-risk capital
Advantages: Simple and forward looking
Limitations
Total risk capital needs to be estimated (not easy)
Inter-relationship between the different types of risks are ignored
Does not capture cause and effect scenarios
(i.e. where op-risk arises in the company and its specific impact)
Method 2: Looks at income volatility as the primary factor determining capital allocation (a sketch follows after this list)
Operational risk income volatility = Total income volatility - Non-op-risk income volatility
Relative advantage over method 1.
Limitations
Ignores the rapid evolution of companies and industries (over time the income volatility of companies will change)
(i.e. not forward looking)
Focus on income rather than value
(Does not capture the “softer” measures of risk, such as opportunity cost and the value of reputation/brand)
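A minimal sketch of the income-volatility decomposition stated above; the income series and the non-op-risk volatility figure are illustrative assumptions.

```python
# Minimal sketch of the income-volatility decomposition above; all figures are illustrative.
import numpy as np

annual_income = np.array([210, 180, 250, 150, 230, 190, 240, 160])  # historical income, e.g. in $m
total_income_vol = annual_income.std(ddof=1)

non_op_income_vol = 25.0   # volatility attributed to market, credit, etc. (assumed, from other models)
op_income_vol = total_income_vol - non_op_income_vol
print(f"Total income volatility {total_income_vol:.1f}; op-risk income volatility ≈ {op_income_vol:.1f}")
```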
Method 3: CAPM (capital asset pricing model) approach
Assumes that all market information is included in a company’s share price
The impact of any publicized op-loss can be identified by looking at the movement in the company's share price and stripping out the overall market movement (see the sketch after this list)
Relative advantage over method 2.
Includes both the aggregate effect of specific risk events and the “softer” issues
(e.g. opportunity cost and/or damage to reputation/brand)
Limitations
No information is provided on losses due to specific risks (only aggregate)
Level of op-risk capital is unaffected by any controls put in place
(little motivation to improve the risk management process)
Tail-end risks are not accounted for thoroughly
Does not help anticipate incidents of operational risk as there is no consideration of individual risks in isolation
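A minimal sketch of the share-price calculation referred to above; the beta, returns and market capitalisation are illustrative, and the risk-free rate is taken as negligible over a short event window.

```python
# Minimal sketch: strip the market movement out of the share price move around an op-loss
# announcement. Beta, returns and market cap are illustrative; risk-free rate taken as ~0
# over the short event window.
beta = 1.1                    # company's CAPM beta (assumed)
market_return = -0.02         # market return over the event window
actual_return = -0.09         # company's share price return over the same window
market_cap = 4_000_000_000    # market capitalisation before the incident

expected_return = beta * market_return             # CAPM expected move given the market
abnormal_return = actual_return - expected_return  # residual attributed to the op-loss
estimated_op_loss = -abnormal_return * market_cap
print(f"Abnormal return {abnormal_return:.1%}; estimated op-loss impact ≈ {estimated_op_loss:,.0f}")
```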
Method 4: Use data from similar companies to derive operational risk capital
Useful if there is little internal data available (but debatable how accurately one company mirrors another in terms of risk profile)
Systematic process for managing a company’s op-risk
Capital allocation and performance measurement (covered in a later module)
Risk mitigation and control (covered in a later module)
Risk transfer and finance (covered in a later module)
A comprehensive operational risk management policy should include:
Principles for operational risk management
Definitions and taxonomy for op-risk
Objectives and goals of op-risk management
Op-risk management processes and tools
Org structure, as it applies to operational risk management
Roles and responsibilities of different business areas involved in operational risk management
A wide range of tools and techniques is needed as op-risk covers a broad range of issues
Loss incident databases
Help a company to learn from previous loss incidents
Help analyse trends
Support root-cause analysis of op-losses
Support any mitigation strategies applied
Controls self-assessment
Internal analysis of the key risks and their controls and management implications
Risk mapping
(Discussed in Mod 13) Ranks the company’s key risk exposures by severity and frequency
Helps prioritize risk
Ensure resources are targeted to the most important risks
Risk indicators and minimum acceptable performance triggers
Quantitative measures can be set up (e.g. # of customer complaints)
Used to establish goals and levels of minimum acceptable performance (MAP)
Action plans should be in place to deal with any non-attainment of the MAP
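A minimal sketch of checking risk indicators against MAP triggers; the indicator names, values and thresholds are illustrative assumptions.

```python
# Minimal sketch: check risk indicators against minimum acceptable performance (MAP)
# triggers. Indicator names, values and thresholds are illustrative assumptions.
map_triggers = {
    # indicator: (current value, MAP trigger, True if a breach means value ABOVE the trigger)
    "customer_complaints_per_month": (42, 30, True),
    "failed_trades_per_1000":        (3.5, 5.0, True),
    "staff_turnover_rate":           (0.18, 0.15, True),
    "system_uptime":                 (0.992, 0.995, False),   # breach if value falls below
}

for indicator, (value, trigger, breach_if_above) in map_triggers.items():
    breached = value > trigger if breach_if_above else value < trigger
    if breached:
        print(f"MAP breached for {indicator}: {value} vs trigger {trigger} -> invoke action plan")
```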