Description
This presentation explains different credit risk models.
CREDIT RISK MODELS
PRESENTED BY:
GROUP-1
Credit Risk
• Credit risk is the risk of loss due to a debtor's non-payment of a loan or other line of credit (either the principal or interest (coupon) or both)
Lenders to consumers
• Lenders employ their own models (credit scorecards) to rank customers by risk and apply strategy accordingly, e.g., pricing unsecured loans for their higher risk
• Setting of credit limits (e.g., credit cards)
Lenders to business
• Limit the borrower's actions, e.g., restrict it from buying back shares
• Allow for monitoring of the debt by requiring audits and monthly reports
Faced by business
• Assessed by internal risk rating departments
• Assessed by rating agencies, e.g., S&P, Moody's, etc.
Faced by individuals
• E.g., as depositors in banks
• E.g., as employees, exposed to the firm's ability to pay wages
Credit Risk
RBI definition: Credit risk is defined as the possibility of losses associated with diminution in the credit quality of borrowers or counterparties.
In a bank’s portfolio, losses stem from outright default due to inability or unwillingness of a customer or counterparty to meet commitments in relation to lending, trading, settlement and other financial transactions. Alternatively, losses result from reduction in portfolio value arising from actual or perceived deterioration in credit quality.
External factors
• Changes in government policies: trade policy, fiscal policy, import-export policy
• Slowdown in the economy
• Changes in market variables
Internal factors
• Business failure risk
• Business management risk
• Financial management risk
• Settlement/pre-settlement risk on derivative products
Portfolio risk
• Adverse distribution
• Adverse concentration
• Large exposure
• Correlation between industry sectors
Willful default
Size of Expected Loss

Expected Loss: EL = PD × EaD × LGD

1. What is the probability of a counterparty going into default? "Probability of Default" = PD
2. How much will that customer owe the bank in the case of default (expected exposure)? "Loan Equivalency" (Exposure at Default) = EaD
3. How much of that exposure is the bank going to lose? "Severity" (Loss Given Default) = LGD
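Read as a single formula, the three answers multiply together. A minimal Python sketch of this decomposition (the input values below are illustrative assumptions, not figures from the presentation):

```python
# Expected loss as the product of the three components above.
def expected_loss(pd: float, ead: float, lgd: float) -> float:
    """EL = PD x EaD x LGD."""
    return pd * ead * lgd

# Illustrative example: 2% default probability, $1,000,000 exposure
# at default, 45% loss given default -> EL = $9,000.
print(expected_loss(pd=0.02, ead=1_000_000, lgd=0.45))
```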
Credit Risk Models
• Structural models use the evolution of firms' structural variables, such as asset and debt values, to determine the time of default.
• Reduced form models do not consider the relation between default and firm value in an explicit manner; the parameters governing the default hazard rate are inferred from market data.
Structural Models
• Merton’s model (1974) was the first modern model of default and is considered the first structural model • Structural default models provide a link between the credit quality of a firm and the firm’s economic and financial conditions • Whereas reduced models exogenously specify recovery rates, in structural models the value of the firm’s assets and liabilities at default will determine recovery rates
MERTON’S MODEL
Merton’s Model
• Merton (1974) makes use of the Black and Scholes (1973) option pricing model to value corporate liabilities
• This is a straightforward application only if one adapts the firm's capital structure and the default assumptions to the requirements of the Black-Scholes model.
Merton’s Model
Capital structure of the firm:
• Equity
• A zero-coupon bond with face value D and maturity T

Their values at time t are denoted by Et and z(t, T) respectively, for 0 ≤ t ≤ T. The firm's asset value Vt is the sum of the equity and debt values.
Equity represents a call option on the firm’s assets with maturity T and strike price of D Implicit in this argument is the fact that the firm can only default at time T
This assumption is important to be able to treat the firm’s equity as a vanilla European call option, and therefore apply the Black-Scholes pricing formula.
Merton’s Model
Equity represents a call option on the firm’s assets with maturity T and strike price of D
At maturity T:
• If VT > D, the firm's asset value is enough to pay back the face value of the debt D; the firm does not default and shareholders receive VT − D.
• If VT < D, the firm defaults; bondholders take control of the firm, and shareholders receive nothing.
Merton’s Model
Other assumptions are:
• No transaction costs, bankruptcy costs, taxes or problems with indivisibilities of assets
• Continuous-time trading
• Unrestricted borrowing and lending at a constant interest rate r
• No restrictions on the short selling of the assets
• The value of the firm is invariant under changes in its capital structure (Modigliani-Miller theorem)
• The firm's asset value follows a diffusion process given by

dVt = μ Vt dt + σV Vt dWt

where σV is the (relative) asset volatility and Wt is a Brownian motion.
Merton’s Model
• The payoffs to equity holders and bondholders at time T under the assumptions of this model are:

ET = max(VT − D, 0) and z(T, T) = min(VT, D)

• Applying the Black-Scholes pricing formula, the value of equity at time t (0 ≤ t ≤ T) is given by

Et = Vt Φ(d1) − D e^(−r(T−t)) Φ(d2)

where Φ(.) is the distribution function of a standard normal random variable and d1 and d2 are given by

d1 = [ln(Vt/D) + (r + σV²/2)(T − t)] / (σV √(T − t)),  d2 = d1 − σV √(T − t)
Merton’s Model
• The probability of default at time T is given by

PD = P(VT < D) = Φ(−d2)

• Therefore, the value of the debt at time t is z(t, T) = Vt − Et.
• In order to implement Merton's model we have to estimate the firm's asset value Vt and its volatility σV (both unobservable processes), and we have to transform the debt structure of the firm into a zero-coupon bond with maturity T and face value D.
• The maturity T of the zero-coupon bond can be chosen either to represent the maturity structure of the debt, for example as the Macaulay duration of all the liabilities, or simply as a required time horizon (for example, when pricing a credit derivative with some specific maturity).
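A minimal sketch of these formulas (all inputs are illustrative; note that Φ(−d2) is the risk-neutral default probability, while a real-world probability would use the asset drift μ in place of r):

```python
# Illustrative Merton model: equity as a call on the firm's assets.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal distribution function

def merton(V, D, r, sigma_V, t, T):
    tau = T - t
    d1 = (log(V / D) + (r + 0.5 * sigma_V**2) * tau) / (sigma_V * sqrt(tau))
    d2 = d1 - sigma_V * sqrt(tau)
    E = V * N(d1) - D * exp(-r * tau) * N(d2)  # equity value E_t
    z = V - E                                  # debt value z(t, T) = V_t - E_t
    pd = N(-d2)                                # risk-neutral P(V_T < D)
    return E, z, pd

print(merton(V=100.0, D=80.0, r=0.05, sigma_V=0.25, t=0.0, T=1.0))
```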
Merton’s Model – Advantages & Criticisms
• The main advantage of Merton's model is that it allows one to directly apply the theory of European option pricing developed by Black and Scholes (1973).
• One problem of Merton's model is the restriction of the default time to the maturity of the debt, ruling out the possibility of an early default, no matter what happens to the firm's value before the maturity of the debt.
• Another handicap of the model is that the usual capital structure of a firm is much more complicated than a single zero-coupon bond.
• Another characteristic of Merton's model, which will also be present in some of the FPM, is the predictability of default. Since the firm's asset value is modelled as a geometric Brownian motion and default can only happen at the maturity of the debt, default can be predicted with increasing precision as the maturity of the debt comes near. As a result, in this approach default does not come as a surprise, which makes the models generate very low short-term credit spreads. Introducing jumps in the process followed by the asset value has been one of the solutions considered for this problem.
FIRST PASSAGE MODEL
First Passage Models
• First Passage Models (FPM) were introduced by Black and Cox (1976), extending the Merton model to the case where the firm may default at any time, not only at the maturity date of the debt.
• Although Black and Cox (1976) considered a time-dependent default threshold, let us assume first a constant default threshold K > 0. If we are at time t ≥ 0, default has not been triggered yet and Vt > K, then the time of default τ is given by

τ = inf { s ≥ t : Vs ≤ K }

• Using the properties of the Brownian motion Wt, in particular the reflection principle, we can infer the default probability from time t to time T:

P(τ ≤ T) = Φ( (ln(K/Vt) − ν(T−t)) / (σV √(T−t)) ) + (K/Vt)^(2ν/σV²) Φ( (ln(K/Vt) + ν(T−t)) / (σV √(T−t)) ),  with ν = μ − σV²/2
• FPM have been extended to account for stochastic interest rates, bankruptcy costs, taxes, debt subordination, strategic default, time-dependent and stochastic default barriers, jumps in the asset value process, etc.
• These extensions are more analytically complex.
• The (positive) default threshold can be interpreted in various ways. We can think of it as a safety covenant of the firm's debt which allows the bondholders to take control of the company once its asset value has reached this level.
• The safety covenant would act as a protection mechanism for the bondholders against an unsatisfactory corporate performance.
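A sketch of the constant-barrier first passage probability given above (the drift μ, volatility σV and all inputs are assumed values for illustration):

```python
# First passage default probability for a constant barrier K under a
# geometric Brownian motion asset value (reflection-principle result).
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def fp_default_prob(V, K, mu, sigma_V, T):
    nu = mu - 0.5 * sigma_V**2          # drift of log asset value
    a = log(K / V)                      # log distance to the barrier (< 0)
    return (N((a - nu * T) / (sigma_V * sqrt(T)))
            + (K / V) ** (2 * nu / sigma_V**2)
            * N((a + nu * T) / (sigma_V * sqrt(T))))

print(fp_default_prob(V=100.0, K=60.0, mu=0.08, sigma_V=0.25, T=5.0))
```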
FPM - Drawbacks
• The principal drawback of FPM is the analytical complexity they introduce.
• This mathematical complexity makes it difficult to obtain closed-form expressions for the value of the firm's equity and debt, or even for the default probability.
• Predictability of defaults
• Predictability of recovery
VASICEK’S MODEL
VASICEK’S MODELS
There are 2 models used by BASEL II given by Vasicek:
1. Vasicek’s Stochastic Interest Rate Model
2. Vasicek’s Probability of Loss (PL) on Loss Portfolio Model
I VASICEK’S MODEL on Interest rates
• The Vasicek model is a mathematical model describing the evolution of interest rates.
• It is a type of "one-factor model" (short rate model) as it describes interest rate movements as driven by only one source of market risk. The model can be used in the valuation of interest rate derivatives, and has also been adapted for credit markets. It was introduced in 1977 by Oldrich Vasicek.
• Vasicek's model was the first to capture mean reversion, an essential characteristic of interest rates that sets them apart from other financial prices. Thus, as opposed to stock prices for instance, interest rates cannot rise indefinitely.
• This is because at very high levels they would hamper economic activity, prompting a decrease in interest rates. Similarly, interest rates cannot decrease indefinitely. As a result, interest rates move in a limited range, showing a tendency to revert to a long-run value.
Ornstein-Uhlenbeck stochastic process

The model specifies that the instantaneous interest rate follows the stochastic differential equation:

drt = a(b − rt) dt + σ dWt

• Wt is a Wiener process modelling the random market risk factor.
• The standard deviation parameter, σ, determines the volatility of the interest rate.
• The parameter b represents the long-run equilibrium value towards which the interest rate reverts.
• The parameter a, governing the speed of adjustment, needs to be positive to ensure stability around the long-term value. For example, when rt is below b, the drift term a(b − rt) becomes positive for positive a, generating a tendency for the interest rate to move upwards (toward equilibrium).
• The drift factor a(b − rt) represents the expected instantaneous change in the interest rate at time t.
Solving the stochastic equation

The SDE above has the closed-form solution

rt = b + (r0 − b) e^(−at) + σ ∫₀ᵗ e^(−a(t−s)) dWs
Stochastic interest rate modelling with the Ornstein-Uhlenbeck process
• Forecasting of future interest rates.
• The short-term interest rate is assumed to be constant in Merton's model, so the model disregards interest rate risk. This shortcoming can be overcome by allowing for stochastic interest rates according to Vasicek's model.
Disadvantages
• The main disadvantage is that, under Vasicek's model, it is theoretically possible for the interest rate to become negative, an undesirable feature. (This shortcoming was fixed in the Cox-Ingersoll-Ross model. The Vasicek model was further extended in the Hull-White model.)
• The use of Vasicek's model does not significantly improve the predictive power of current market data. The evidence strongly suggests the need for a multiple-factor model to improve the fit to the observed yield curve and the model's predictive ability for future interest rate movements, if the market data contains any information.
II. Vasicek's Model & BASEL II
• Vasicek applied Merton’s model to a portfolio of borrowers. As in Merton’s model, each borrower is characterised by a lognormal asset value process. A borrower defaults when its asset-value falls below a threshold defined by its liabilities. • Correlations between borrowers’ defaults arise from correlation between their asset values. Correlations between asset values, in turn, are described by a common dependence of asset returns on a single systematic factor, representing the state of the economy. Thus the asset return for each borrower has a systematic component, reflecting the effect of the economy on the borrower, and an idiosyncratic component, describing fortunes and misfortunes unique for the borrower.
Vasicek's Model & BASEL II
• Assuming a homogeneous portfolio of loans with zero recoveries, Vasicek derived the distribution function for the portfolio loss. The Vasicek model has also been refined and extended to include non-homogeneous portfolios and non-zero stochastic recoveries.
• To describe different systematic effects for firms belonging to different industries and/or different geographical regions, the single systematic factor in the Vasicek model was replaced by a set of correlated systematic factors. This multi-factor extension of the Vasicek model lies at the foundation of such industry models as KMV's PortfolioManager™ and RiskMetrics' CreditMetrics™. Not all models of portfolio credit risk are of the Merton-Vasicek type (e.g., CSFB's Credit Risk Plus™).
Vasicek's Model & BASEL II

The cumulative probability that the percentage loss on a portfolio of n loans does not exceed θ is

P(L ≤ θ) = Σ (k ≤ nθ) Pk    ……………… Equation 1

where the Pk are given by an integral expression in Oldrich Vasicek's memo, "Probability of Loss on Loan Portfolio".
Vasicek's Model & BASEL II

Substituting the integral expression for Pk into Equation 1 and letting the number of loans n grow large yields the well-known limiting loss distribution

P(L ≤ θ) = Φ( (√(1−ρ) Φ⁻¹(θ) − Φ⁻¹(p)) / √ρ )

where p is the common default probability of each loan and ρ the asset correlation between any two borrowers.
PROBABILITY OF LOSS ON LOAN PORTFOLIO

Consider a portfolio consisting of n loans in equal dollar amounts. Let the probability of default on any one loan be p, and assume that the values of the borrowing companies' assets are correlated with a coefficient ρ for any two companies. We wish to calculate the probability distribution of the percentage gross loss L on the portfolio, that is,

L = (1/n) Σ (i = 1..n) Li

where Li = 1 if loan i defaults and Li = 0 otherwise.
The company defaults on its loan if the value of its assets drops below the contractual value of its obligations Di payable at time T. We thus have

p = P[ Ai(T) < Di ]    for each borrower i
Because of the joint normality and the equal correlations, the standardized asset returns zi can be represented as

zi = √ρ Y + √(1−ρ) εi

where Y is a systematic factor common to all companies and the εi are mutually independent idiosyncratic standard normal variables.
Conditional on the systematic factor Y, defaults are independent and each loan defaults with probability

p(Y) = Φ( (Φ⁻¹(p) − √ρ Y) / √(1−ρ) )
MOODY’S KMV MODEL
Elements of credit risk
The elements of credit risk can therefore be grouped as follows:
Standalone Risk
• Default probability: the probability that the counterparty or borrower will fail to service obligations.
• Loss given default: the extent of the loss incurred in the event the borrower or counterparty defaults.
Portfolio Risk
• Default correlations: the degree to which the default risks of the borrowers and counterparties in the portfolio are related.
• Exposure: the size, or proportion, of the portfolio exposed to the default risk of each counterparty and borrower.
Measuring default probability: the problem
• Three main elements determine the default probability of a firm:
• Value of Assets: the market value of the firm's assets.
• Asset Risk: the uncertainty or risk of the asset value. This is a measure of the firm's business and industry risk.
• Leverage: the extent of the firm's contractual liabilities.
MOODY’S KMV Model
• Whereas the relevant measure of the firm's assets is always their market value, the book value of liabilities relative to the market value of assets is the pertinent measure of the firm's leverage, since that is the amount the firm must repay.
• The default risk of the firm increases as the value of the assets approaches the book value of the liabilities, until finally the firm defaults when the market value of the assets is insufficient to repay the liabilities.
MOODY’S KMV Model
• Figure illustrates the evolution of the asset value and book liabilities of Winstar Communications, a New York Telephone company that filed for Chapter 11 bankruptcy protection in April 2001.
MOODY’S KMV Model
• In studies of defaults, it is found that in general firms do not default when their asset value reaches the book value of their total liabilities.
• The default point, the asset value at which the firm will default, generally lies somewhere between total liabilities and current, or short-term, liabilities.
• The relevant net worth of the firm is therefore the market value of the firm's assets minus the firm's default point.
MOODY’S KMV Model
Asset risk and Asset volatility
• The asset risk is measured by the asset volatility, the standard deviation of the annual percentage change in the asset value. • Asset volatility is related to the size and nature of the firm's business.
• Asset value, business risk and leverage can be combined into a single measure of default risk.
• This ratio is called the distance-to-default and is calculated as:

Distance-to-Default = (Market value of assets − Default point) / (Market value of assets × Asset volatility)
Measuring Default Probability: Practical Approach
• Three basic types of information available that are relevant to the default probability of a publicly traded firm:
– financial statements, – market prices of the firm's debt and equity, – subjective appraisals of the firm's prospects and risk
• MKMV has implemented the VK model to calculate an Expected Default Frequency™ (EDF™) credit measure, which is the probability of default during the forthcoming year, or years, for firms with publicly traded equity.
• The EDF value requires equity prices and certain items from financial statements as inputs.
• The EDF credit measure defines default as the nonpayment of any scheduled payment, interest or principal.
Determining Default Probability
Three steps in the determination of the default probability of a firm:
• Estimate asset value and volatility: in this step the asset value and asset volatility of the firm are estimated from the market value and volatility of equity and the book value of liabilities.
• Calculate the distance-to-default: the distance-to-default (DD) is calculated from the asset value and asset volatility (estimated in the first step) and the book value of liabilities.
• Calculate the default probability: The default probability is determined directly from the distance-to-default and the default rate for given levels of distance-to-default.
Step 1:Estimate Asset Value and Volatility
• If the market price of equity is available, the market value and volatility of assets can be determined directly using an options-pricing-based approach, which recognizes equity as a call option on the underlying assets of the firm.
• For example, consider a simplified case where there is only one class of debt and one class of equity.
• The limited liability feature of equity means that the equity holders have the right, but not the obligation, to pay off the debt holders and take over the remaining assets of the firm. • The holders of the other liabilities of the firm essentially own the firm until those liabilities are paid off in full by the equity holders. • In the simplest case, equity is the same as a call option on the firm’s assets with a strike price equal to the book value of the firm’s liabilities.
• The model uses this option nature of equity to derive the underlying asset value and asset volatility implied by the market value, volatility of equity, and the book value of liabilities.
• Assume that we initially invest $20 in the firm and borrow a further $80 from a bank. The proceeds,$100, are invested in equities. • At the end of five years, if the market value of the assets is $60 then the value of equity will be zero. If the value of the assets is $110, then the value of the equity will be $30, and so on.
• Assume now that we are interested in valuing our equity prior to the final winding up of the firm.
• Suppose the equities are marked to market and their value is determined to be $80, with two years remaining before we wind the firm up.
• The value of equity will be greater than zero, because it is the value of the assets two years hence that really matters and there is still a chance that the asset value will be greater than $80 in two years' time.
• Asset value and volatility are the only unknown quantities in these relationships and thus the two equations can be solved to determine the values implied by the current equity value, volatility and capital structure.
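A hedged sketch of this inversion: treating equity as a call on assets gives two relationships, E = V·N(d1) − X·e^(−rT)·N(d2) and σE·E = N(d1)·σA·V, which can be solved jointly. The fixed-point iteration and all inputs below are illustrative assumptions, not MKMV's proprietary procedure:

```python
# Backing out asset value V and asset volatility sigma_A from equity
# value E, equity volatility sigma_E, and book liabilities X.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def solve_asset_value_vol(E, sigma_E, X, r, T, tol=1e-8):
    V, sigma_A = E + X, sigma_E * E / (E + X)   # crude starting guesses
    for _ in range(200):
        d1 = (log(V / X) + (r + 0.5 * sigma_A**2) * T) / (sigma_A * sqrt(T))
        d2 = d1 - sigma_A * sqrt(T)
        V_new = (E + X * exp(-r * T) * N(d2)) / N(d1)  # from the call equation
        sigma_new = sigma_E * E / (V_new * N(d1))      # from the volatility relation
        if abs(V_new - V) < tol and abs(sigma_new - sigma_A) < tol:
            break
        V, sigma_A = V_new, sigma_new
    return V, sigma_A

print(solve_asset_value_vol(E=30.0, sigma_E=0.60, X=80.0, r=0.05, T=2.0))
```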
Step 2: Calculate the Distance-to-Default
• There are six variables that determine the default probability of a firm over some horizon, from now until time H:
1. The current asset value.
2. The distribution of the asset value at time H.
3. The volatility of the future asset value at time H.
4. The level of the default point, the book value of the liabilities.
5. The expected rate of growth in the asset value over the horizon.
6. The length of the horizon, H.
• If the value of the assets falls below the default point, then the firm defaults. Therefore, the probability of default is the probability that the asset value will fall below the default point. This is the shaded area (EDF value) below the default point in Figure
• However, in practice, the distribution of the distance-to-default is difficult to measure. • MKMV first measures the distance-to-default as the number of standard deviations the asset value is away from default • Then uses empirical data to determine the corresponding default probability
Step 3: Calculate the Default Probability
• The relationship between distance-to-default and default probability is derived from data on historical default and bankruptcy frequencies.
• The database includes over 250,000 company-years of data and over 4,700 incidents of default or bankruptcy.
• From this data, a lookup or frequency table can be generated which relates the likelihood of default to various levels of distance-to-default.
Option nature of equity to derive the market value and volatility of assets
• BS model: the market value of the firm's underlying assets follows the stochastic process

dVA = μA VA dt + σA VA dz

where VA and dVA are the firm's asset value and change in asset value, μA and σA are the firm's asset value drift rate and volatility, and dz is a Wiener process.
• The BS model allows only two types of liabilities: a single class of debt and a single class of equity.
• If X is the book value of the debt which is due at time T, then the market value of equity and the market value of assets are related by the following expression:

E = VA Φ(d1) − X e^(−rT) Φ(d2)

with d1 = [ln(VA/X) + (r + σA²/2)T] / (σA √T) and d2 = d1 − σA √T.
• The probability of default is the probability that the market value of the firm's assets will be less than the book value of the firm's liabilities by the time the debt matures. That is:

pt = P[ VA(T) < X ]

• The BS model assumes that the random component of the firm's asset returns is normally distributed, ε ~ N(0, 1), and as a result we can define the default probability in terms of the cumulative normal distribution:

p = Φ( −[ln(VA/X) + (μA − σA²/2)T] / (σA √T) )
MOODY’S KMV Model ( Private Companies)
• The Moody’s RiskCalc v3.1 model introduces the next-generation default prediction technology for private, middle-market companies • The performance of the RiskCalc v3.1 model is based in part on an extraordinary and proprietary database developed by Moody’s KMV: the Credit Research Database • Significant investment to expand and refine this core data set, increasing its cross-sectional and time series coverage of private firm data
• Credit Research Database contained more than 6,500,000 financial statements on more than 1,500,000 unique private firms with more than 97,000 default events worldwide
• Cutting-edge processes for cleaning the data and addressing differences in local accounting and reporting practices • Data Integrity - Clean the data to detect obvious data problems or data collection issues, such as whether default information or financial statements are missing for a given institution in a given region or time period, and whether balance sheets balance.
• Designed to meet the requirements for default models found in the New Basel Capital Accord (or Basel II) • Model supports critical requirements for delivering consistent risk estimates, risk ratings, default probabilities, and model validation
Basel II and Model
• Consistent Risk Estimates • The RiskCalc v3.1 model will always produce the same estimate of default risk for a given set of inputs, which meets a critical requirement of the Basel II Accord:
“The overarching principle behind these requirements is that rating and risk estimation systems . . . provide for a meaningful differentiation of risk, and accurate and consistent quantitative estimates of risk.” [Basel II, paragraph 351]
• Forward-looking Risk Ratings
• In addition to fundamental financial statement inputs, the RiskCalc v3.1 model incorporates the collective perspective of the market sector in which a firm operates. This is consistent with the Basel II Accord requirement that risk-rating models use all available information in determining a borrower's rating, including the impact of future economic conditions: "A borrower rating must represent the bank's assessment of the borrower's ability and willingness to contractually perform despite adverse economic conditions or the occurrence of unexpected events." [Basel II, paragraph 376]
• Stress Testing Default Probabilities • The RiskCalc v3.1 model is uniquely designed to stress test a firm’s sensitivity to the probability of default at different stages of a credit cycle. This feature satisfies a leading imperative of the New Basel Capital Accord: “An IRB (internal ratings-based) bank must have in place sound stress testing processes for use in the assessment of capital adequacy. Stress testing should involve identifying possible events or future changes in economic conditions that could have unfavorable effects on a bank’s credit exposures and assessment of the bank’s ability to withstand such changes. Examples of scenarios that usefully could be examined are: (i) economic or industry downturns; (ii) market-risk events; and (iii) liquidity conditions.” [Basel II, paragraph 396]
• Validation
• RiskCalc v3.1 models are designed to meet the Basel II Accord's stringent requirements for validating ratings: "Banks must have a robust system in place to validate the accuracy and consistency of rating systems, processes, and the estimation of all relevant risk components." [Basel II, paragraph 463]
• Models are validated using a rigorous testing process that demonstrates their power outside the development sample.
• These tests include out-of-sample testing (using defaults and non-defaults which were not used in the model development, such as a "hold-out" sample) and comparisons to other models.
The Model
• For users who desire a stable estimate of a firm's default risk based only on the firm's financial statements, the RiskCalc v3.1 model can be configured in Financial Statement Only (FSO) mode.
• For the most accurate determination of the default risk of private company credits and an efficient monthly monitoring process, the RiskCalc v3.1 model can be configured in the complete version.
The Financial Statement Only Mode
• The mode includes financial statement variables that capture a firm’s long-run performance. • Predictions of a middle-market firm’s default risk update only as often as the firm updates its financial statements • Selected specific financial ratios to develop a robust and informative model that is transparent, intuitive, and highly predictive for out-of-sample data
Ratios
• Generally, experience shows that including a large number of ratios in a quantitative model may yield a model that is "overfitted."
• A limited number of financial ratios are selected that yield a powerful model.
• The following broad risk factors of financial performance are covered:
• Profitability
• Leverage
• Debt coverage
• Growth variables
• Liquidity
• Activity ratios
• Size
• In building the financial statement model, at least one variable from each group is included.
• High profitability reduces the probability of default.
• High leverage increases the probability of default.
• Growth variables behave like a double-edged sword: both rapid growth and rapid decline (negative growth) will tend to increase a firm's default probability.
• High liquidity reduces the probability of default.
• A large stock of inventories relative to sales increases the probability of default.
• Large firms default less often.
• These ratios relate to credit risk to varying degrees.
• Research shows a nonlinear relationship between many of these ratios and a firm's probability of default.
FSO Functional Form
• FSO models are based on the following functional form:

FSO EDF = F( Φ( Σ (i = 1..N) βi Ti(xi) + Σ (k = 1..K) γk Ik ) )

where x1,...,xN are the input ratios; I1,...,IK are indicator variables for each of the industry classifications; β and γ are estimated coefficients; Φ is the cumulative normal distribution; F and T1,...,TN are non-parametric transforms; and FSO EDF is the financial-statement-only EDF credit measure.
• The Ts capture non-linear impacts of financial ratios on the default likelihood.
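An illustrative sketch of this functional form. The transforms, coefficients, and ratios below are hypothetical placeholders; Moody's KMV's actual estimates and transforms are proprietary:

```python
# Probit-style FSO scoring: transformed ratios plus industry dummies,
# pushed through the normal CDF and a final transform.
from math import erf, sqrt

def phi(x):  # cumulative normal distribution
    return 0.5 * (1 + erf(x / sqrt(2)))

def fso_edf(ratios, transforms, betas, industry_dummies, gammas, final_transform):
    score = sum(b * t(x) for x, t, b in zip(ratios, transforms, betas))
    score += sum(g * i for i, g in zip(industry_dummies, gammas))
    return final_transform(phi(score))

# Hypothetical example: two ratios, one industry dummy.
edf = fso_edf(
    ratios=[0.15, 2.5],                                # e.g. profitability, leverage
    transforms=[lambda x: -3 * x, lambda x: 0.4 * x],  # stand-ins for the T_i
    betas=[1.0, 1.0],
    industry_dummies=[1],
    gammas=[-0.2],
    final_transform=lambda p: p,                       # stand-in for F
)
print(edf)
```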
The Complete Version
• Moody's claims it is the most predictive measure of credit risk for private, middle-market companies.
• The model combines forward-looking equity market information, which reflects the general credit cycle and the state of the firm's industry, with firm-specific data about private companies.
• This model delivers the following:
- Significantly more accurate EDF levels - More frequent updates of all relevant information - The ability to stress test EDF credit measures under different credit cycle scenarios (a Basel II imperative)
Distance-to-Default: Using Market Data from the Public Firm Model to Improve Private Firm Predictions
• Market prices for claims on the assets of private firms are generally not available.
• RiskCalc v3.1 embeds credit insight from market information at the industry sector level.
• That industry sector information comes from the current month's information on the sector's average distance-to-default.
• Including the distance-to-default factor, computed not on the individual private firm but from an aggregation of public companies in the corresponding sector, improves the performance of private firm models by incorporating forward-looking market price dynamics.
• When the distance-to-default measure is trending downward, or closer to the default barrier, on average for the public firms in a given sector, the probability of default for a private firm in that sector should be adjusted upward as this indicator contains information that is not yet in financial statements
• By controlling for industry variation, the RiskCalc v3.1 model:
- Corrects for intrinsic differences in default probability across industries - Adjusts for differences in interpretation of financial ratios across industries, and corrects for spurious effects - Improves EDF performance and accuracy
• Financial Statement Ratios used in RiskCalc v3.1 U.S., Canada, U.K., and Japan on next slide
MOODY’S KMV Model for BANKS
RiskCalc Modes • RiskCalc v3.1 allows a user to assess the risk of a privately-held bank in two ways: Financial Statement Only (FSO) and Credit Cycle Adjusted (CCA).
• The FSO mode delivers a bank’s default risk based only on regulatory filings. In this mode, the risk assessments produced by the model are relatively stable over time. • The CCA mode adjusts the default risk by taking into account the current stage of the credit cycle. The CCA adjustment is specific to the firm’s sector and country and is updated regularly
Model Components
• The inputs to the model include the selection of financial ratios, transforms of those ratios, and the credit cycle adjustment.
• The development of a RiskCalc model involves the following steps:
1. Choosing a limited number of financial statement variables for the model from a list of possible variables.
2. Transforming the variables into interim probabilities of default using non-parametric techniques.
3. Estimating the weightings of the financial statement variables, using a probit model combined with industry variables.
4. Creating a (non-parametric) final transform that converts the probit model score into an actual EDF credit measure.
Financial Statement Variables
• Selecting the Variables • The working list of ratios is divided into groups that represent different underlying concepts regarding a bank’s financial status
• Model is then built with at least one variable per group.
• Once the variables are selected, they are transformed into a preliminary EDF credit measure.
• The relative value of each variable used in calculating an EDF credit measure is important in understanding a company's risk.
• The non-linear nature of the model makes the weight of the variables more difficult to determine.
• Model weights, therefore, are calculated based on the average EDF value for the transformation and its standard deviation.
• A variable with a flat transformation could have a low weight, even if the coefficient is large.
Credit Cycle Adjustment
• EDF credit measures are impacted not only by the financials of a company, but also by the general credit cycle in the economy.
• The credit cycle adjustment is designed to incorporate the current credit cycle into the estimate of private bank default risk.
• If the distance-to-default for public firms in the banking industry indicates a level of risk above the historical average, then the private banks' EDF credit measures are adjusted upward.
• Conversely, if the level of risk is below the historical average, then the private banks' EDF credit measures are adjusted downward.
CREDIT SUISSE FIRST BOSTON MODEL
Types of Credit Risk
• Credit spread risk • Credit default risk
Components of CREDITRISK+
The CREDITRISK+ Model
• Based on a portfolio approach to modelling credit default risk • A statistical model of credit default risk that makes no assumptions about the causes of default • Considers default rates as continuous random variables and incorporates the volatility of default rates in order to capture the uncertainty in the level of default rates
The CREDITRISK+ Model
• Mathematical techniques applied widely in the insurance industry are used to model the sudden event of an obligor default. This approach contrasts with the mathematical techniques typically used in finance • Applying insurance modelling techniques, the analytic CREDITRISK+ Model captures the essential characteristics of credit default events and allows explicit calculation of a full loss distribution for a portfolio of credit exposures • No assumptions are made about the causes of default
Default Rate Behaviour
• Continuous variable: when treated as a continuous variable, the possible default rate over a given time horizon is described by a distribution, which can be specified by a default rate and a volatility of the default rate. The data requirements for modelling credit default risk are analogous to the data requirements for pricing stock options: a forward stock price and the stock price volatility are used to define the forward stock price distribution.
Default Rate Behaviour (Contd…)
• Discrete variable
By treating the default rate as a discrete variable, a simplification of the continuous process described earlier is made. A convenient way of making default rates discrete is by assigning credit ratings to obligors and mapping default rates to credit ratings. Using this approach, additional information is required in order to model the possible future outcomes of the default rate. This can be achieved via a rating transition matrix that specifies the probability of keeping the same credit rating, and hence the same value for the default rate, and the probabilities of moving to different credit ratings and hence to different values for the default rate
Portfolio Approach of Managing Credit Risk
The portfolio risk of a particular exposure is determined by four factors • the size of the exposure • maturity of the exposure • the probability of default of the obligor • the systematic or concentration risk of the obligor
Credit Risk Exposures
• The model is capable of handling all types of instruments that give rise to credit exposure, including bonds, loans, commitments, financial letters of credit and derivative exposures.
• In addition, if a multi-year time horizon is being used, it is important that the changing exposures over time are accurately captured
Default rates
A default rate, which represents the likelihood of a default event occurring, should be assigned to each obligor. This can be achieved in a number of ways, including: • Observed credit spreads from traded instruments can be used to provide market-assessed probabilities of default • Alternatively, obligor credit ratings, together with a mapping of default rates to credit ratings, provide a convenient way of assigning probabilities of default to obligors. The rating agencies publish historic default statistics by rating category for the population of obligors that they have rated • Another approach is to calculate default probabilities on a continuous scale, which can be used as a substitute for the combination of credit ratings and assigned default rates.
Default rate Volatilities
Published default statistics include average default rates over many years.
• Actual observed default rates vary from these averages.
• The amount of variation in default rates about these averages can be described by the volatility (standard deviation) of default rates.
• As can be seen in the table, the standard deviation of default rates can be significant compared to actual default rates, reflecting the high fluctuations observed during economic cycles.
Recovery rates
• In the event of a default of an obligor, a firm generally incurs a loss equal to the amount owed by the obligor less a recovery amount, which the firm recovers as a result of foreclosure, liquidation or restructuring of the defaulted obligor or the sale of the claim. • Recovery rates should take account of the seniority of the obligation and any collateral or security held
Recovery rates
• There is also considerable variation for obligations of differing seniority, as can be seen from the standard deviation of the corporate bond and bank loan recovery rates in the table below.
Stages in Modeling process
The modelling of credit risk is a two stage process, as shown in the following diagram:
Frequency of default events
• The Model makes no assumption about the causes of default - credit defaults occur as a sequence of events in such a way that it is neither possible to forecast the exact time of occurrence of any one default nor the exact total number of defaults. • It models the underlying default rates by specifying a default rate and a default rate volatility. This aims to take account of the variation in default rates in a pragmatic manner, without introducing significant model error.
Moving from Default Events to Default Losses
• The distribution of losses differs from the distribution of default events because the amount lost in a given default depends on the exposure to the individual obligors • Unlike the variation of default probability between obligors, which does not influence the distribution of the total number of defaults, the variation in exposure magnitude results in a loss distribution that is not Poisson in general
In the event of a default of an obligor: • A firm generally incurs a loss equal to the amount owed by the obligor less a recovery amount, which the firm obtains as a result of foreclosure, liquidation or restructuring of the defaulted obligor. • A recovery rate is used to quantify the amount received from this process. Recovery rates should take account of the seniority of the obligation and any collateral or security held • Model calculates the probability that a loss of a certain multiple of the chosen unit of exposure will occur. This allows a full loss distribution to be generated, as shown in the figure ahead
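The CREDITRISK+ analytics derive this loss distribution in closed form; as an illustration of the same idea, the sketch below simulates it with a gamma-mixed default rate (capturing default rate volatility) and conditionally independent defaults. All parameters are assumed, not CSFB's:

```python
# Monte Carlo illustration of a CREDITRISK+-style loss distribution:
# a gamma factor with mean 1 scales all default rates in each scenario.
import random

def simulate_losses(exposures, pds, rate_vol_scale=1.0, n_sims=10_000, seed=0):
    rng = random.Random(seed)
    shape = 1.0 / rate_vol_scale**2        # gamma shape; variance = rate_vol_scale^2
    losses = []
    for _ in range(n_sims):
        s = rng.gammavariate(shape, 1.0 / shape)   # systematic default-rate factor
        loss = 0.0
        for e, p in zip(exposures, pds):
            if rng.random() < min(1.0, p * s):     # conditionally independent defaults
                loss += e
        losses.append(loss)
    return sorted(losses)

losses = simulate_losses(exposures=[100, 250, 50, 400], pds=[0.02, 0.01, 0.05, 0.015])
print(losses[int(0.99 * len(losses))])   # 99th percentile portfolio loss
```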
Sector Analysis
Model measures the benefit of portfolio diversification and the impact of concentrations through the use of sector analysis: • Concentration Risk • Allocating all Obligors to a Single Sector • Allocating Obligors to one of Several Sectors • Apportioning Obligors across Several Sectors • The Impact of Sectors on the Loss Distribution
Multi-Year Losses for a Hold-to-Maturity Time Horizon
• The model allows the risk of the portfolio to be viewed on a hold-to-maturity time horizon in order to capture any default losses that could occur until maturity of the credit exposure.
• Analysing credit exposures on a multi-year basis enables the risk manager to compare exposures of different size, credit quality, and maturity.
• The loss distribution produced provides, for any chosen level of confidence, an indication of the possible cumulative losses that could be suffered until all the exposures have matured
Using the CREDITRISK+ Model to Calculate Multi-Year Loss Distributions
• Model can be used to calculate multi-year loss distributions by decomposing the exposure profile over time into separate elements of discrete time, with the present value of the remaining exposure in each time period being assigned a marginal default probability relevant to the maturity and credit quality. • These decomposed exposure elements can then be used by the CREDITRISK+ Model to generate a loss distribution on a hold-to-maturity basis.
RISKMETRICS™
J.P. MORGAN/REUTERS
Reasons For Methodology
• Greater transparency of market risks
• A benchmark for market risk measurement • Understanding and evaluation of advice given to clients on managing risk
RiskMetrics
A set of techniques and data to measure market risks in portfolios of:
• Fixed income instruments
• Equities
• Foreign exchange
• Commodities
• Derivatives
What is RiskMetrics?
RiskMetrics is a set of tools that enable participants in the financial markets to estimate their exposure to market risk under what has been called the "Value-at-Risk framework". RiskMetrics has three basic components:
• A set of market risk measurement methodologies
• Data sets of volatility and correlation data used in the computation of market risk
• Software systems developed by J.P. Morgan, subsidiaries of Reuters, and third-party vendors that implement the methodologies
PART I : RISK MEASUREMENT FRAMEWORK
Introduction
• Risk - the degree of uncertainty of future net returns • Classification of risks is based on the source of the underlying uncertainty
– Credit risk estimates the potential loss because of the inability of a counterparty to meet its obligations
– Operational risk results from errors that can be made in instructing payments or settling transactions
– Liquidity risk is reflected in the inability of a firm to fund its illiquid assets
– Market risk involves the uncertainty of future earnings resulting from changes in market conditions (e.g., prices of assets, interest rates). Over the last few years, measures of market risk have become synonymous with the term Value-at-Risk.
Introduction to Value-at-Risk
• Value-at-Risk (VaR) is a measure of the maximum potential change in value of a portfolio of financial instruments with a given probability over a pre-set horizon • VaR answers the question: How much can I lose with x% probability over a given time horizon?
EXAMPLE 1
You are a USD-based corporation and hold a DEM 140 million FX position. What is your VaR over a 1-day horizon given that there is a 5% chance that the realized loss will be greater than what VaR projected? The choice of the 5% probability is discretionary and differs across institutions using the VaR framework What is your exposure? As a USD based investor, your exposure is equal to the market value of the position in your base currency. If the foreign exchange rate is 1.40 DEM/USD, the market value of the position is USD 100 million
What is Your Risk?
• Requires an estimate of how much the exchange rate can potentially move.
• The standard deviation of the return on the DEM/USD exchange rate, measured historically, can provide an indication of the size of rate movements (say 0.565%).
• Under the standard RiskMetrics assumption that standardized returns (rt/σt) on DEM/USD are normally distributed, given the value of this standard deviation, VaR is given by 1.65 times the standard deviation (that is, 1.65σ), or 0.932%.
What is Your Risk?
• In USD, the VaR of the position is equal to the market value of the position times the estimated volatility or:
• FX Risk: $100 million * 0.932% = $932,000
• This means that 95% of the time, you will not lose more than $932,000 over the next 24 hours.
VaR Statistics
EXAMPLE 2
You are a USD-based corporation and hold a DEM 140 million position in the 10-year German government bond. What is your VaR over a 1-day horizon period, again, given that there is a 5% chance of understating the realized loss?
What is your exposure? The only difference versus the previous example is that you now have both interest rate risk on the bond and FX risk resulting from the DEM exposure. The exposure is still USD 100 million but it is now at risk to two market risk factors.
What is Your Risk?
If you use an estimate of the 10-year German bond standard deviation of 0.605%, you can calculate:
Interest rate risk: $100 million × 1.65 × 0.605% = $999,000
FX risk: $100 million × 1.65 × 0.565% = $932,000
Total Risk
• The estimated correlation between the returns on the DEM/USD exchange rate and the 10-year German government bond is −0.27.
• Using a formula common in standard portfolio theory, the total risk of the position is given by:

VaR = √( VaR_IR² + VaR_FX² + 2 ρ VaR_IR VaR_FX ) = √( 0.999² + 0.932² + 2(−0.27)(0.999)(0.932) ) = $1.168 million
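The same arithmetic in code, reproducing the Example 2 numbers (the volatilities, the 1.65 multiplier, and the −0.27 correlation are taken from the text above):

```python
# Combining two VaR numbers with their correlation via the standard
# portfolio variance formula.
from math import sqrt

def total_var(var1, var2, corr):
    return sqrt(var1**2 + var2**2 + 2 * corr * var1 * var2)

ir_var = 100e6 * 1.65 * 0.00605   # interest rate risk, ~$999,000
fx_var = 100e6 * 1.65 * 0.00565   # FX risk, ~$932,000
print(total_var(ir_var, fx_var, corr=-0.27))   # ~$1.168 million
```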
A More Advanced Approach
Value-at-Risk is a number that represents the potential change in a portfolio's future value. How this change is defined depends on:
1. The horizon over which the portfolio's change in value is measured, and
2. The "degree of confidence" chosen by the risk manager.
VaR Calculation Steps
The Value-at-Risk calculation consists of the following steps:
1. Mark-to-market the current portfolio. Denote this value by V0.
2. Define the future value of the portfolio, V1, as V1 = V0 e^r, where r represents the return on the portfolio over the horizon. For a 1-day horizon, this step is unnecessary, as RiskMetrics assumes a 0 return.
3. Make a forecast of the 1-day return on the portfolio and denote this value by r', such that there is a 5% chance that the actual return will be less than r'. Alternatively expressed, Probability(r < r') = 5%.
4. Define the portfolio's future "worst case" value V1', as V1' = V0 e^r'. The Value-at-Risk estimate is simply V0 − V1'.
EXAMPLE 3
1. Consider a portfolio whose current marked-to-market value, V0, is USD 500 million.
2. To carry out the VaR calculation we require a 1-day forecast of the mean μ1. Within the RiskMetrics framework, we assume that the mean return over a 1-day horizon is equal to 0.
3. We also need the standard deviation, σ1, of the returns on this portfolio. Assuming that the return on this portfolio is distributed conditionally normal, r' = −1.65 σ1 + μ1. The RiskMetrics data set provides the term 1.65 σ1.
4. Hence, setting μ1 = 0 and σ1 = 0.0321, we get V1' = USD 474.2 million. This yields a Value-at-Risk of USD 25.8 million (given by V0 − V1').
Simulated Portfolio Changes
Using RiskMetrics to Compute VaR on a Portfolio of Cash Flows
Step 1: Each financial position in a portfolio is expressed as one or more cash flows that are marked-to-market at current market rates. For example, consider an instrument that gives rise to three USD 100 cash flows each occurring in 1, 4, and 7 months’ time
Step 2: When necessary, the actual cash flows are converted to RiskMetrics cash flows by mapping (redistributing) them onto a standard grid of maturity vertices, known as RiskMetrics vertices, which are fixed at the following intervals:
1m, 3m, 6m, 12m, 2yr, 3yr, 4yr, 5yr, 7yr, 9yr, 10yr, 15yr, 20yr, 30yr
• To map the cash flows, we use the RiskMetrics vertices closest to the actual vertices and redistribute the actual cash flows as shown in the chart.
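A simplified sketch of this mapping step: the full RiskMetrics methodology splits each cash flow so that present value and risk are preserved; the maturity-weighted split below is a deliberate simplification for illustration only:

```python
# Splitting an actual cash flow between the two adjacent RiskMetrics
# vertices by linear interpolation in maturity.
RM_VERTICES = [1/12, 3/12, 6/12, 1, 2, 3, 4, 5, 7, 9, 10, 15, 20, 30]  # years

def map_cash_flow(amount, maturity):
    lower = max(v for v in RM_VERTICES if v <= maturity)
    upper = min(v for v in RM_VERTICES if v >= maturity)
    if lower == upper:
        return {lower: amount}
    w = (upper - maturity) / (upper - lower)   # weight on the lower vertex
    return {lower: w * amount, upper: (1 - w) * amount}

# USD 100 arriving in 4 months is split between the 3m and 6m vertices:
print(map_cash_flow(100.0, 4/12))
```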
Mapping Actual Cash Flows onto RiskMetrics Vertices
Step 3: VaR is calculated at the 5th percentile of the distribution of the portfolio return, and for a specified time horizon.
• Under the assumption that the portfolio return rp is distributed conditionally normal, the fifth percentile is −1.65 σp, where σp is the standard deviation of the portfolio return distribution.
• Applying the portfolio theory equation to a portfolio containing more than two instruments requires simple matrix algebra. We can thus express this VaR calculation as follows:

VaRp = √( V R Vᵀ )

where V = [VaR1, VaR2, ..., VaRn] is the vector of VaR estimates for the individual positions and R is the correlation matrix.
Note that RiskMetrics provides the vector of information as well as the correlation matrix R. What the user has to provide are the actual portfolio weights.
Measuring the Risk of Nonlinear Positions
• In our previous examples, we could easily estimate the risk of a fixed income or foreign exchange product by assuming a linear relationship between the value of an instrument and the value of its underlying. This is not a reasonable assumption when dealing with nonlinear products such as options.
• RiskMetrics offers two methodologies, to compute the VaR of nonlinear positions: – An analytical approximation and – A structured Monte Carlo simulation
Analytical Approximation
• This method approximates the nonlinear relationship via a mathematical expression that relates the return on the position to the return on the underlying rates. This is done by using what is known as a Taylor series expansion.
• This approach no longer necessarily assumes that the change in value of the instrument is approximated by its delta alone (the first derivative of the option's value with respect to the underlying variable).
• Rather, it tells us that a second-order term using the option's gamma (the second derivative of the option's value with respect to the underlying price) must be introduced to measure the curvature of changes in value around the current value.
• In practice, other "greeks" such as vega (volatility), rho (interest rate) and theta (time to maturity) can also be used to improve the accuracy of the approximation.
• There are two types of analytical methods for computing VaR: the delta and delta-gamma approximations.
Structured Monte Carlo Simulation
• It involves creating a large number of possible rate scenarios and revaluing the instrument under each of these scenarios • VaR is then defined as the 5th percentile of the distribution of value changes • Due to the required revaluations, this approach is computationally more intensive than the first approach.
EXAMPLE 4
Consider a portfolio comprising two assets:
• Asset 1: a future cash flow stream of DEM 1 million to be received in one year's time. The current 1-year DEM rate is 10%, so the current market value of the instrument is DEM 909,091.
• Asset 2: an at-the-money (ATM) DEM put/USD call option with a contract size of DEM 1 million and an expiration date one month in the future. The premium of the option is 0.0105 and the spot exchange rate at which the contract was concluded is 1.538 DEM/USD. We assume the implied volatility at which the option is priced is 14%.
• The value of this portfolio depends on the USD/DEM exchange rate and the one-year DEM bond price • Our risk horizon for the example will be five days
• We take as the daily volatilities of these two assets σFX = 0.42% and σB = 0.08%, and as the correlation between the two ρ = −0.17.
• Both alternatives will focus on price risk exclusively and therefore ignore the risk associated with volatility (vega), interest rate (rho) and time decay (theta risk).
Analytical Method: Delta Approximation
• The simplest first order approximation is to estimate changes in the option value via a linear model, which is commonly known as the “delta approximation”
• Delta is the first derivative of the option price with respect to the spot exchange rate. The value of delta for the option in this example is 0.4919.
• The return on this portfolio, consisting of a cash flow in one year and a put on the DEM/call on the USD, is written as a delta-weighted combination of the returns on the two underlying risk factors.
• Under the assumption that the portfolio return is normally distributed, VaR at the 95% confidence level is given by 1.65 times the standard deviation of the portfolio return.
• Using our volatility and correlation forecasts for DEM/USD and the 1-year DEM rate, the weekly VaR for the portfolio can be approximated via the delta equivalent approach.
Analytical Method: Delta-Gamma Approximation
• The delta approximation is reasonably accurate when the exchange rate does not change significantly, but less so in more extreme cases.
• This is because delta is a linear approximation of a nonlinear relationship between the value of the exchange rate and the price of the option.
• We may be able to improve this approximation by including the gamma term, which accounts for nonlinear (i.e., squared-return) effects of changes in the spot rate.
• The expression for the portfolio return now includes a second-order (gamma) term in the spot rate return.
• The gamma term introduces skewness into the distribution of rp (i.e., the distribution is no longer symmetrical around its mean). Since this violates one of the assumptions of normality (symmetry), we can no longer calculate the 95% confidence VaR as 1.65 times the standard deviation of rp.
• Instead, we must find the appropriate multiple (the counterpart to −1.65) that incorporates the skewness effect. We compute the 5th percentile of rp's distribution by computing its first four moments, i.e., rp's mean, variance, skewness and kurtosis. We then find a distribution whose first four moments match those of rp's.
• Applying this methodology to this approach, we find the VaR for this portfolio to be USD 3,708.
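The four-moment matching described above requires fitting a distribution family; a lighter-weight alternative sometimes used in practice is the Cornish-Fisher expansion, which adjusts the normal quantile for skewness. A sketch under assumed inputs (these are not the example's actual moments):

```python
# Skewness-adjusted 5th-percentile VaR via a Cornish-Fisher expansion.
from statistics import NormalDist

def cornish_fisher_var(value, mean, std, skew, alpha=0.05):
    z = NormalDist().inv_cdf(alpha)        # -1.645 for the 5th percentile
    z_cf = z + (z**2 - 1) * skew / 6       # first-order skewness correction
    return -(mean + z_cf * std) * value    # loss reported as a positive VaR

# Assumed weekly return moments for a USD 1 million position:
print(cornish_fisher_var(value=1_000_000, mean=0.0, std=0.004, skew=-0.5))
```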
Value of Put Option on USD/DEM
Structured Monte-Carlo Simulation
This approach is based on a full valuation precept, where all instruments are marked to market under a large number of scenarios driven by the volatility and correlation estimates.
The Monte Carlo methodology consists of three major steps:
1. Scenario generation: using the volatility and correlation estimates for the underlying assets in our portfolio, we produce a large number of future price scenarios.
2. Portfolio valuation: for each scenario, we compute a portfolio value.
3. Summary: we report the results of the simulation, either as a portfolio distribution or as a particular risk measure.
Histogram and scatter diagram of rate distributions
Using our volatility and correlation estimates, we can apply our simulation technique to our example portfolio, generating a large number of scenarios (1,000 in this example) of DEM 1-year rates and DEM/USD exchange rates at the 1-week horizon.
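A compact sketch of the three steps applied to a toy two-factor portfolio. The linear revaluation function is an illustrative stand-in; the presentation's option position would require full Black-Scholes revaluation in step 2:

```python
# Structured Monte Carlo VaR: correlated scenarios, revaluation, percentile.
import random
from math import sqrt

def mc_var(n_scenarios, sigma_fx, sigma_rate, corr, revalue, base_value, seed=0):
    rng = random.Random(seed)
    changes = []
    for _ in range(n_scenarios):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        fx_ret = sigma_fx * z1                                   # step 1: scenarios
        rate_ret = sigma_rate * (corr * z1 + sqrt(1 - corr**2) * z2)
        changes.append(revalue(fx_ret, rate_ret) - base_value)   # step 2: revalue
    changes.sort()
    return -changes[int(0.05 * n_scenarios)]                     # step 3: 5th percentile

# Daily volatilities from the example, scaled to the 5-day horizon:
var = mc_var(1000, sigma_fx=0.0042 * sqrt(5), sigma_rate=0.0008 * sqrt(5),
             corr=-0.17, revalue=lambda f, r: 1_000_000 * (1 + f + r),
             base_value=1_000_000)
print(var)
```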
Valuation of Instruments in Sample Portfolio
Representation of VaR
PART II : RISKMETRICS AND MODERN FINANCIAL MANAGEMENT
What Does RiskMetrics Provide?
Comparing ALM to VaR
The advantages of VaR Management are that it • Incorporates the mark-to-market approach uniformly. • Relies on a much shorter horizon forecast of market variables. This improves the risk estimate as short horizon forecasts tend to be more accurate than long horizon forecasts.
VaR: Two Steps Beyond Accounting
PART III : APPLICATION OF RISKMETRICS
1. Market Risk Limits
• Position limits have traditionally been expressed in nominal terms, futures equivalents or other denominators unrelated to the amount of risk effectively incurred
Setting limits in terms of Value-at-Risk has significant advantages:
• Position benchmarks become a function of risk, and positions in different markets and products can be compared through this common measure.
• A common denominator rids the standard limits manuals of a multitude of measures which differ for every asset class.
• Limits become meaningful for management as they represent a reasonable estimate of how much could be lost.
• VaR measures incorporate portfolio or risk diversification effects. This leads to hierarchical limit structures in which the risk limit at higher levels can be lower than the sum of the risk limits of the units reporting to it.
Hierarchical VaR Limit Structure
Setting limits in terms of risk helps business managers allocate risk to those areas which they feel offer the most potential, or in which their firm's expertise is greatest. This motivates managers of multiple risk activities to favor risk-reducing diversification strategies.
2. Calibrating Valuation and Risk Models
• An effective method to check the validity of the underlying valuation and risk models is to compare DEaR (Daily Earnings-at-Risk) estimates with realized daily profits and losses over time.
• By definition, the cone delimited by the +/- DEaR lines should contain 90% of all the stars, because DEaR is defined as the maximum expected profit or loss 90% of the time.
• If substantially more than 10% of the stars fall outside the DEaR cone, the underlying models underestimate the risks.
• If no stars fall outside the DEaR cone, or even come close to the lines, the underlying models overestimate the risks (a minimal scripted check follows).
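This check is easy to script. The sketch below counts cone exceedances on hypothetical data; both series are simulated assumptions, not real P&L.

```python
import numpy as np

# Backtesting sketch: compare daily P&L with the +/- DEaR cone.
# With a 90% band, roughly 10% of observations should fall outside.
rng = np.random.default_rng(1)
pnl = rng.normal(0.0, 1.0e6, 250)     # hypothetical daily P&L (USD)
dear = np.full(250, 1.65e6)           # hypothetical daily DEaR (USD)

frac_outside = (np.abs(pnl) > dear).mean()
print(f"{frac_outside:.1%} of days outside the DEaR cone (expect ~10%)")
if frac_outside > 0.10:
    print("Underlying models may underestimate risk.")
elif frac_outside < 0.05:
    print("Underlying models may overestimate risk.")
```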
DEaR vs. Actual Daily P&L
3. Performance Evaluation
• Trading and position-taking talent has traditionally been rewarded to a significant extent on the basis of total returns. Given the high rewards bestowed on outstanding trading talent, this may bias trading professionals towards taking excessive risks.
• To evaluate such performance correctly, one needs a standard measure of risk. Ideally, risk taking should be evaluated on the basis of three interlinked measures:
1. Revenues
2. Volatility of revenues
3. Risks
Performance Evaluation Triangle
EXAMPLE 5
• Assume you have to evaluate Trader #1 relative to Trader #2, and the only information on hand is the history of their respective cumulative trading revenues (i.e., trading profits).
• This information allows you to compare their profits and the volatility of their profits, but says nothing about their risks.
Applying Evaluation Triangle
With risk information you can compare the traders more effectively. The chart shows, for the two traders, the risk ratio, Sharpe ratio, and efficiency ratio over time, providing comparative information that leads to a richer evaluation (see the sketch below).
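On simulated data, such ratios can be computed directly. The sketch below compares two hypothetical traders with equal average profits but different volatility; all figures are assumed.

```python
import numpy as np

# Risk-adjusted comparison of two traders from daily revenue series.
# Equal average revenues, different volatility: the ratio separates them.
rng = np.random.default_rng(7)
rev1 = rng.normal(4_000, 50_000, 250)   # Trader #1: volatile P&L (USD)
rev2 = rng.normal(4_000, 20_000, 250)   # Trader #2: steadier P&L (USD)

for name, rev in (("Trader #1", rev1), ("Trader #2", rev2)):
    sharpe = rev.mean() / rev.std(ddof=1)   # Sharpe-style ratio, daily
    print(f"{name}: total P&L = {rev.sum():>10,.0f}, ratio = {sharpe:.3f}")
```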
4. Regulatory Reporting & Capital Requirement
• Financial institutions such as banks and investment firms will soon have to meet capital requirements to cover the market risks that they incur as a result of their normal operations.
• Currently, the driving forces developing international standards for market-risk-based capital requirements are:
– The European Community, which issued a binding Capital Adequacy Directive (EC-CAD)
– The Basel Committee on Banking Supervision at the Bank for International Settlements (Basel Committee)
Basel Accord
• The RiskMetrics data set can be used to meet the requirements set forth in the Basel Accord.
• The following formula accounts for the adjustments that need to be made for log changes, the 95% confidence level, and the holding period.
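The slide presents this formula and its definitions as an image. A reconstruction consistent with the comparison table below — a sketch, assuming the adjustment simply rescales the RiskMetrics 95%/1-day statistic to Basel's 99%/10-day requirement — is:

$$
\mathrm{VaR}_{\text{Basel}} \;\approx\; \mathrm{VaR}_{\text{RM}} \times \frac{2.32}{1.65} \times \sqrt{10}
$$

where 2.32 and 1.65 are the standard normal multiples at the 99% and 95% confidence levels, and $\sqrt{10} \approx 3.1$ scales the 1-day statistic to a 10-day holding period (together about 7.3 daily standard deviations, matching the table).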
Comparing the Basel Committee Proposal with RiskMetrics
Issue — Mapping: how positions are described in summary form
• Basel Committee proposal: Fixed income: at least 6 time buckets; differentiate government yield curves and spread curves. Equities: country indices; individual stocks on the basis of beta equivalents. Commodities: to be included, not specified how.
• RiskMetrics: Fixed income: data for 7–10 buckets of government yield curves in 16 markets, 4 buckets of money market rates in 27 markets, and 4–6 buckets of swap rates in 18 markets. Equities: country indices in 27 markets; individual stocks on beta (correction for non-systematic risk). Commodities: 80 volatility series in 11 commodities (spot and term).

Issue — Volatility: how statistics of future price movement are estimated
• Basel Committee proposal: Volatility expressed as the standard deviation of a normal distribution proxy, estimated from daily historical observations going back a year or more; equal weights or an alternative weighting scheme, provided the effective observation period is at least one year. Estimate updated at least quarterly.
• RiskMetrics: Volatility expressed as the standard deviation of a normal distribution proxy, estimated from exponentially weighted daily historical observations with decay factors of 0.94 (for trading; 74-day cutoff at 1%) and 0.97 (for investing; 151-day cutoff at 1%). A Special Regulatory Data Set incorporates the Basel Committee's 1-year moving average assumption. Estimates updated daily.

Issue — Adversity: size of adverse move in terms of normal distribution
• Basel Committee proposal: Minimum adverse move expected to happen with probability of 1% (2.32 standard deviations) over 10 business days; permission to use daily statistics scaled up with the square root of 10 (3.1). Equivalent to 7.3 daily standard deviations.
• RiskMetrics: For trading, minimum adverse move expected to happen with probability of 5% (1.65 standard deviations) over 1 business day; for investment, the same probability over 25 business days.
Comparing the Basel Committee Proposal with RiskMetrics
Issue — Options: treatment of time value and non-linearity
• Basel Committee proposal: The risk estimate must consider the effect of non-linear price movement (gamma effect) and must include the effect of changes in implied volatilities (vega effect).
• RiskMetrics: Non-linear price movement can be estimated analytically (delta-gamma) or under a simulation approach, with scenarios generated from the estimated volatilities and correlations. Estimates of the volatilities of implied volatilities are currently not provided, hence limited coverage of options risk.

Issue — Correlation: how risks are aggregated
• Basel Committee proposal: The portfolio effect can be considered within asset classes (fixed income, equity, commodity, FX); use of correlations across asset classes is subject to regulatory approval. Correlations estimated with equally weighted daily data covering more than one year.
• RiskMetrics: The full portfolio effect is considered across all possible parameter combinations. Correlations estimated using exponentially weighted daily historical observations with decay factors of 0.94 (for trading; 74-day cutoff at 1%) and 0.97 (for investing; 151-day cutoff at 1%).

Issue — Residuals: treatment of instrument-specific risks
• Basel Committee proposal: Instrument-specific risks not covered by the standard maps should be estimated; capital requirements at least equal to 50% of the charge calculated under the standard methodology.
• RiskMetrics: Does not deal with instrument-specific risks beyond those covered by the standard maps.
THANK YOU
This assumption is important to be able to treat the firm’s equity as a vanilla European call option, and therefore apply the Black-Scholes pricing formula.
Merton’s Model
Equity represents a call option on the firm’s assets with maturity T and strike price of D
At maturity T
VT > D
VT < D
The firm’s asset value VT is enough to pay back the face value of the debt D,
•Firm does not default •Shareholders receive VT-D
•Firm defaults •Bondholders take control of the firm, and shareholders receive nothing
Merton’s Model
Other assumptions are: • Inexistence of transaction costs, bankruptcy costs, taxes or problems with indivisibilities of assets • Continuous time trading • Unrestricted borrowing and lending at a constant interest rate r • No restrictions on the short selling of the assets • The value of the firm is invariant under changes in its capital structure (Modigliani-Miller Theorem) •The firm’s asset value follows a diffusion process given by
where ?V is the (relative) asset volatility and Wt is a Brownian motion
Merton’s Model
• The payoffs to equity holders and bondholders at time T under the assumptions of this model are:
• Applying the Black-Scholes pricing formula, the value of equity at time t (0 ? t ? T) is given by
where ? (.) is the distribution function of a standard normal random variable and d1and d2 are given by
Merton’s Model
• The probability of default at time T is given by
• Therefore, the value of the debt at time t is z (t, T) = Vt ? Et • In order to implement Merton’s model we have to estimate the firm’s asset value Vt, its volatility ?V (both unobservable processes), and we have to transform the debt structure of the firm into a zero-coupon bond with maturity T and face value D. • The maturity T of the zero-coupon bond can be chosen either to represent the maturity structure of the debt, for example as the Macaulay duration of all the liabilities, or simply as a required time horizon (for example, in case we are pricing a credit derivative with some specific maturity).
Merton’s Model – Advantages & Criticisms
• • The main advantage of Merton’s model is that it allows to directly apply the theory of European options pricing developed by Black and Scholes (1973). One problem of Merton’s model is the restriction of default time to the maturity of the debt, ruling out the possibility of an early default, no matter what happens with the firm’s value before the maturity of the debt Another handicap of the model is that the usual capital structure of a firm is much more complicated than a simple zero-coupon bond Another characteristic of Merton’s model, which will also be present in some of the FPM, is the predictability of default. Since the firm’s asset value is modelled as a geometric Brownian motion and default can only happen at the maturity of the debt, it can be predicted with increasing precision as the maturity of the debt comes near. As a result, in this approach default does not come as a surprise, which makes the models generate very low short-term credit spreads. introducing jumps in the process followed by the asset value has been one of the solutions considered to this problem.
• •
FIRST PASSAGE MODEL
First Passage Models
• First Passage Models (FPM) were introduced by Black and Cox (1976) extending the Merton model to the case when the firm may default at any time, not only at the maturity date of the debt • Although Black and Cox (1976) considered a time dependent default threshold, let us assume first a constant default threshold K > 0. If we are at time t ? 0 and default has not been triggered yet and Vt > K, then the time of default ? is given by • Using the properties of the Brownian motion Wt, in particular the reflection principle, we can infer the default probability from time t to time T
• FPM have been extended to account for stochastic interest rates, bankruptcy costs, taxes, debt subordination, strategic default, time dependent and stochastic default barrier, jumps in the asset value process, etc. • More analytically complex • The default threshold (+ve) can be interpreted in various ways. We can think of it as a safety covenant of the firm’s debt which allows the bondholders to take control of the company once its asset value has reached this level • The safety covenant would act as a protection mechanism for the bondholders against an unsatisfactory corporate performance
FPM - Drawbacks
• The principal drawback of FPM is the analytical complexity that they introduced • This mathematical complexity makes difficult to obtain closed form expressions for the value of the firm’s equity and debt, or even for the default probability • Predictability of defaults • Predictability of recovery
VASICEK’S MODEL
VASICEK’S MODELS
There are 2 models used by BASEL II given by Vasicek:
1. Vasicek’s Stochastic Interest Rate Model
2. Vasicek’s Probability of Loss (PL) on Loss Portfolio Model
I VASICEK’S MODEL on Interest rates
• The Vasicek model is a mathematical model describing the evolution of interest rates. • It is a type of "one-factor model" (short rate model) as it describes interest rate movements as driven by only one source of market risk. The model can be used in the valuation of interest rate derivatives, and has also been adapted for credit markets. It was introduced in 1977 by Oldrich Vasicek. • Vasicek's model was the first one to capture mean reversion, an essential characteristic of the interest rate that sets it apart from other financial prices. Thus, as opposed to stock prices for instance, interest rates cannot rise indefinitely. • This is because at very high levels they would hamper economic activity, prompting a decrease in interest rates. Similarly, interest rates can not decrease indefinitely. As a result, interest rates move in a limited range, showing a tendency to revert to a long run value.
Ornstein-Uhlenbeck stochastic process
The model specifies that the instantaneous interest rate follows the stochastic differential equation:
Ornstein-Uhlenbeck stochastic process Ornstein-Uhlenbeck stochastic process
The model specifies that the instantaneous interest rate follows the stochastic differential equation:
Wt is a Wiener process modelling the random market risk factor.
Ornstein-Uhlenbeck stochastic process Ornstein-Uhlenbeck stochastic process
The model specifies that the instantaneous interest rate follows the stochastic differential equation:
The standard deviation parameter, ?, determines the volatility of the interest rate
Ornstein-Uhlenbeck stochastic process Ornstein-Uhlenbeck stochastic process
The model specifies that the instantaneous interest rate follows the stochastic differential equation:
The parameter b represents the long run equilibrium value towards which the interest rate reverts
Ornstein-Uhlenbeck stochastic process Ornstein-Uhlenbeck stochastic process
The model specifies that the instantaneous interest rate follows the stochastic differential equation:
The parameter a, governing the speed of adjustment, needs to be positive to ensure stability around the long term value. For example, when rt is below b, the drift term a(b ? rt) becomes positive for positive a, generating a tendency for the interest rate to move upwards (toward equilibrium).
Ornstein-Uhlenbeck stochastic process Ornstein-Uhlenbeck stochastic process
The model specifies that the instantaneous interest rate follows the stochastic differential equation:
The drift factor a(b ? rt) represents the expected instantaneous change in the interest rate at time t.
Ornstein-Uhlenbeck stochastic process Solving the stochastic equation
Ornstein-Uhlenbeck Stochastic IR Modeling with stochastic process
• Forecasting of future Interest rates • The short term Interest rate is assumed to be constant in Merton’s model and thus the model disregards the Interest rate risk. This shortcoming can be overcome by allowing for a stochastic interest rates according to Vasicek’s model
Ornstein-Uhlenbeck stochastic process Disadvantages
•The main disadvantage is that, under Vasicek's model, it is theoretically possible for the interest rate to become negative, an undesirable feature. (This shortcoming was fixed in the Cox-Ingersoll-Ross model. The Vasicek model was further extended in the Hull-White model.) •The use of Vasicek's model does not improve the predictive power of current market data significantly. The evidence strongly suggests the need to use a multiple-factor model to improve the fit to the observed yield curve and the predictive ability of the model for future interest rate movements, if the market data contains any information.
Ornstein-Uhlenbeck stochastic process II. Vasicek’s Model & BASEL II
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
• Vasicek applied Merton’s model to a portfolio of borrowers. As in Merton’s model, each borrower is characterised by a lognormal asset value process. A borrower defaults when its asset-value falls below a threshold defined by its liabilities. • Correlations between borrowers’ defaults arise from correlation between their asset values. Correlations between asset values, in turn, are described by a common dependence of asset returns on a single systematic factor, representing the state of the economy. Thus the asset return for each borrower has a systematic component, reflecting the effect of the economy on the borrower, and an idiosyncratic component, describing fortunes and misfortunes unique for the borrower.
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
• Assuming homogeneous portfolio of loans with zero recoveries, Vasicek derived the distribution function for the portfolio loss. The Vasicek model has also been refined and extended to include nonhomogeneous portfolios and non-zero stochastic recoveries.
• To describe different systematic effects for firms belonging to different industries and/or different geographical regions, the single systematic factor in the Vasicek model was replaced by a set of correlated systematic factors. This multi-factor extension of the Vasicek model lies in the foundation of such industry models as KMV’s PortfolioManager™ and RiskMetrics’ CreditMetrics™. Not all models of portfolio credit risk are of the Merton-Vasicek type (eg, CSFB’s Credit Risk Plus™).
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
The cumulative probability that the percentage loss on a portfolio of n loans does not exceed ? is
………………Equation 1
where Pk are given by an integral expression in Oldrich Vasicek’s memo, Probability of Loss on Loan Portfolio,
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
Now,
Substituting in equation 1
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
……..Continued on next slide.
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
PROBABILITY OF LOSS ON LOAN PORTFOLIO Consider a portfolio consisting of n loans in equal dollar amounts. Let the probability of default on any one loan be p, and assume that the values of the borrowing companies’ assets are correlated with a coefficient ? for any two companies. We wish to calculate the probability distribution of the percentage gross loss L on the portfolio, that is,
……..cont. on next slide
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
PROBABILITY OF LOSS ON LOAN PORTFOLIO The company defaults on its loan if the value of its assets drops below the contractual value of its obligations Di payable at time T. We thus have
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
PROBABILITY OF LOSS ON LOAN PORTFOLIO Because of the joint normality and the equal correlations, the processes zi can be represented as
Ornstein-Uhlenbeck stochastic process Vasicek’s Model & BASEL II
PROBABILITY OF LOSS ON LOAN PORTFOLIO
MOODY’S KMV MODEL
Elements of credit risk
The elements of credit risk can therefore be grouped as follows: Standalone Risk • Default probability—the probability that the counterparty or borrower will fail to service obligations. • Loss given default—the extent of the loss incurred in the event the borrower or counterparty defaults. Portfolio Risk • Default correlations—the degree to which the default risks of the borrowers and counterparties in the portfolio are related. • Exposure—the size, or proportion, of the portfolio exposed to the default risk of each counterparty and borrower.
Measuring default probability: the problem
• Three main elements that determine the default probability of a firm: • Value of Assets: the market value of the firm's assets
• Asset Risk: the uncertainty or risk of the asset value. This is a measure of the firm's business and industry risk.
• Leverage: the extent of the firm's contractual liabilities.
MOODY’S KMV Model
• Whereas the relevant measure of the firm's assets is always their market value, the book value of liabilities relative to the market value of assets is the pertinent measure of the firm's leverage, since that is the amount the firm must repay.
• The default risk of the firm increases as the value of the assets approaches the book value of the liabilities, until finally the firm defaults when the market value of the assets is insufficient to repay the liabilities.
MOODY’S KMV Model
• Figure illustrates the evolution of the asset value and book liabilities of Winstar Communications, a New York Telephone company that filed for Chapter 11 bankruptcy protection in April 2001.
MOODY’S KMV Model
• In study of defaults, it is found that in general firms do not default when their asset value reaches the book value of their total liabilities. • The default point, the asset value at which the firm will default, generally lies somewhere between total liabilities and current, or short-term, liabilities. • The relevant net worth of the firm is therefore the market value of the firm's assets minus the firm's default point.
MOODY’S KMV Model
Asset risk and Asset volatility
• The asset risk is measured by the asset volatility, the standard deviation of the annual percentage change in the asset value. • Asset volatility is related to the size and nature of the firm's business.
• Asset value, business risk and leverage can be combined into a single measure of default risk • This ratio is called as the distance-to-default and it is calculated as:
Measuring Default Probability: Practical Approach
• Three basic types of information available that are relevant to the default probability of a publicly traded firm:
– financial statements, – market prices of the firm's debt and equity, – subjective appraisals of the firm's prospects and risk
• MKMV has implemented the VK model to calculate an Expected Default Frequency™ (EDF™) credit measure which is the probability of default during the forthcoming year, or years for firms with publicly traded equity • The EDF value requires equity prices and certain items from financial statements as inputs
• EDF credit measure assumes that Default is defined as the nonpayment of any scheduled payment, interest or principal.
Determining Default Probability
Three steps in the determination of the default probability of a firm: • Estimate asset value and volatility: In this step the asset value and asset volatility of the firm is estimated from the market value and volatility of equity and the book value of liabilities. • Calculate the distance-to-default: The distance-to-default (DD) is calculated from the asset value and asset volatility (estimated in the first step) and the book value of liabilities.
• Calculate the default probability: The default probability is determined directly from the distance-to-default and the default rate for given levels of distance-to-default.
Step 1:Estimate Asset Value and Volatility
• If the market price of equity is available, the market value and volatility of assets can be determined directly using an options pricing based approach, which recognizes equity as a call option on the underlying assets of the firm. • For example, consider a simplified case where there is only one class of debt and one class of equity.
• The limited liability feature of equity means that the equity holders have the right, but not the obligation, to pay off the debt holders and take over the remaining assets of the firm. • The holders of the other liabilities of the firm essentially own the firm until those liabilities are paid off in full by the equity holders. • In the simplest case, equity is the same as a call option on the firm’s assets with a strike price equal to the book value of the firm’s liabilities.
• The model uses this option nature of equity to derive the underlying asset value and asset volatility implied by the market value, volatility of equity, and the book value of liabilities.
• Assume that we initially invest $20 in the firm and borrow a further $80 from a bank. The proceeds,$100, are invested in equities. • At the end of five years, if the market value of the assets is $60 then the value of equity will be zero. If the value of the assets is $110, then the value of the equity will be $30, and so on.
• Assume now that we are interested in valuing our equity prior to the final winding up of the firm.
• Marked the equities to market and their value is determined to be $80, there are two years remaining before we wind the firm up • Value of equity will be greater than zero because it is the value of the assets two years hence that really matters and there is still a chance that the asset value will be greater than $80 in two years time
• Asset value and volatility are the only unknown quantities in these relationships and thus the two equations can be solved to determine the values implied by the current equity value, volatility and capital structure.
Step 2:Calculate the Distance-todefault
• There are six variables that determine the default probability of a firm over some horizon, from now until time H 1. The current asset value. 2. The distribution of the asset value at time H . 3. The volatility of the future assets value at time H . 4. The level of the default point, the book value of the liabilities. 5. The expected rate of growth in the asset value over the horizon. 6. The length of the horizon, H .
• If the value of the assets falls below the default point, then the firm defaults. Therefore, the probability of default is the probability that the asset value will fall below the default point. This is the shaded area (EDF value) below the default point in Figure
• However, in practice, the distribution of the distance-to-default is difficult to measure. • MKMV first measures the distance-to-default as the number of standard deviations the asset value is away from default • Then uses empirical data to determine the corresponding default probability
Step 3: Calculate the Default Probability
• The relationship between distance-to-default and default probability from data on historical default and bankruptcy frequencies • Database includes over 250,000 company-years of data and over 4,700 incidents of default or bankruptcy • From this data, a lookup or frequency table can be generated which relates the likelihood of default to various levels of distance-todefault.
Option nature of equity to derive the market value and volatility of assets
• BS model - The market value of the firm’s underlying assets follows the following stochastic process: dV =? V dt+? V dz where - VA ,dVA are the firm’s asset value and change in asset value, - ? ,? A are the firm’s asset value drift rate and volatility, and - dz is a Wiener process - The BS model allows only two types of liabilities, a single class of debt and a single class of equity.
• If X is the book value of the debt which is due at time T then the market value of equity and the market value of assets are related by the following expression:
• The probability of default is the probability that the market value of the firm’s assets will be less than the book value of the firm’s liabilities by the time the debt matures. That is:
• The BS model assumes that the random component of the firm’s asset returns is Normally distributed, ?~Na0,1f and as a result we can define the default probability in terms of the cumulative Normal distribution:
MOODY’S KMV Model ( Private Companies)
• The Moody’s RiskCalc v3.1 model introduces the next-generation default prediction technology for private, middle-market companies • The performance of the RiskCalc v3.1 model is based in part on an extraordinary and proprietary database developed by Moody’s KMV: the Credit Research Database • Significant investment to expand and refine this core data set, increasing its cross-sectional and time series coverage of private firm data
• Credit Research Database contained more than 6,500,000 financial statements on more than 1,500,000 unique private firms with more than 97,000 default events worldwide
• Cutting-edge processes for cleaning the data and addressing differences in local accounting and reporting practices • Data Integrity - Clean the data to detect obvious data problems or data collection issues, such as whether default information or financial statements are missing for a given institution in a given region or time period, and whether balance sheets balance.
• Designed to meet the requirements for default models found in the New Basel Capital Accord (or Basel II) • Model supports critical requirements for delivering consistent risk estimates, risk ratings, default probabilities, and model validation
Basel II and Model
• Consistent Risk Estimates • The RiskCalc v3.1 model will always produce the same estimate of default risk for a given set of inputs, which meets a critical requirement of the Basel II Accord:
“The overarching principle behind these requirements is that rating and risk estimation systems . . . provide for a meaningful differentiation of risk, and accurate and consistent quantitative estimates of risk.” [Basel II, paragraph 351]
• Forward-looking Risk Ratings • In addition to fundamental financial statement inputs, the RiskCalc v3.1 model incorporates the collective perspective of the market sector in which a firm operates. This is consistent with the Basel II Accord requirement that risk-rating models use all available information in determining a borrower’s rating—including the impact of future economic conditions: “A borrower rating must represent the bank’s assessment of the borrower’s ability and willingness to contractually perform despite adverse economic conditions or the occurrence of unexpected events.” *Basel II, paragraph 376+
• Stress Testing Default Probabilities • The RiskCalc v3.1 model is uniquely designed to stress test a firm’s sensitivity to the probability of default at different stages of a credit cycle. This feature satisfies a leading imperative of the New Basel Capital Accord: “An IRB (internal ratings-based) bank must have in place sound stress testing processes for use in the assessment of capital adequacy. Stress testing should involve identifying possible events or future changes in economic conditions that could have unfavorable effects on a bank’s credit exposures and assessment of the bank’s ability to withstand such changes. Examples of scenarios that usefully could be examined are: (i) economic or industry downturns; (ii) market-risk events; and (iii) liquidity conditions.” [Basel II, paragraph 396]
• Validation • RiskCalc v3.1 models are designed to meet the Basel II Accord’s stringent requirements for validating ratings: “Banks must have a robust system in place to validate the accuracy and consistency of rating systems, processes, and the estimation of all relevant risk components.” *Basel II, paragraph 463]
• Validate models using a rigorous testing process that demonstrates their power outside the development sample. • These tests include of out-of- sample testing (using defaults and non-defaults which were not used in the model development, such as a “hold-out” sample) and comparisons to other models
The Model
• For users who desire a stable estimate of a firm’s default risk based only on a firm’s financial statements, the RiskCalc v3.1 model can be configured in Financial Statement Only (FSO) mode. • For most accurate determination of the default risk of private company credits and an efficient monthly monitoring process, the RiskCalc v3.1 model can be configured in complete version
The Financial Statement Only Mode
• The mode includes financial statement variables that capture a firm’s long-run performance. • Predictions of a middle-market firm’s default risk update only as often as the firm updates its financial statements • Selected specific financial ratios to develop a robust and informative model that is transparent, intuitive, and highly predictive for out-of-sample data
Ratios • Generally, experience is that including a large number of ratios in a quantitative model may yield a model that is “overfitted.” • Selected a limited number of financial ratios that yield a powerful model. • Following broad risk factors of financial performance: • Profitability • Leverage • Debt coverage • Growth variables • Liquidity • Activity ratios • Size
• In building financial statement model - include at least one variable from each group. • High profitability reduces the probability of default. • High leverage increases the probability of default • Growth variables behave like a double-edged sword: both rapid growth and rapid decline (negative growth) will tend to increase a firm’s default probability. • High liquidity reduces the probability of default. • A large stock of inventories relative to sales increases the probability of default • Large firms default less often
• These ratios relates to varying degrees to credit risk • Research shows a nonlinear relationship between many of these ratios and a firm’s probability of default.
FSO Functional Form
• FSO models are based on the following functional form:
• where x1,...,xN are the input ratios; I1,...,IK are indicator variables for each of the industry classifications; • ? and ? are estimated coefficients; • ? is the cumulative normal distribution; • F and T1,...,TN are non-parametric transforms; and FSO EDF is the financial statement-only EDF credit measure. • The Ts capture non-linear impacts of financial ratios on the default likelihood
The Complete Version
• Moody claim it as the most predictive measures of credit risk for private, middle market companies • The model combines forward-looking equity market information that reflects the general credit cycle and the state of the firm’s industry with firm-specific data about private companies • This model delivers the following:
- Significantly more accurate EDF levels - More frequent updates of all relevant information - The ability to stress test EDF credit measures under different credit cycle scenarios (a Basel II imperative)
Distance-To-Default: Using Market Data from Public Firm Model to Improve Private Firm Predictions • Market prices for claims on the assets of private firms are generally not available • RiskCalc v3.1 imbeds credit insight from market information at the industry sector level • That industry sector information comes from the current month information on the sector’s average distance-to-default • Including the distance-to-default factor, not on the individual private firm but from an aggregation of public companies in the corresponding sector, improves the performance of private firm models by incorporating forward-looking market price dynamics.
• When the distance-to-default measure is trending downward, or closer to the default barrier, on average for the public firms in a given sector, the probability of default for a private firm in that sector should be adjusted upward as this indicator contains information that is not yet in financial statements
• By controlling for industry variation, the RiskCalc v3.1 model:
- Corrects for intrinsic differences in default probability across industries - Adjusts for differences in interpretation of financial ratios across industries, and corrects for spurious effects - Improves EDF performance and accuracy
• Financial Statement Ratios used in RiskCalc v3.1 U.S., Canada, U.K., and Japan on next slide
MOODY’S KMV Model for BANKS
RiskCalc Modes • RiskCalc v3.1 allows a user to assess the risk of a privately-held bank in two ways: Financial Statement Only (FSO) and Credit Cycle Adjusted (CCA).
• The FSO mode delivers a bank’s default risk based only on regulatory filings. In this mode, the risk assessments produced by the model are relatively stable over time. • The CCA mode adjusts the default risk by taking into account the current stage of the credit cycle. The CCA adjustment is specific to the firm’s sector and country and is updated regularly
Model Components
• The inputs to the model include selection of the financial ratios and transforms of those ratios, and the credit cycle adjustment. • The development of a RiskCalc model involves the following steps: 1. Choosing a limited number of financial statement variables for the model from a list of possible variables 2. Transforming the variables into interim probabilities of default using non-parametric techniques. 3. Estimating the weightings of the financial statement variables, using a probit model combined with industry variables. 4. Creating a (non-parametric) final transform that converts the probit model score into an actual EDF credit measure.
Financial Statement Variables
• Selecting the Variables • The working list of ratios is divided into groups that represent different underlying concepts regarding a bank’s financial status
• Model is then built with at least one variable per group.
• Once the variables are selected, they are transformed into a preliminary EDF credit measure • The relative value of each variable used in calculating an EDF credit measure is important in understanding a company’s risk • The non-linear nature of the model makes the weight of the variables more difficult to determine • Model weights, therefore, are calculated based on the average EDF value for the transformation and its standard deviation • A variable with a flat transformation could have a low weight, even if the coefficient is large
Credit Cycle Adjustment
• EDF credit measures are impacted not only by the financials of a company, but also by the general credit cycle in the economy • The credit cycle adjustment is designed to incorporate the current credit cycle into the estimate of private bank default risk • If the distance-to-default for public firms in the banking industry indicates a level of risk above the historical average, then the private banks’ EDF credit measures are adjusted upward • Conversely, if the level of risk is below the historical average, then the private banks’ EDF credit measures are adjusted downward
CREDIT SUISSE FIRST BOSTON MODEL
Types of Credit Risk
• Credit spread risk • Credit default risk
Components of Credit Risk?
Credit Risk? Model
• Based on a portfolio approach to modelling credit default risk • A statistical model of credit default risk that makes no assumptions about the causes of default • Considers default rates as continuous random variables and incorporates the volatility of default rates in order to capture the uncertainty in the level of default rates
Credit Risk? Model
• Mathematical techniques applied widely in the insurance industry are used to model the sudden event of an obligor default. This approach contrasts with the mathematical techniques typically used in finance • Applying insurance modelling techniques, the analytic CREDITRISK+ Model captures the essential characteristics of credit default events and allows explicit calculation of a full loss distribution for a portfolio of credit exposures • No assumptions are made about the causes of default
Default Rate Behaviour
• Continuous variable When treated as a continuous variable, the possible default rate over a given time horizon is described by a distribution, which can be specified by a default rate and a volatility of the default rate. The data requirements for modelling credit default risk are analogous to the data requirements for pricing stock options - a forward stock price and the stock price volatility are used to define the forward stock price distribution
Default Rate Behaviour (Contd…)
• Discrete variable
By treating the default rate as a discrete variable, a simplification of the continuous process described earlier is made. A convenient way of making default rates discrete is by assigning credit ratings to obligors and mapping default rates to credit ratings. Using this approach, additional information is required in order to model the possible future outcomes of the default rate. This can be achieved via a rating transition matrix that specifies the probability of keeping the same credit rating, and hence the same value for the default rate, and the probabilities of moving to different credit ratings and hence to different values for the default rate
Portfolio Approach of Managing Credit Risk
The portfolio risk of a particular exposure is determined by four factors • the size of the exposure • maturity of the exposure • the probability of default of the obligor • the systematic or concentration risk of the obligor
Credit Risk Exposures
• Model is capable of handling all types of instruments that give rise to credit exposure , including bonds, loans, commitments, financial letters of credit and derivative exposures
• In addition, if a multi-year time horizon is being used, it is important that the changing exposures over time are accurately captured
Default rates
A default rate, which represents the likelihood of a default event occurring, should be assigned to each obligor. This can be achieved in a number of ways, including: • Observed credit spreads from traded instruments can be used to provide market-assessed probabilities of default • Alternatively, obligor credit ratings, together with a mapping of default rates to credit ratings, provide a convenient way of assigning probabilities of default to obligors. The rating agencies publish historic default statistics by rating category for the population of obligors that they have rated • Another approach is to calculate default probabilities on a continuous scale, which can be used as a substitute for the combination of credit ratings and assigned default rates.
Default rate Volatilities
Published default statistics include average default rates over many years. • Actual observed default rates vary from these averages. • The amount of variation in default rates about these averages can be described by the volatility (standard deviation) of default rates. • As can be seen in the table,the standard deviation of default rates can be significant compared to actual default rates, reflecting the highfluctuations observed during economic cycles.
Recovery rates
• In the event of a default of an obligor, a firm generally incurs a loss equal to the amount owed by the obligor less a recovery amount, which the firm recovers as a result of foreclosure, liquidation or restructuring of the defaulted obligor or the sale of the claim. • Recovery rates should take account of the seniority of the obligation and any collateral or security held
Recovery rates
•
There is also considerable variation for obligations of differing seniority, as can be seen from the standard deviation of the corporate bond and bank loan recovery rates in the table below
Stages in Modeling process
The modelling of credit risk is a two stage process, as shown in the following diagram:
Frequency of default events
• The Model makes no assumption about the causes of default - credit defaults occur as a sequence of events in such a way that it is neither possible to forecast the exact time of occurrence of any one default nor the exact total number of defaults. • It models the underlying default rates by specifying a default rate and a default rate volatility. This aims to take account of the variation in default rates in a pragmatic manner, without introducing significant model error.
Moving from Default Events to Default Losses
• The distribution of losses differs from the distribution of default events because the amount lost in a given default depends on the exposure to the individual obligors • Unlike the variation of default probability between obligors, which does not influence the distribution of the total number of defaults, the variation in exposure magnitude results in a loss distribution that is not Poisson in general
In the event of a default of an obligor: • A firm generally incurs a loss equal to the amount owed by the obligor less a recovery amount, which the firm obtains as a result of foreclosure, liquidation or restructuring of the defaulted obligor. • A recovery rate is used to quantify the amount received from this process. Recovery rates should take account of the seniority of the obligation and any collateral or security held • Model calculates the probability that a loss of a certain multiple of the chosen unit of exposure will occur. This allows a full loss distribution to be generated, as shown in the figure ahead
Sector Analysis
Model measures the benefit of portfolio diversification and the impact of concentrations through the use of sector analysis: • Concentration Risk • Allocating all Obligors to a Single Sector • Allocating Obligors to one of Several Sectors • Apportioning Obligors across Several Sectors • The Impact of Sectors on the Loss Distribution
Multi-Year Losses for a Hold-toMaturity Time Horizon
• Model allows risk of the portfolio to be viewed on a hold-to maturity time horizon in order to capture any default losses that could occur until maturity of the credit exposure • Analysing credit exposures on a multi-year basis enables the risk manager to compare exposures of different size, credit quality, and maturity
• The loss distribution produced provides, for any chosen level of confidence, an indication of the possible cumulative losses that could be suffered until all the exposures have matured
Using the CREDITRISK+ Model to Calculate Multi-Year Loss Distributions
• Model can be used to calculate multi-year loss distributions by decomposing the exposure profile over time into separate elements of discrete time, with the present value of the remaining exposure in each time period being assigned a marginal default probability relevant to the maturity and credit quality. • These decomposed exposure elements can then be used by the CREDITRISK+ Model to generate a loss distribution on a hold-to-maturity basis.
RISKMETRICS TM
J.P. MORGAN/REUTERS
Reasons For Methodology
• Greater transparency of market risks
• A benchmark for market risk measurement • Understanding and evaluation of advice given to clients on managing risk
RiskMetrics
A set of techniques and data to measure market risks in portfolios of • • • • • Fixed income instruments Equities Foreign exchange Commodities Derivatives
What is RiskMetrics?
RiskMetrics is a set of tools that enable participants in the financial markets to estimate their exposure to market risk under what has been called the “Value-at-Risk framework”. RiskMetrics has three basic components: • A set of market risk measurement methodologies • Data sets of volatility and correlation data used in the computation of market risk • Software systems developed by J.P.Morgan, subsidiaries of Reuters, and third party vendors that implement the methodologies
PART I : RISK MEASUREMENT FRAMEWORK
Introduction
• Risk - the degree of uncertainty of future net returns • Classification of risks is based on the source of the underlying uncertainty
– Credit risk estimates the potential loss because of the inability of a counterparty to meet its obligations
– Operational risk results from errors that can be made in instructing payments or settling transactions
– Liquidity risk is reflected in the inability of a firm to fund its illiquid assets
– Market risk involves the uncertainty of future earnings resulting from changes in market conditions, (e.g., prices of assets, interest rates). Over the last few years measures of market risk have become synonymous with the term Valueat-Risk
Introduction to Value-at-Risk
• Value-at-Risk (VaR) is a measure of the maximum potential change in value of a portfolio of financial instruments with a given probability over a pre-set horizon • VaR answers the question: How much can I lose with x% probability over a given time horizon?
EXAMPLE 1
You are a USD-based corporation and hold a DEM 140 million FX position. What is your VaR over a 1-day horizon given that there is a 5% chance that the realized loss will be greater than what VaR projected? The choice of the 5% probability is discretionary and differs across institutions using the VaR framework What is your exposure? As a USD based investor, your exposure is equal to the market value of the position in your base currency. If the foreign exchange rate is 1.40 DEM/USD, the market value of the position is USD 100 million
What is Your Risk?
• Requires an estimate of how much the exchange rate can potentially move • Standard deviation of the return on the DEM/USD exchange rate, measured historically can provide an indication of the size of rate movements (say 0.565%) • Under the standard RiskMetrics assumption that standardized returns (rt/?t) on DEM/USD are normally distributed given the value of this standard deviation, VaR is given by 1.65 times the standard deviation (that is, 1.65 ?) or 0.932%
What is Your Risk?
• In USD, the VaR of the position is equal to the market value of the position times the estimated volatility or:
• FX Risk: $100 million * 0.932% = $932,000
• It means is that 95% of the time, you will not lose more than $932,000 over the next 24 hours.
VaR Statistics
EXAMPLE 2
You are a USD-based corporation and hold a DEM 140 million position in the 10-year German government bond. What is your VaR over a 1-day horizon period, again, given that there is a 5% chance of understating the realized loss?
What is your exposure? The only difference versus the previous example is that you now have both interest rate risk on the bond and FX risk resulting from the DEM exposure. The exposure is still USD 100 million but it is now at risk to two market risk factors.
What is Your Risk?
If you use an estimate of 10-year German bond standard deviation of 0.605%, you can calculate: Interest rate risk: $100 million * 1.65 * 0.605% = $999,000 FX Risk: $100 million * 1.65 * 0.565% = $932,000 Total Risk • Estimated correlation between the returns on the DEM/USD exchange rate and the 10-year German government bond is 0.27 • Using a formula common in standard portfolio theory, the total risk of the position is given by:
= $ 1.168 million
A More Advanced Approach
Value-at-Risk is a number that represents the potential change in a portfolio’s future value. How this change is defined depends on
1.
The horizon over which the portfolio’s change in value is measured and
2.
The “degree of confidence” chosen by the risk manager.
VaR Calculation Steps
The Value-at-Risk calculation consists of the following steps. 1. 2. Mark-to-market the current portfolio. Denote this value by V0. Define the future value of the portfolio, V1 , as V1 = V0er where r represents the return on the portfolio over the horizon. For a 1-day horizon, this step is unnecessary as RiskMetrics assumes a 0 return Make a forecast of the 1-day return on the portfolio and denote this value by r’, such that there is a 5% chance that the actual return will be less than r’. Alternatively expressed, Probability (r < r’) = 5% Define the portfolio’s future “worst case” value V1’, as V1’ = V0 er. The Value-at-Risk estimate is simply (V0 – Vi’)
3.
4.
EXAMPLE 3
1. 2. Consider a portfolio whose current marked-to-market value, V0, is USD 500 million. To carry out the VaR calculation we require 1-day forecasts of the mean ?1. Within the RiskMetrics framework, we assume that the mean return over a 1-day horizon period is equal to 0. We also need the standard deviation, ?1 , of the returns in this portfolio. Assuming that the return on this portfolio is distributed conditionally normal, r’ = -1.65 ?1 + ?1. The RiskMetrics data set provides the term 1.65 ?1. Hence, setting ?1 = 0 and ?1 = 0.0321, we get V1’ = USD 474.2 million. This yields a Value-at-Risk of USD 25.8 million (given by V0 – V1’)
3.
4.
Simulated Portfolio Changes
Using RiskMetrics to Compute VaR on a Portfolio of Cash Flows
Step 1: Each financial position in a portfolio is expressed as one or more cash flows that are marked-to-market at current market rates. For example, consider an instrument that gives rise to three USD 100 cash flows each occurring in 1, 4, and 7 months’ time
Step 2: When necessary, the actual cash flows are converted to RiskMetrics cash flows by mapping (redistributing) them onto a standard grid of maturity vertices, known as RiskMetrics vertices, which are fixed at the following intervals: 1m 3m 6m 12m 2yr 3yr 4yr 5yr 7yr 9yr 10yr 15yr 20yr 30yr • To map the cash flows, we use the RiskMetrics vertices closest to the actual vertices andredistribute the actual cash flows as shown in Chart
Mapping Actual Cash Flows onto RiskMetrics Vertices
Step 3: VaR is calculated at the 5th percentile of the distribution of portfolio return, and for a specified time horizon. In the example above, the distribution of the portfolio return, rp , is written as:
• Under the assumption that is distributed conditionally normal, the fifth percentile is -1.65 ?p where ?p is the standard deviation of the portfolio return distribution. • Applying portfolio theory equation to a portfolio containing more than two instruments requires using simple matrix algebra. We can thus express this VaR calculation as follows:
Where,
And
Note that RiskMetrics provides the vector of information as well as the correlation matrix R. What the user has to provide are the actual portfolio weights.
Measuring the Risk of Nonlinear Positions
• In our previous examples, we could easily estimate the risk of a fixed income or foreign exchange product by assuming a linear relationship between the value of an instrument and the value of its underlying. This is not a reasonable assumption when dealing with nonlinear products such as options.
• RiskMetrics offers two methodologies, to compute the VaR of nonlinear positions: – An analytical approximation and – A structured Monte Carlo simulation
Analytical Approximation
• This method approximates the nonlinear relationship via a mathematical expression that relates the return on the position to the return on the underlying rates. This is done by using what is known as a Taylor series expansion.
• This approach no longer assumes that the change in value of the instrument is approximated by its delta alone (the first derivative of the option's value with respect to the underlying variable).
• Rather, a second-order term using the option's gamma (the second derivative of the option's value with respect to the underlying price) is introduced to measure the curvature of changes in value around the current value.
• In practice, other "greeks" such as vega (volatility), rho (interest rate) and theta (time to maturity) can also be used to improve the accuracy of the approximation.
• There are two types of analytical methods for computing VaR: the delta and the delta-gamma approximations. A sketch of both follows below.
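A minimal sketch of the two Taylor approximations of an option's change in value, given its delta and gamma (the gamma of 25.0 below is an illustrative value, not taken from the source):

def delta_approx(delta, dS):
    """First-order (delta) approximation of the option P&L."""
    return delta * dS

def delta_gamma_approx(delta, gamma, dS):
    """Second-order approximation adds the curvature (gamma) term."""
    return delta * dS + 0.5 * gamma * dS**2

# For small moves the two agree closely; for large moves the gamma term matters:
for dS in (0.001, 0.01, 0.05):
    print(dS, delta_approx(0.49, dS), delta_gamma_approx(0.49, 25.0, dS))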
Structured Monte Carlo Simulation
• It involves creating a large number of possible rate scenarios and revaluing the instrument under each of these scenarios.
• VaR is then defined as the 5th percentile of the distribution of value changes.
• Due to the required revaluations, this approach is computationally more intensive than the analytical approximation.
EXAMPLE 4
Consider a portfolio comprising two assets:
• Asset 1: A future cash flow stream of DEM 1 million to be received in one year's time. The current 1-year DEM rate is 10%, so the current market value of the instrument is DEM 909,091.
• Asset 2: An at-the-money (ATM) DEM put/USD call option with a contract size of DEM 1 million and an expiration date one month in the future. The premium of the option is 0.0105 and the spot exchange rate at which the contract was concluded is 1.538 DEM/USD. We assume the implied volatility at which the option is priced is 14%.
• The value of this portfolio depends on the USD/DEM exchange rate and the one-year DEM bond price • Our risk horizon for the example will be five days
• We take as the daily volatilities of these two assets σFX = 0.42% and σB = 0.08%, and as the correlation between the two ρ = −0.17.
• Both alternatives will focus on price risk exclusively and therefore ignore the risk associated with volatility (vega), interest rate (rho) and time decay (theta risk).
Analytical Method: Delta Approximation
• The simplest first-order approximation is to estimate changes in the option value via a linear model, which is commonly known as the "delta approximation".
• Delta (δ) is the first derivative of the option price with respect to the spot exchange rate. The value of δ for the option in this example is 0.4919.
• The return on this portfolio, consisting of a cash flow in one year and a put on the DEM/call on the USD, is written as a weighted sum of the 1-year bond return and the δ-weighted exchange rate return.
• Under the assumption that the portfolio return is normally distributed, VaR at the 95% confidence level is given by 1.65 times the standard deviation of the portfolio return.
• Using our volatility and correlation forecasts for DEM/USD and the 1-year DEM rate, the weekly VaR for the portfolio can then be approximated with the delta equivalent approach, as sketched below.
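A minimal sketch of the delta-equivalent VaR computation. The net USD factor exposures below are hypothetical constructions for illustration; the true exposures depend on quoting conventions and the sign of the option position, which the slides leave to a chart:

import math

def delta_var(exposures, daily_vols, corr, horizon_days=5, z=1.65):
    """VaR = z * sqrt(h) * sqrt(x' C x), with x_i = exposure_i * daily_vol_i."""
    x = [e * v for e, v in zip(exposures, daily_vols)]
    n = len(x)
    variance = sum(x[i] * corr[i][j] * x[j] for i in range(n) for j in range(n))
    return z * math.sqrt(horizon_days) * math.sqrt(variance)

vols = [0.0042, 0.0008]               # daily sigma_FX, sigma_B from the example
corr = [[1.0, -0.17], [-0.17, 1.0]]   # rho between the two factors
# Hypothetical netting: long the DEM bond, delta-short DEM through the put
fx_exposure = 909_091 / 1.538 - 0.4919 * 1_000_000 / 1.538
bond_exposure = 909_091 / 1.538       # interest-rate exposure of the bond, in USD
print(f"USD {delta_var([fx_exposure, bond_exposure], vols, corr):,.0f}")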
Analytical Method: Delta-Gamma Approximation
• The delta approximation is reasonably accurate when the exchange rate does not change significantly, but less so in the more extreme cases.
• This is because the delta is a linear approximation of a nonlinear relationship between the value of the exchange rate and the price of the option.
• We may be able to improve this approximation by including the gamma term, which accounts for nonlinear (i.e., squared returns) effects of changes in the spot rate. The expression for the portfolio return now gains a term in the squared exchange rate return.
• The gamma term introduces skewness into the distribution of rp (i.e., the distribution is no longer symmetrical around its mean). Since this violates one of the assumptions of normality (symmetry), we can no longer calculate the 95% confidence VaR as 1.65 times the standard deviation of rp.
• Instead we must find the appropriate multiple (the counterpart to −1.65) that incorporates the skewness effect. We compute the 5th percentile of rp's distribution by computing its first four moments, i.e., rp's mean, variance, skewness and kurtosis. We then find a distribution whose first four moments match those of rp.
• Applying this methodology, we find the VaR for this portfolio to be USD 3,708. A sketch of a skewness-adjusted percentile follows below.
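The source matches the first four moments of rp to a flexible distribution family. A lighter-weight alternative, shown here purely for illustration and not the method named in the source, is a Cornish-Fisher adjustment of the normal percentile. A minimal sketch using the skewness term only:

def cornish_fisher_percentile(mean, std, skew, z=-1.65):
    """Skewness-adjusted 5th percentile of a near-normal distribution."""
    z_adj = z + (z**2 - 1.0) * skew / 6.0   # third-moment Cornish-Fisher term
    return mean + z_adj * std

# Negative skew pushes the 5th percentile further out, raising VaR:
print(cornish_fisher_percentile(0.0, 0.01, -0.5))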
Value of Put Option on USD/DEM
Structured Monte-Carlo Simulation
This approach is based on a full valuation precept where all instruments are marked to market under a large number of scenarios driven by the volatility and correlation estimates. The Monte Carlo methodology consists of three major steps (a sketch follows the list):
1. Scenario generation: Using the volatility and correlation estimates for the underlying assets in our portfolio, we produce a large number of future price scenarios.
2. Portfolio valuation: For each scenario, we compute a portfolio value.
3. Summary: We report the results of the simulation, either as a portfolio distribution or as a particular risk measure.
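A minimal sketch of the three steps applied to the example portfolio. Several simplifications are assumptions of this sketch, not the source's method: the option is priced with plain Black-Scholes at zero interest rates (rather than full FX option pricing), time to expiry is held fixed (consistent with ignoring theta), and the DEM bond is assumed to carry both rate and FX risk:

import math
import numpy as np

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(spot, strike, vol, t):
    """Black-Scholes put, interest rates set to 0 for simplicity (assumption)."""
    d1 = (math.log(spot / strike) + 0.5 * vol * vol * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return strike * norm_cdf(-d2) - spot * norm_cdf(-d1)

rng = np.random.default_rng(0)

# Inputs from the example (USD-per-DEM quote assumed for convenience)
spot0 = strike = 1 / 1.538
ivol, expiry = 0.14, 1 / 12
notional = 1_000_000
bond0 = 909_091 / 1.538                  # USD value of the 1-year DEM bond
vols = np.array([0.0042, 0.0008])        # daily sigma_FX, sigma_B
corr = np.array([[1.0, -0.17], [-0.17, 1.0]])
cov = np.outer(vols, vols) * corr * 5    # 1-day covariance scaled to 5 days

# 1. Scenario generation: 1,000 correlated return scenarios via Cholesky
L = np.linalg.cholesky(cov)
rets = rng.standard_normal((1000, 2)) @ L.T

# 2. Portfolio valuation: full repricing under each scenario
def value(fx_ret, b_ret):
    spot = spot0 * math.exp(fx_ret)
    bond = bond0 * math.exp(b_ret + fx_ret)  # DEM bond also moves with FX (assumption)
    return bond + notional * bs_put(spot, strike, ivol, expiry)

pnl = np.array([value(r, b) for r, b in rets]) - value(0.0, 0.0)

# 3. Summary: VaR is the 5th percentile of the value-change distribution
print(f"5-day 95% VaR ~= USD {-np.percentile(pnl, 5):,.0f}")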
Histogram and scatter diagram of rate distributions
• Using our volatility and correlation estimates, we can apply our simulation technique to our example portfolio. We generate a large number of scenarios (1,000 in this example) of DEM 1-year rates and DEM/USD exchange rates at the 1-week horizon.
Valuation of Instruments in Sample Portfolio
Representation of VaR
PART II : RISKMETRICS AND MODERN FINANCIAL MANAGEMENT
What Does RiskMetrics Provide?
Comparing ALM to VaR
The advantages of VaR management are that it:
• Incorporates the mark-to-market approach uniformly.
• Relies on a much shorter horizon forecast of market variables. This improves the risk estimate, as short-horizon forecasts tend to be more accurate than long-horizon forecasts.
VaR: Two Steps Beyond Accounting
PART III : APPLICATION OF RISKMETRICS
1. Market Risk Limits
• Position limits have traditionally been expressed in nominal terms, futures equivalents or other denominators unrelated to the amount of risk effectively incurred
Setting limits in terms of Value-at-Risk has significant advantages:
• Position benchmarks become a function of risk, and positions in different markets and products can be compared through this common measure.
• A common denominator rids the standard limits manuals of a multitude of measures which differ for every asset class.
• Limits become meaningful for management as they represent a reasonable estimate of how much could be lost.
• VaR measures incorporate portfolio or risk diversification effects. This leads to hierarchical limit structures in which the risk limit at higher levels can be lower than the sum of the risk limits of the units reporting to it (see the illustration after the chart).
Hierarchical VaR Limit Structure
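A hypothetical illustration of why a parent's limit can sit below the sum of its children's limits. The numbers here are made up: two desks each carry a VaR limit of 10, with a return correlation of 0.3 between them:

import math
# Two desks with VaR limits of 10 each; correlation 0.3 (hypothetical inputs)
combined = math.sqrt(10**2 + 10**2 + 2 * 0.3 * 10 * 10)
print(round(combined, 1))  # 16.1, less than the sum of the two limits (20)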
Setting limits in terms of risk helps business managers to allocate risk to those areas which they feel offer the most potential, or in which their firm's expertise is greatest. This motivates managers of multiple risk activities to favor risk-reducing diversification strategies.
2. Calibrating Valuation and Risk Models
• An effective method to check the validity of the underlying valuation and risk models is to compare DEaR (Daily Earnings-at-Risk) estimates with realized daily profits and losses over time.
• By definition, the cone delimited by the +/− DEaR lines should contain 90% of all the stars, because DEaR is defined as the maximum amount of expected profits or losses 90% of the time.
• If substantially more than 10% of the stars fall outside the DEaR cone, the underlying models underestimate the risks.
• If no stars fall outside the DEaR cone, and none even come close to the lines, the underlying models overestimate the risks. A backtesting sketch follows below.
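A minimal backtesting sketch along these lines, assuming daily P&L and DEaR series as inputs:

import numpy as np

def dear_exceedance_rate(daily_pnl, dear):
    """Fraction of days when realized P&L lands outside the +/- DEaR cone.
    A well-calibrated model should produce roughly 10%; materially more
    suggests risk is underestimated, materially less that it is overestimated."""
    daily_pnl, dear = np.asarray(daily_pnl), np.asarray(dear)
    return float(np.mean(np.abs(daily_pnl) > dear))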
DEaR vs. Actual Daily P&L
3. Performance Evaluation
• Trading and position-taking talent has been rewarded to a significant extent on the basis of total returns. Given the high rewards bestowed on outstanding trading talent, this may bias trading professionals towards taking excessive risks.
• Evaluating performance on a risk-adjusted basis requires a standard measure of risk. Ideally, risk taking should be evaluated on the basis of three interlinked measures:
1. Revenues
2. Volatility of revenues
3. Risks
Performance Evaluation Triangle
EXAMPLE 5
• Assume you have to evaluate Trader #1 relative to Trader #2 and the only information on hand is the history of their respective cumulative trading revenues (i.e., trading profits).
• This information allows you to compare their profits and the volatility of their profits, but says nothing about their risks.
Applying Evaluation Triangle
With risk information you can compare the traders more effectively. The chart shows, for the two traders, the risk ratio, Sharpe ratio, and efficiency ratio over time. It provides interesting comparative information which leads to a richer evaluation.
4. Regulatory Reporting & Capital Requirement
• Financial institutions such as banks and investment firms will soon have to meet capital requirements to cover the market risks that they incur as a result of their normal operations.
• Currently, the driving forces developing international standards for market-risk-based capital requirements are:
– The European Community, which issued a binding Capital Adequacy Directive (EC-CAD)
– The Basel Committee on Banking Supervision at the Bank for International Settlements (Basel Committee)
Basel Accord
• The RiskMetrics data set can be used to meet the requirements set forth in the Basel Accord.
• The following formula accounts for the adjustments that need to be made for log changes, the 95% confidence level and the holding period.
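The formula itself appears as an image in the source. A hedged reconstruction from the comparison table below, rescaling the 95%/1-day RiskMetrics statistics to the Basel Committee's 99% confidence level and 10-day holding period (the log-change adjustment mentioned above is not reproduced here):

\text{VaR}_{\text{Basel}} \approx \text{VaR}_{\text{RiskMetrics}} \times \frac{2.32}{1.65} \times \sqrt{10}

where 2.32 and 1.65 are the normal multipliers for 1% and 5% tail probabilities, and the square root of 10 scales a 1-day estimate to 10 business days.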
Comparing the Basel Committee Proposal with RiskMetrics
Mapping: how positions are described in summary form
• Basel Committee proposal: Fixed Income: at least 6 time buckets; differentiate government yield curves and spread curves. Equities: country indices; individual stocks on the basis of beta equivalents. Commodities: to be included, not specified how.
• RiskMetrics: Fixed Income: data for 7–10 buckets of government yield curves in 16 markets, 4 buckets of money market rates in 27 markets, 4–6 buckets of swap rates in 18 markets. Equities: country indices in 27 markets; individual stocks on beta (correction for non-systematic risk). Commodities: 80 volatility series in 11 commodities (spot and term).

Volatility: how statistics of future price movements are estimated
• Basel Committee proposal: Volatility expressed as the standard deviation of a normal distribution, proxied by daily historical observations going back a year or more. Equal weights or an alternative weighting scheme, provided the effective observation period is at least one year. Estimate updated at least quarterly.
• RiskMetrics: Volatility expressed as the standard deviation of a normal distribution, proxied by exponentially weighted daily historical observations with decay factors of 0.94 (for trading; 74-day cutoff at 1%) and 0.97 (for investing; 151-day cutoff at 1%). A Special Regulatory Data Set incorporates the Basel Committee's 1-year moving average assumption. Estimates updated daily.

Adversity: size of the adverse move in terms of the normal distribution
• Basel Committee proposal: Minimum adverse move expected to happen with probability of 1% (2.32 standard deviations) over 10 business days. Permission to use daily statistics scaled up by the square root of 10 (3.1). Equivalent to 7.3 daily standard deviations.
• RiskMetrics: For trading: minimum adverse move expected to happen with probability of 5% (1.65 standard deviations) over 1 business day. For investment: minimum adverse move expected to happen with probability of 5% (1.65 standard deviations) over 25 business days.
Comparing the Basel Committee Proposal with RiskMetrics

Options: treatment of time value and non-linearity
• Basel Committee proposal: The risk estimate must consider the effect of non-linear price movements (gamma effect). The risk estimate must include the effect of changes in implied volatilities (vega effect).
• RiskMetrics: Non-linear price movements can be estimated analytically (delta-gamma) or under the simulation approach, with simulation scenarios generated from the estimated volatilities and correlations. Estimates of the volatilities of implied volatilities are currently not provided, thus coverage of options risk is limited.

Correlation: how risks are aggregated
• Basel Committee proposal: The portfolio effect can be considered within asset classes (Fixed Income, Equity, Commodity, FX). Use of correlations across asset classes is subject to regulatory approval. Correlations estimated with equally weighted daily data going back more than one year.
• RiskMetrics: The full portfolio effect is considered across all possible parameter combinations. Correlations estimated using exponentially weighted daily historical observations with decay factors of 0.94 (for trading; 74-day cutoff at 1%) and 0.97 (for investing; 151-day cutoff at 1%).

Residuals: treatment of instrument-specific risks
• Basel Committee proposal: Instrument-specific risks not covered by the standard maps should be estimated. Capital requirements at least equal to 50% of the charge calculated under the standard methodology.
• RiskMetrics: Does not deal with specific risks not covered by the standard maps.
THANK YOU