
EXTENSIONS OF THE MARKET VAR METHODOLOGY

There are several extensions of the VaR methodology. The E-VaR is the expectation of the loss conditional on the loss exceeding the VaR threshold at the preset confidence level. It is easy to derive under the normal assumptions, but it can be derived from historical or hypothetical simulations as well. Hypothetical scenarios also allow stress-testing the VaR to see what happens under extreme conditions.

E-VaR or Expected Shortfall

The E-VaR, also named expected shortfall or expected tail loss, is the expected loss conditional on the loss being lower than or equal to the VaR. We call the random loss L, and the threshold point matching the percentile α, or VaR, L(α). The VaR is the loss percentile such that P[L ≤ L(α)] = F(L(α)) = α, where F is the cumulative distribution function of losses. F is empirical under the historical VaR and normal under the delta-normal VaR. The E-VaR is the probability-weighted average of losses exceeding the VaR.
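In symbols, with α the percentile and F the loss distribution, the definition reads (written here for a discrete distribution of losses):

\[
\text{E-VaR}_\alpha = E[L \mid L \le L(\alpha)] = \frac{1}{F(L(\alpha))} \sum_{x \le L(\alpha)} x \, P(L = x)
\]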

Using Bayes' rule, considering a value x of the random portfolio loss:
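The conditional distribution this refers to is:

\[
P[L \le x \mid L \le L(\alpha)] = \frac{P[L \le x]}{P[L \le L(\alpha)]} = \frac{F(x)}{F(L(\alpha))}, \qquad x \le L(\alpha)
\]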

By definition, P[L ≤ L(α)] = α. The probability P[L ≤ x | L ≤ L(α)] that the random loss L is lower than or equal to the value x, conditional on being lower than or equal to L(α), is the unconditional probability that the loss is lower than or equal to x divided by the unconditional probability that the loss is lower than or equal to L(α), which is F(L(α)) = α. By definition, the summation of the probabilities P(L = x) over all values of x from the lower bound of L up to the loss percentile L(α) equals α.

For example, if we select the percentile α = 1%, then F(L(α)) = 1% and the cumulative probability of reaching or exceeding the VaR threshold is 1%. The conditional probability attached to any value of loss x lower than the VaR is P(L = x)/α = P(L = x)/1%. The sum of these conditional probabilities over all losses such that L ≤ L(α) is equal to 1 (100%).

The expected shortfall is the probability-weighted average loss conditional on the loss being lower than or equal to L(α). This requires the calculation of the expectation of the truncated distribution of L, from the lower bound of L up to the truncation occurring at L(α). In the case of a normal distribution, the E-VaR can be derived from the probability density function of the normal distribution. In this case, F = Φ, the standard normal cumulative distribution function, and L(α) = Φ⁻¹(α). Using φ(y) as the probability density function of the standard normal distribution, with y being the standardized variable of loss:
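For the standardized loss y, this truncated expectation has the standard closed form:

\[
E[y \mid y \le \Phi^{-1}(\alpha)] = \frac{1}{\alpha} \int_{-\infty}^{\Phi^{-1}(\alpha)} y \, \varphi(y) \, dy = -\frac{\varphi(\Phi^{-1}(\alpha))}{\alpha}
\]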

It is proportional to the normal density at Φ⁻¹(α) since the value of the density at the lower bound is zero. Since this applies to a standardized variable of portfolio loss, this value should be multiplied by the volatility of the portfolio loss, neglecting the expectation of the P&L variation over a very short period.
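As a quick numerical check of this closed form (a minimal sketch assuming scipy is available; the 1% percentile and the portfolio volatility figure are illustrative, not taken from the text):

```python
from scipy.stats import norm

alpha = 0.01           # percentile matching the 99% confidence level (illustrative)
sigma = 1_000_000      # daily volatility of the portfolio P&L (illustrative)

z = norm.ppf(alpha)                    # standardized loss percentile, Phi^-1(alpha) ~ -2.326
var = z * sigma                        # delta-normal VaR as an adverse P&L deviation
e_var = -norm.pdf(z) / alpha * sigma   # E-VaR = -phi(Phi^-1(alpha)) / alpha, scaled by volatility

print(f"VaR(99%)  : {var:,.0f}")       # about -2,326,000
print(f"E-VaR(99%): {e_var:,.0f}")     # about -2,665,000, deeper in the tail than the VaR
```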

In practice, we can find the E-VaR from any parametric or non-parametric distribution by taking the average value of all adverse deviations of the daily P&L beyond the VaR. Say we have n values of losses L_i strictly lower than the loss matching the VaR at 99%, each with a frequency in percentage equal to p_i. The E-VaR is:
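In other words, with L_i denoting the tail losses and p_i their frequencies, the probability-weighted average is:

\[
\text{E-VaR} = \frac{\sum_{i=1}^{n} p_i \, L_i}{\sum_{i=1}^{n} p_i}
\]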

Figure 36.2 shows the tail of the simulated distribution of values of the forward contract, from which the historical VaR is derived. The full distribution of loss values was obtained by running 1,000 simulations of the risk factors.


FIGURE 36.2 Tail of the distribution of values of the forward contract

TABLE 36.3 Calculation of E-VaR


The tail losses and their frequencies are shown in Table 36.3. The VaR at 1% is exactly 11,000,000. The first column, the frequency count, shows the number of times a loss value is observed. The next column shows the loss value, followed by the loss value weighted by the frequency count. The last column shows the cumulative frequencies in percentage. The loss percentile at 1%, 11,000,000, can be obtained by summing the frequency counts of loss occurrences across rows, which totals 10, or 1% of the 1,000 simulations. The first row shows the loss value matching the 1% percentile.

The E-VaR is the average of loss occurrences beyond the VaR. The same loss can occur several times, as Table 36.3 shows in the first column, and should be counted as many times as it occurs. The E-VaR is therefore the average of the losses weighted by the number of occurrences of each loss value. In other words, we observe the loss values beyond the VaR and take the weighted average, ignoring blank values. Blank cells correspond to loss values that do not appear in the distribution. The E-VaR is 19,800,000. In the calculation, only those losses in excess of the VaR are used, ignoring the first row.

The E-VaR is above the VaR since it is a weighted average of the loss values beyond the VaR on the graph, weighted by their number of occurrences.
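The calculation of Table 36.3 can be reproduced mechanically from simulated data; a minimal sketch assuming numpy is available, with illustrative simulated P&L values rather than the actual figures behind the table:

```python
import numpy as np

rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 5_000_000, size=1_000)   # illustrative simulated daily P&L
losses = -pnl                                  # adverse deviations expressed as positive losses

alpha = 0.01
n_tail = int(alpha * len(losses))              # 10 tail observations out of 1,000

tail = np.sort(losses)[-n_tail:]               # the 1% largest losses
var_99 = tail[0]                               # historical VaR: the 1% loss percentile

# E-VaR: average of the losses strictly beyond the VaR, each occurrence counted,
# which is the frequency-weighted average computed in Table 36.3.
e_var = tail[tail > var_99].mean()

print(f"VaR(99%): {var_99:,.0f}")
print(f"E-VaR   : {e_var:,.0f}")               # above the VaR, deeper in the tail
```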

Hypothetical Scenarios, Stress-tests and Extreme VaR

Note that hypothetical simulations allow stress testing those factors to which the portfolio is highly sensitive. Extreme VaR measures involve extreme scenarios. Such scenarios might be judgmental and selective, as explained in Chapter 38 on stress tests. For instance, if management fears wide variations of risk factors selected for the portfolio's high sensitivity to them, assigning shocks to these factors would result in large adverse deviations of the P&L. This is the "factor-push" methodology, of which a prerequisite is the identification of those factors to which the portfolio's sensitivity is highest. The magnitude of the adverse effects makes explicit the response of the portfolio to stressing one or more factors. Using the extreme conditions that prevailed historically over a long period is a common way to stress test portfolios.
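A factor-push calculation can be sketched as follows; the factor names, sensitivities and shock sizes are hypothetical, chosen only to illustrate the mechanics:

```python
# Factor-push sketch: shock the risk factors to which the portfolio is most
# sensitive and sum the resulting adverse P&L impacts (first-order approximation).
# Sensitivities are P&L changes per unit move of each factor (hypothetical values).
sensitivities = {"EUR/USD": 8_000_000, "USD 1y rate": -4_500_000, "Equity index": 2_000_000}
adverse_shocks = {"EUR/USD": -0.05, "USD 1y rate": 0.01, "Equity index": -0.20}

impacts = {f: sensitivities[f] * adverse_shocks[f] for f in sensitivities}
stressed_pnl = sum(impacts.values())

for factor, impact in impacts.items():
    print(f"{factor:<14} {impact:>12,.0f}")
print(f"{'Total':<14} {stressed_pnl:>12,.0f}")
```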

Extreme VaR techniques differ in that they attempt to model extreme situations, using fat-tailed distributions. A common technique is to fit a distribution with fat tails to the selected highest loss values. The extreme value distributions, or the Pareto distribution family, serve for fitting the tail, separately from the rest of the distribution. The technique involves "smoothing the tail" to obtain better estimates of the value percentiles. It allows using a known distribution, instead of the modeled or simulated ones, for determining loss percentiles at low confidence levels without requiring calculation-intensive simulations. By definition, it relies on a small number of observations, and smoothing the tail ignores ranges of "blank values" and jumps, which can be a drawback.
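One way to implement the tail-fitting step is the peaks-over-threshold approach with a generalized Pareto distribution; a rough sketch assuming scipy is available, with an illustrative fat-tailed sample and an arbitrary threshold choice:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
sample = rng.standard_t(df=4, size=10_000)        # illustrative fat-tailed P&L sample
losses = -sample[sample < 0]                      # keep adverse deviations as positive losses

# Peaks over threshold: fit a generalized Pareto distribution to the exceedances
# over a high threshold (here the 95th percentile of the observed losses).
u = np.quantile(losses, 0.95)
exceedances = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0)  # shape xi and scale beta of the fitted tail

# Smoothed loss percentile from the fitted tail (standard POT quantile formula):
# x_p = u + (beta / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1)
n, n_u = len(losses), len(exceedances)
p = 0.999
extreme_var = u + (beta / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1)

print(f"threshold u        : {u:.3f}")
print(f"fitted xi, beta    : {xi:.3f}, {beta:.3f}")
print(f"smoothed 99.9% VaR : {extreme_var:.3f}")
```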

 