# Avoiding Risk Underestimation by Using the Jensen Inequality

## Avoiding the Risk of Overestimating Profit

Suppose that the demand for a particular product *X* is associated with variation (*X* is a random variable) and the variation of *X* cannot be controlled. The profit *Y* depends on the demand *X* through a particular function *Y = f(X)*. The question of interest is which strategy is more conservative with respect to assessing the average profit, in order to avoid an overestimation of the profits:

a. Averaging first the *n* random values $x_1, x_2, \ldots, x_n$ of the demand within the demand range, $\bar{x} = (1/n)\sum_{i=1}^{n} x_i$, followed by evaluating the average profit from $\bar{y} = f(\bar{x})$, or

b. Obtaining the average profit $\bar{y} = (1/n)\sum_{i=1}^{n} f(x_i)$ by averaging the profits corresponding to the *n* random demands $x_1, x_2, \ldots, x_n$ within the demand range.

The choice of strategy which avoids overestimating the average profit depends on whether the dependence *Y = f(X)* of the profit on the demand is concave or convex. Often, this dependence is concave, of the type shown in Figure 10.2, because after some

FIGURE 10.2 Profit-demand dependencies are often concave functions.

continuous increase in the profits with increasing demand, a flat region follows due to limited production capacity.

If the dependence *Y = f(X)* is concave, the following Jensen inequality holds:

$$f(w_1 x_1 + w_2 x_2 + \cdots + w_n x_n) \ge w_1 f(x_1) + w_2 f(x_2) + \cdots + w_n f(x_n) \quad (10.8)$$

where $w_i$ $(i = 1, \ldots, n)$ are weights that satisfy $0 \le w_i \le 1$ and $w_1 + w_2 + \cdots + w_n = 1$.

If the weights are chosen to be equal, $w_i = 1/n$, the Jensen inequality (10.8) becomes

$$f\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) \ge \frac{1}{n}\sum_{i=1}^{n} f(x_i) \quad (10.9)$$

In this case, the average of the profits at different levels of the demand can be significantly smaller than the profit calculated at the average level of the demand.
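Inequality (10.9) is easy to verify numerically. The sketch below uses the square root as a stand-in concave function and arbitrary sample values (both are illustrative choices, not taken from the text):

```python
import math

# Numerical check of the Jensen inequality (10.9) for a concave function.
# f = sqrt is a hypothetical concave function; the x_i are arbitrary samples.
f = math.sqrt
x = [1.0, 4.0, 9.0, 16.0, 25.0]
n = len(x)

lhs = f(sum(x) / n)                # f evaluated at the average of the x_i
rhs = sum(f(xi) for xi in x) / n   # average of f evaluated at each x_i

print(lhs, rhs)  # sqrt(11) ~ 3.317 versus (1+2+3+4+5)/5 = 3.0
assert lhs >= rhs
```

Any concave `f` and any sample values reproduce the same ordering: the function of the average dominates the average of the function.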

To demonstrate this, consider an example from the biotech industry, where the demand for a particular biochemical product varies uniformly from 0 to 300,000 kg per year and the capacity of the production plant is only 200,000 kg of product per year. Suppose that the profit in USD generated from selling the product is given by $y = 3.6x$, where $x$ is the quantity of the product (in kg) sold.

The profit function is therefore a concave function, defined in the following way:

$$y = \begin{cases} 3.6x, & 0 \le x \le 200{,}000 \\ 720{,}000, & x > 200{,}000 \end{cases}$$

The average demand is obviously 300,000/2 = 150,000 kg per year. The profit corresponding to the average demand is $y_1 = 3.6 \times 150{,}000 = 540{,}000$ USD. This is the value on the left-hand side of inequality (10.9).

The average of the profits was calculated by using a simple Monte Carlo simulation whose algorithm in pseudo-code is given next.

**Algorithm 10.1**

```
n = 100000;  % number of simulation trials
% f(x): the function that gives the profit at a random demand x
S = 0;
for i = 1 to n do
{
    tmp = 300000 * rand();  % random demand, uniformly distributed in [0, 300000]
    y = f(tmp);             % profit at that demand
    S = S + y;
}
Average_profit = S / n
```
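Algorithm 10.1 can be sketched directly in Python. The 200,000 kg capacity cap and the 3.6 USD/kg profit rate come from the example above; the seed and the helper names are arbitrary choices for this sketch:

```python
import random

def profit(x):
    """Concave profit function: 3.6 USD/kg, capped at the 200,000 kg capacity."""
    return 3.6 * min(x, 200_000)

def average_profit(n=100_000, seed=1):
    """Monte Carlo estimate of the average profit over a uniform demand."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        demand = rng.uniform(0, 300_000)  # random demand in [0, 300000] kg
        total += profit(demand)
    return total / n

# The estimate settles near 480,000 USD, below the 540,000 USD obtained
# by evaluating the profit at the average demand of 150,000 kg.
print(average_profit())
```

The exact expectation is $3.6 \cdot E[\min(X, 200{,}000)] = 480{,}000$ for $X$ uniform on $[0, 300{,}000]$, which the simulation approximates.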

Running the Monte Carlo simulation with 100,000 trials resulted in an average profit equal to 480,000. This is the value $\bar{y}_2 = (1/n)\sum_{i=1}^{n} f(x_i)$ on the right-hand side of inequality (10.9). The difference between the two values for the average profit is significant. Because of this significant difference, critical business decisions cannot be made on the basis of the simple calculation of the profit at the average demand. Instead, the average of the profits taken at different values of the random demand should be used, because it provides a realistic estimate of the real level of profits for the business. The correct decision, which eliminates the risk of overestimating the average profit, is averaging the profits at different levels of the demand rather than taking the profit at the average demand. Avoiding an overestimation of the profits avoids an optimistic valuation of the business.

## Avoiding the Risk of Underestimating the Cost of Failure

The downtime for repair *X* is always associated with variation (*X* is a random variable) and the variation of *X* cannot be controlled. An assessment of the average cost of failure *Y* (which depends on *X* through a particular dependence *Y = f(X)*) can be made in two alternative ways. The question of interest is which strategy is more conservative with respect to assessing the average cost of failure *Y*:

a. Averaging the *n* different downtimes $x_1, x_2, \ldots, x_n$, $\bar{x} = (1/n)\sum_{i=1}^{n} x_i$, and assessing the average cost of failure $\bar{y} = f(\bar{x})$ with the average value $\bar{x}$ of the downtime, or

b. Averaging the costs of failure $y_1 = f(x_1), \ldots, y_n = f(x_n)$ at the *n* different values $x_1, x_2, \ldots, x_n$ of the downtimes: $\bar{y} = (1/n)\sum_{i=1}^{n} f(x_i)$.

FIGURE 10.3 The choice of a risk avoiding strategy is driven by whether the cost function is (a) concave or (b) convex.

Again, the choice of strategy depends on whether the cost-of-failure dependence *Y=f(X)* is concave or convex.

If the function *Y = f(X)* is concave (Figure 10.3a), the following Jensen inequality holds:

$$f\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) \ge \frac{1}{n}\sum_{i=1}^{n} f(x_i)$$

In this case, taking the cost at the average downtime from $\bar{y} = f\left((1/n)\sum_{i=1}^{n} x_i\right)$ gives a higher (more conservative) estimate of the cost of failure.

If the function *Y = f(X)* is convex (Figure 10.3b), the following Jensen inequality holds:

$$f\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) \le \frac{1}{n}\sum_{i=1}^{n} f(x_i)$$

In this case, taking the average $\bar{y} = (1/n)\sum_{i=1}^{n} f(x_i)$ of the costs at the different downtimes gives a higher (more conservative) value of the cost of failure.
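The two cases can be checked numerically. The concave and convex cost functions below (a scaled square root and a scaled square) and the sample downtimes are hypothetical stand-ins chosen only for their curvature, not taken from the text:

```python
import math

# Illustration of the two Jensen-inequality cases for a cost-of-failure
# function of the downtime. Both cost functions are hypothetical examples.
downtimes = [2.0, 5.0, 8.0, 20.0]            # sample downtimes (hours)
mean_x = sum(downtimes) / len(downtimes)

def avg(f, xs):
    """Average of f over the sample values xs."""
    return sum(f(x) for x in xs) / len(xs)

concave_cost = lambda x: 100 * math.sqrt(x)  # diminishing marginal cost
convex_cost = lambda x: 10 * x ** 2          # accelerating marginal cost

# Concave: the cost at the average downtime is the conservative (higher) estimate.
assert concave_cost(mean_x) >= avg(concave_cost, downtimes)

# Convex: the average of the costs is the conservative (higher) estimate.
assert convex_cost(mean_x) <= avg(convex_cost, downtimes)
```

Swapping the strategies would understate the cost of failure in both cases, which is exactly the risk the choice of strategy is meant to avoid.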

These examples also show that the best results in eliminating profit overestimation and cost of failure underestimation are obtained from combining domain-specific knowledge and the domain-independent method based on algebraic inequalities. Domain-specific knowledge alone is not sufficient to achieve the risk reduction.

## A Conservative Estimate of System Reliability by Using the Jensen Inequality

Suppose that the reliability *X* of identical components from *n* separate batches is associated with variation (*X* is a random variable). The reliabilities of the components from the separate batches are $x_1, x_2, \ldots, x_n$ and the batch-to-batch variation of *X* cannot be controlled. Suppose that an assembly needs to be made which includes *m* identical components ($m \ge 2$) logically arranged in parallel. The *m* identical components needed to build the assembly can be taken from any of the *n* available batches. The reliability of a system including *m* components logically arranged in parallel is given by the equation

$$R = 1 - (1 - x)^m \quad (10.13)$$

where $x$ is the component reliability characterising a particular batch.

By using the particular values $x_1, x_2, \ldots, x_n$ of the reliabilities of the components from the separate batches, an assessment of the reliability *R* of the assembly is made. The question of interest is which approach gives a more conservative estimate of the average reliability of the assembly built with *m* components ($m \ge 2$) arranged in parallel:

a. Averaging the values of the reliabilities of the components from the separate batches, $\bar{x} = (1/n)\sum_{i=1}^{n} x_i$, and performing a single calculation $R = 1 - (1 - \bar{x})^m$ of the system's reliability with the average value $\bar{x}$ of the component reliability, or

b. Taking the average $\bar{R} = (1/n)\sum_{i=1}^{n} R_i$ of the reliabilities $R_i = 1 - (1 - x_i)^m$, $i = 1, \ldots, n$, of *n* assemblies built with components from the separate batches.

This question can be answered by investigating the system reliability function (10.13), whose second derivative with respect to $x$ is negative:

$$\frac{d^2 R}{dx^2} = -m(m-1)(1-x)^{m-2} < 0$$

for $m \ge 2$ and $0 < x < 1$.

Consequently, the reliability of the system is a concave function of the component reliability $x$, and for concave functions $f(x)$, the Jensen inequality states

$$f(w_1 x_1 + w_2 x_2 + \cdots + w_n x_n) \ge w_1 f(x_1) + w_2 f(x_2) + \cdots + w_n f(x_n) \quad (10.14)$$

where $w_i$ $(i = 1, \ldots, n)$ are weights that satisfy $0 \le w_i \le 1$ and $w_1 + w_2 + \cdots + w_n = 1$.

If the weights are chosen to be equal ($w_i = 1/n$), the Jensen inequality (10.14) becomes

$$f\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) \ge \frac{1}{n}\sum_{i=1}^{n} f(x_i)$$

For the system reliability function (10.13), calculating with the average component reliability results in a higher and more optimistic value.

Consequently, taking $\bar{R} = (1/n)\sum_{i=1}^{n}\left[1 - (1 - x_i)^m\right]$ for the system reliability gives a conservative estimate.
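As a quick numerical check of this conclusion, the sketch below compares the two approaches; the batch reliabilities and the value $m = 2$ are hypothetical illustration values, not taken from the text:

```python
# Conservative system-reliability estimate via the Jensen inequality.
# The batch reliabilities and m below are hypothetical illustration values.
batch_reliabilities = [0.80, 0.90, 0.95]  # x_1, ..., x_n for n = 3 batches
m = 2                                     # identical components in parallel
n = len(batch_reliabilities)

def system_reliability(x, m):
    """Reliability of m identical components in parallel (equation 10.13)."""
    return 1 - (1 - x) ** m

# Approach (a): a single calculation at the average component reliability.
x_bar = sum(batch_reliabilities) / n
r_at_mean = system_reliability(x_bar, m)

# Approach (b): the average of the per-batch system reliabilities.
r_avg = sum(system_reliability(x, m) for x in batch_reliabilities) / n

print(r_avg, r_at_mean)    # r_avg = 0.9825 is below r_at_mean ~ 0.9864
assert r_avg <= r_at_mean  # approach (b) is the conservative estimate
```

Because the system reliability is concave in the component reliability, calculating at the average reliability (approach a) always overstates the averaged per-batch result, so approach (b) is the safe choice.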