Constrained Optimization: Kuhn-Tucker Conditions

Inequality Constraints and the Kuhn-Tucker Conditions

Previously, we investigated procedures for solving problems with equality constraints. However, in most realistic problems, many constraints are expressed as inequalities. The inequality constraints form the boundaries of a set containing the solution.

One method for solving NLPs with inequality constraints uses the Kuhn-Tucker Conditions (KTC) for optimality, sometimes called the Karush-Kuhn-Tucker conditions.[1] In this section, we'll describe this set of conditions first graphically, then analytically. We'll discuss both necessary and sufficient conditions for x = (x1, x2, ..., xn) to be an optimal solution to an NLP with inequality constraints. We'll also illustrate how to use Maple to compute solutions. Last, we'll close with example applications.

Basic Theory of Constrained Optimization

The generic form of the NLPs we will study in this section is

Maximize (or minimize) z = f(x)
subject to g_i(x) ≤ b_i, g_i(x) ≥ b_i, or g_i(x) = b_i, for i = 1, 2, ..., m.   (4.4)

(Note: Since a = b is equivalent to (a ≤ b and a ≥ b), and a ≥ b is equivalent to −a ≤ −b, we could focus only on less-than inequalities; however, the technique is more easily understood by allowing all three forms.)

Recall that the optimal solution to an NLP with only equality constraints had to fall on one constraint or at an intersection of several constraints. With inequality constraints, the solution no longer must lie on a constraint or at an intersection point of constraints. We need a method that describes the position of the optimal solution relative to each constraint.

The technique based on the Kuhn-Tucker conditions involves defining a Lagrangian function of the decision variables x, the Lagrange multipliers λ_i, and the slack or surplus variables μ_i. The nonnegative quantity μ_i² is added to the ith 'less-than-or-equal' constraint to transform it to an equality: g_i(x) ≤ b_i becomes g_i(x) + μ_i² = b_i; the term μ_i² "picks up the slack" in the inequality. The quantity μ_j² is subtracted from the jth 'greater-than-or-equal' constraint to make it an equality: g_j(x) ≥ b_j becomes g_j(x) − μ_j² = b_j; the term μ_j² "holds the surplus" in the inequality. In this formulation, the shadow price for the ith constraint is −λ_i.

The Lagrangian function for our generic NLP (4.4) is

L(x, λ, μ) = f(x) + Σ_{i=1}^{m} λ_i [g_i(x) ± μ_i² − b_i].   (4.5)

Remember, the sign with μ_i² is + for ≤ constraints and − for ≥ constraints.

Analogously to the method of Lagrange multipliers, the computational procedure based on the KTC requires that all the partial derivatives of the Lagrangian function (4.5) equal zero. These vanishing partials form the necessary conditions for an optimal solution of (4.4).

Theorem. Necessary Conditions for an Optimal Solution.

If x* is an optimum for the NLP (4.4), then there are multipliers λ_i* and slack/surplus values μ_i* satisfying

∂L/∂x_j = 0 for j = 1, 2, ..., n,   (4.6a)
∂L/∂λ_i = g_i(x*) ± (μ_i*)² − b_i = 0 for i = 1, 2, ..., m,   (4.6b)
∂L/∂μ_i = 2λ_i*μ_i* = 0 for i = 1, 2, ..., m.   (4.6c)

Condition (4.6c) is called the complementary slackness condition.

The following theorem provides sufficient conditions for x* to be an optimal solution to the NLP given in (4.4).

Theorem. Sufficient Conditions for an Optimal Solution.

Suppose each g_i(x) is a convex function.

Maximum: If f(x) is concave, then any point x* that satisfies the necessary conditions is a maximal solution to (4.4). Further, each λ_i ≤ 0.

Minimum: If f(x) is convex, then any point x* that satisfies the necessary conditions is a minimal solution to (4.4). Further, each λ_i ≥ 0.

If the necessary conditions are satisfied, but the sufficient conditions are not completely satisfied, then we may use a bordered Hessian matrix to check the nature of a potential stationary or regular point. The bordered Hessian can be written in general as the block matrix
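The exact block layout varies from text to text; a standard presentation, with ∇g the Jacobian of the binding constraints and ∇²L the Hessian of the Lagrangian with respect to the decision variables, is

    H_B = [ 0         ∇g  ]
          [ (∇g)^T    ∇²L ]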

We can classify, if possible, the stationary points as maxima or minima according to the bordered Hessian’s definiteness. If the bordered Hessian is indefinite, then a different classification method must be used.

The Complementary Slackness Condition

The KTC computational solution process solves the 2^m possible cases for λ_i and μ_i, where m equals the number of constraints, then applies the necessary conditions to find optimal points. The 2 comes from the number of possibilities for each λ_i: either λ_i = 0 or λ_i ≠ 0. There is actually more to this process: it really involves the complementary slackness condition embedded in the necessary condition (4.6c), 2μ_iλ_i = 0. If μ_i equals zero, then λ_i can be nonzero, the shadow price −λ_i can be nonzero, and the ith constraint is binding: the optimal point lies on the constraint boundary. If μ_i is not equal to zero, then λ_i must be zero, the shadow price is zero, and the ith constraint is nonbinding: there is slack (≤ constraint) or surplus (≥ constraint), represented by μ_i². Ensuring the complementary slackness conditions are satisfied reduces the work involved in solving the other necessary conditions, Equations (4.6a) and (4.6b).

Based on this analysis, the complementary slackness necessary conditions (4.6c) lead to the solution process that we focus on for our computational and geometric interpretation. We have defined μ_i² as a slack or surplus variable. Therefore, if μ_i² equals zero, then our optimal point lies on the ith constraint, and if μ_i² is greater than zero, the optimal point is interior to the ith constraint boundary. However, if the value of μ_i is undefined because μ_i² equals a negative number, then the point of concern is infeasible. Figure 4.18 illustrates these conditions.


FIGURE 4.18: Complementary Slackness Geometrically

Computational KTC with Maple

First, we'll step through a solution using Maple for the computations.

Example 4.23. Two Variable-Two Constraint Linear Problem.

Maximize z = 3x + 2y

subject to

2x + y ≤ 100
x + y ≤ 80
x, y ≥ 0

Define the generalized Lagrangian (4.5) for this problem:

L = 3x + 2y + λ1(2x + y + μ1² − 100) + λ2(x + y + μ2² − 80).

The six necessary conditions are:

3 + 2λ1 + λ2 = 0   (4.7a)
2 + λ1 + λ2 = 0   (4.7b)
2x + y + μ1² − 100 = 0   (4.7c)
x + y + μ2² − 80 = 0   (4.7d)
2λ1μ1 = 0   (4.7e)
2λ2μ2 = 0   (4.7f)

Recognize that, since there are two constraints, there are four (2²) cases required to solve for the optimal solution. These 4 cases stem from the necessary conditions 2μ1λ1 = 0 and 2μ2λ2 = 0, the complementary slackness conditions. The cases are collected in Table 4.7.

TABLE 4.7: The Four Cases for Complementary Slackness

Case | Condition Imposed | Condition Inferred
I.   | λ1 = 0, λ2 = 0    | μ1 ≠ 0, μ2 ≠ 0
II.  | λ1 = 0, λ2 ≠ 0    | μ1 ≠ 0, μ2 = 0
III. | λ1 ≠ 0, λ2 = 0    | μ1 = 0, μ2 ≠ 0
IV.  | λ1 ≠ 0, λ2 ≠ 0    | μ1 = 0, μ2 = 0

For simplicity, we have arbitrarily made both x and y ≥ 0 for this maximization problem. Figure 4.19 shows a graphical representation.


FIGURE 4.19: The Region of Feasible Solutions

Returning to Cases I-IV, we observe that:

Case I. There is slack in both the first and second constraints since both μ_i² > 0. Therefore, we do not fall exactly on either of the constraint boundaries. This corresponds to the intersection point labeled P1 at (0,0), since for a linear problem the optimum occurs at an intersection (corner) point. This point is feasible, but is clearly not optimal: moving away from (0,0) in the first quadrant increases the objective function. This case will not yield an optimal solution.

Case II. The possible solution point is on the second constraint, but not on the first constraint. There are two possibilities, P3 and P5, from Figure 4.19. Point P5 is infeasible. Point P3 is feasible, but not an optimal solution. This case will not yield an optimal solution.

Case III. The possible solution point is on the first constraint, but not on the second constraint. There are two possible solutions, P2 and P4, from Figure 4.19. Point P4 is infeasible. Point P2 is feasible, but not optimal. Again, this case does not yield an optimal solution.

Case IV. The possible solution point lies on both constraint 1 and constraint 2 simultaneously. This corresponds to P6 in Figure 4.19. Point P6 is the optimal solution. It is the point of the feasible region furthest in the direction of increase of the objective function. This case will yield the optimal solution to the problem.

Sensitivity analysis is also enhanced by a geometric approach. Figure 4.19 shows that increasing the right-hand side of either or both constraints will extend the feasible region in the direction of the objective function's increase, thus increasing the value of the objective function. We can also see this through the computational process and the solution's values of the λ_i. Computational sensitivity analysis can be derived from the value of the shadow price −λ_i*.

Since the objective function and constraints are linear, they are both convex and concave, so the sufficient conditions are also satisfied.

The following computational analysis will show that Case IV yields the optimal solution, confirming the graphical solution.

Case I. λ1 = λ2 = 0. This case violates Equation (4.7a), which reduces to 3 ≠ 0, and Equation (4.7b), which reduces to 2 ≠ 0. This case also implies μ_i² ≠ 0 with slack in both inequalities.

Case II. λ1 = 0, λ2 ≠ 0. This case is inconsistent: Equation (4.7a) requires λ2 = −3, while (4.7b) requires λ2 = −2.

Case III. λ1 ≠ 0, λ2 = 0. This case is similarly inconsistent: Equation (4.7a) requires λ1 = −3/2, while (4.7b) requires λ1 = −2.

Case IV. λ1 ≠ 0, λ2 ≠ 0. This case implies that μ1 = μ2 = 0, which reduces (4.7) to the two sets

3 + 2λ1 + λ2 = 0
2 + λ1 + λ2 = 0

and

2x + y = 100
x + y = 80.

Solving these sets simultaneously yields the optimal solution x* = 20, y* = 60, giving the maximum f(x*, y*) = 180. We also have λ1 = −1, λ2 = −1, μ1 = μ2 = 0. The shadow price indicates that for a small change Δ in the right-hand side value of either Constraint 1 or Constraint 2, the objective value will increase by approximately −λ_i · Δ = Δ. The geometric interpretation reinforces the computational results, giving them meaning and fully showing the effect of binding constraints (constraints where μ_i = 0) on the solution.

The following is a short Maple session of the computations above.
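The worksheet itself is not reproduced here; the following minimal sketch (with our own names lambda1, mu1, and so on) carries out the Case IV computation:

    L := 3*x + 2*y + lambda1*(2*x + y + mu1^2 - 100)
                   + lambda2*(x + y + mu2^2 - 80):
    NC := {seq(diff(L, v) = 0, v in [x, y, lambda1, lambda2, mu1, mu2])}:
    # Case IV: lambda1 <> 0 and lambda2 <> 0 force mu1 = mu2 = 0
    solve(eval(NC, {mu1 = 0, mu2 = 0}), {x, y, lambda1, lambda2});
    # returns {lambda1 = -1, lambda2 = -1, x = 20, y = 60}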

How do we know we’ve found a maximum? Recall the rules for finding the maximum or minimum.

MAXIMUM: If f(x) is a concave function and each constraint g_i(x) is a convex function, then any point that satisfies the necessary conditions is an optimal solution that maximizes the function subject to the constraints and has each λ_i ≤ 0.

MINIMUM: If f(x) is a convex function and each constraint g_i(x) is a convex function, then any point that satisfies the necessary conditions is an optimal solution that minimizes the function subject to the constraints and has each λ_i ≥ 0.

The objective function is linear, and so is both convex and concave, as are the constraints. Since the values of λ_i are both negative, we have found the maximum.

We can use Maple’s own LagrangeMultipliers.
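A sketch of the call: the constraints are entered as the expressions appearing in the Lagrangian, the μ_i are listed among the variables, and output = detailed asks Maple to report the multipliers and the objective value (the names mu1, mu2 are ours):

    with(Student[MultivariateCalculus]):
    LagrangeMultipliers(3*x + 2*y,
        [2*x + y + mu1^2 - 100, x + y + mu2^2 - 80],
        [x, y, mu1, mu2], output = detailed);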

Note the sign difference on Maple's λs. Explain how this occurs.

In the next example, we add one constraint, x ≤ 40, to the previous problem. Adding one constraint causes the number of solution cases we must consider to grow from 2² to 2³, doubling to 8 cases: each additional constraint doubles the number of cases. The new problem with three constraints is shown in Figure 4.20. Again, for simplicity, we arbitrarily force both x and y ≥ 0.

Example 4.24. Two Variable, Three Constraint Linear Problem.

Maximize z = 3x + 2y

subject to

2x + y ≤ 100
x + y ≤ 80
x ≤ 40
x, y ≥ 0

The new Lagrangian is

L = 3x + 2y + λ1(2x + y + μ1² − 100) + λ2(x + y + μ2² − 80) + λ3(x + μ3² − 40).

Add the new constraint to the graph.


FIGURE 4.20: Contours of /(x) with Three Constraints

A summary of the graphical interpretation is displayed in Table 4.8. The optimal solution is found using Case V. Again, the computational solution merely looks for the point where all the necessary conditions are met or are not violated. The geometric interpretation reinforces why the cases other than V do not yield an optimal solution.

TABLE 4.8: The Eight Cases

Case  | Condition Imposed                                    | Point  | Feasible | Optimal
I.    | λ1 = 0, λ2 = 0, λ3 = 0 (all constraints have slack)  | P1     | yes      | no
II.   | λ1 ≠ 0, λ2 = 0, λ3 = 0 (on constraint 1, not 2 or 3) | P2, P4 | no/no    | no/no
III.  | λ1 = 0, λ2 ≠ 0, λ3 = 0 (on constraint 2, not 1 or 3) | P3, P5 | yes/no   | no/no
IV.   | λ1 = 0, λ2 = 0, λ3 ≠ 0 (on constraint 3, not 1 or 2) | P7     | yes      | no
V.    | λ1 ≠ 0, λ2 ≠ 0, λ3 = 0 (on constraints 1 & 2, not 3) | P6     | yes      | yes
VI.   | λ1 ≠ 0, λ2 = 0, λ3 ≠ 0 (on constraints 1 & 3, not 2) | P8     | yes      | no
VII.  | λ1 = 0, λ2 ≠ 0, λ3 ≠ 0 (on constraints 2 & 3, not 1) | P9     | no       | no
VIII. | λ1 ≠ 0, λ2 ≠ 0, λ3 ≠ 0 (on constraints 1, 2, and 3)  | none   | n/a      | n/a

The optimal solution will be found only in Case V, which geometrically shows that the solution is binding on Constraints 1 and 2, and not binding on Constraint 3 (slack still exists). The optimal solution found computationally (as in Case IV of the previous example) is x* = 20, y* = 60 with f(x*, y*) = 180, the same as before. Constraint 3 did not alter the solution as it is nonbinding; i.e., it has slack in the solution. The "detailed solution" adds λ3 = 0 and μ3² = 20.

The geometric interpretation takes the mystery out of the case-wise solutions. We can see visually why each specific case can or cannot achieve the optimality conditions. Whenever possible, make a quick graph, and analyze the graph to eliminate as many cases as possible before doing the computational solution procedures. Let's apply this procedure to another example.

Example 4.25. Geometric Constrained Nonlinear Optimization Problem.

Minimize f(x, y) = (x − 14)² + (y − 11)²

subject to

(x − 11)² + (y − 13)² ≤ 49
x + y ≤ 19

Use Maple to generate contour plots overlaid with the constraints to obtain the geometrical interpretation shown in the worksheet below. The optimal solution, as visually shown, is the point where a level curve of the objective function is tangent to the constraint line x + y = 19. The solution satisfies the other constraint (x − 11)² + (y − 13)² ≤ 49, but there is slack in this constraint. The solution corresponds to the case where Constraint 2 is binding and Constraint 1 is nonbinding. That Constraint 1 is nonbinding and Constraint 2 is binding is shown computationally by λ1 = 0 with μ1² > 0, and μ2 = 0 with λ2 ≠ 0.

Finish by estimating the solution in the plot below.

We use the fact that Constraint 1 is nonbinding and Constraint 2 is binding to directly solve this case to find the optimal solution. Graphically, we can obtain a good approximation, but we cannot obtain the shadow prices, which are invaluable in sensitivity analysis. In this case, we saw that λ1 = 0 while λ2 ≠ 0.

The necessary conditions for this case are

2(x − 14) + 2λ1(x − 11) + λ2 = 0
2(y − 11) + 2λ1(y − 13) + λ2 = 0
(x − 11)² + (y − 13)² + μ1² − 49 = 0
x + y + μ2² − 19 = 0

with λ1 = 0 and μ2 = 0 imposed.

Let’s use Maple to find the optimal point and the shadow prices.

We've defined f and the constraints above, so start by loading the Student[MultivariateCalculus] package to make Gradient available.

We'll add the case conditions λ1 = 0 and μ2 = 0 to the necessary conditions' system of equations.

For solving this system, we'll use some of Maple's "cleverness": using RealDomain will ignore complex solutions to our system, and we'll solve for μ1² rather than μ1.
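A minimal sketch of these steps, assuming the problem as stated above (mu1sq and mu2sq are our names for μ1² and μ2²; the worksheet builds the same equations with Gradient, while plain diff is used here):

    f := (x - 14)^2 + (y - 11)^2:
    g1 := (x - 11)^2 + (y - 13)^2:   g2 := x + y:
    L := f + lambda1*(g1 + mu1sq - 49) + lambda2*(g2 + mu2sq - 19):
    # Impose the case: lambda1 = 0 (Constraint 1 nonbinding), mu2sq = 0 (binding)
    sys := eval({diff(L, x) = 0, diff(L, y) = 0,
                 g1 + mu1sq - 49 = 0, g2 + mu2sq - 19 = 0},
                {lambda1 = 0, mu2sq = 0}):
    RealDomain:-solve(sys, {x, y, lambda2, mu1sq});
    # returns {lambda2 = 6, mu1sq = 24, x = 11, y = 8}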

We determine the optimal solution that satisfies the conditions is: x* = 11, y* = 8, λ1 = 0, λ2 = 6, μ1² = 24, and μ2 = 0. The value of the objective function is f(x*, y*) = 18.

Interpreting the shadow prices shows that if there are more resources for Constraint 2, our objective function will decrease. If we add Δ to the right-hand side of Constraint 2, the objective function value will decrease by approximately 6Δ. If we change the right-hand side of the constraint from 19 to 20, the optimal solution becomes x* = 11.5, y* = 8.5, and f(x*, y*) = 12.5, a decrease of 5.5 units (verification is left as an exercise).

It is also possible to use the LagrangeMultipliers function from the Student[MultivariateCalculus] package to get extensive information. To get the extra analysis from Maple:

  • write the constraints as they appear in the Lagrangian,
  • write the constants in the constraints as decimals, and
  • list the λ_i and μ_i as variables.

In order to make the output more readable, round the decimals to four digits and increase the size of matrix that Maple will display. (The display here is smaller and reduced to three digits to fit in the text; execute the code to see the full solution display.)

The first four lines displayed have μ2 complex; we discard those, as they are not feasible. The fifth line has μ1² negative; this is also not feasible. The sixth line corresponds to the solution we've found. The remaining lines (not shown here) have λ1 nonzero. Note that Maple's λs have signs opposite to ours.

We close this example by once again pointing out that the value of the objective function f(x*, y*) = 18 is a minimum because f is convex, the g_i are convex, and the λ_i are nonnegative at (x*, y*). The shadow price for Constraint 2 is λ2 = 6, and the slack in Constraint 1 is μ1 = 4.9.

Necessary and Sufficient Conditions for Computational KTC

Visual interpretation from graphs can significantly reduce the amount of work required to solve the problem. Interpreting the plot can provide the conditions in force at the optimal point, so we can then solve directly for that point. However, a graphical interpretation often cannot be obtained, and then we must rely on the computational method alone. When this occurs, we must solve all the cases and interpret the results.

Let's redo the previous example without any graphical analysis.

Example 4.26. Computational Constrained Nonlinear Optimization.

The Lagrangian is

L = (x − 14)² + (y − 11)² + λ1((x − 11)² + (y − 13)² + μ1² − 49) + λ2(x + y + μ2² − 19).

Therefore, the necessary conditions are

2(x − 14) + 2λ1(x − 11) + λ2 = 0
2(y − 11) + 2λ1(y − 13) + λ2 = 0
(x − 11)² + (y − 13)² + μ1² − 49 = 0
x + y + μ2² − 19 = 0
2λ1μ1 = 0
2λ2μ2 = 0

Define the functions in Maple, then investigate the cases for λ_i and μ_i being zero or nonzero. We'll use decimals for the constants to force floating-point arithmetic to simplify the calculations. Remember to load the Student[MultivariateCalculus] package to make the Gradient function available.

In each of the cases, we will solve for the variables μ_i² rather than μ_i to reduce complexity. We'll also use RealDomain:-solve to avoid complex solutions. Defining the list of variables as vars reduces typing and simplifies the command structure, making it more readable.
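A template along those lines (names are ours; each case is an eval substitution into the same system):

    f := (x - 14.)^2 + (y - 11.)^2:
    g1 := (x - 11.)^2 + (y - 13.)^2:   g2 := x + y:
    L := f + lambda1*(g1 + mu1sq - 49.) + lambda2*(g2 + mu2sq - 19.):
    NC := {diff(L, x) = 0, diff(L, y) = 0,
           g1 + mu1sq - 49. = 0, g2 + mu2sq - 19. = 0}:
    vars := {x, y, lambda1, lambda2, mu1sq, mu2sq}:
    # Case 1: lambda1 = 0 and lambda2 = 0
    RealDomain:-solve(eval(NC, {lambda1 = 0, lambda2 = 0}),
                      vars minus {lambda1, lambda2});
    # gives x = 14., y = 11., mu1sq = 36., mu2sq = -6.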

Case 1. λ1 = 0 and λ2 = 0. Then μ1 ≠ 0 and μ2 ≠ 0.

This case is infeasible since μ2² = −6, which indicates that Constraint 2, g2(x, y) ≤ 19, is violated. It would have been easy to solve this necessary conditions system by hand: the first two necessary conditions give x = 14 and y = 11 directly, then substituting these values for x and y in the two constraints easily gives μ1² = 36 and μ2² = −6. Using Maple, however, provides a template for solving all four cases.

Case 2. λ1 = 0 and λ2 ≠ 0. Then μ1 ≠ 0 and μ2 = 0.

All the necessary conditions are satisfied at the point (x*, y*) = (11, 8), where λ1 = 0, λ2 = 6, μ1² = 24, and μ2² = 0. (Check this!) The value of f at this point is 18. It is left as an exercise to show that (11, 8) is an optimal point.

Case 3. λ1 ≠ 0 and λ2 = 0. Then μ1 = 0 and μ2 ≠ 0.

This case is also infeasible, as μ2² is negative in both instances, which indicates that Constraint 2, g2, is violated.

Case 4. λ1 ≠ 0 and λ2 ≠ 0. Then μ1 = 0 and μ2 = 0.

This case fails as well. The functional values are f(12.77, 6.228) = 24.284 and f(4.228, 14.77) = 109.720. These are not optimal values because they do not satisfy the sufficient (sign) conditions on the λ_i for a relative minimum. Show that f(4.228, 14.77) is a relative maximum. How does this happen?

Use Figure 4.21, which shows the contours of f, the constraints, and the six points found in the cases above, to geometrically explain why each point appeared as a potential solution.


FIGURE 4.21: Infeasible Solutions and the Real Solution

Example 4.27. Computational Constrained Nonlinear Optimization Redux.

We'll use the template we developed in the previous example: choose one of the four cases for λ1 and λ2, substitute these values into the necessary conditions' system of equations, solve the system (using μ_i² as a variable), and analyze the solution.

We’re ready to begin working through the four cases.

The point (2, 3) is not a solution; μ1² and μ2² are both negative, so both constraints are violated.

All necessary conditions are satisfied. Since the objective function is convex, the constraints are linear (and therefore convex), and the λ_i for all binding constraints are positive (λ2 = 2), the sufficient conditions are met. This solution is the only one that satisfies all the necessary and the sufficient conditions. Thus (2, 3) is the optimal solution.

The point (1.667, 2.333) is not an optimal solution, as μ2² is negative, violating Constraint 2.

Case IV yields the same solution as Case II.

Use Figure 4.22, which shows the contours of f, the constraints, and the points found in the cases above, to geometrically explain why each point appeared as a potential solution.


FIGURE 4.22: Infeasible Solutions and the Real Solution Redux

Applications Using the Kuhn-Tucker Conditions Method

Example 4.28. Maximizing Profit from Perfume Manufacturing.

A company manufactures perfumes. They can purchase up to 1,925 oz of the main chemical ingredient at $10 per oz. An ounce of the chemical can produce an ounce of Perfume #1 with a processing cost of $3 per oz. On the other hand, the chemical can produce an ounce of the higher priced Perfume #2 with a processing cost of $5 per oz. The Analytics department used historical data to estimate that Perfume #1 will sell for $(30 − 0.01x) per ounce if x ounces are manufactured, and Perfume #2 can sell for $(50 − 0.02y) per ounce if y ounces are manufactured. The company wants to maximize profits.

Model Formulation. Let x and y be the ounces of Perfume #1 and Perfume #2 produced, and let z be the ounces of the chemical purchased. Then the profit to maximize is

P(x, y, z) = x(30 − 0.01x) + y(50 − 0.02y) − 3x − 5y − 10z

subject to

x + y ≤ z (Constraint 1: production cannot exceed the chemical purchased)
z ≤ 1925 (Constraint 2)
x, y, z ≥ 0.

SOLUTION. Set up the model, define the Lagrangian function L, and use the techniques from the previous examples.
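One way to set this up, following the same template as before (names are ours; Case III is shown, which turns out to be the candidate):

    P := x*(30 - 0.01*x) + y*(50 - 0.02*y) - 3*x - 5*y - 10*z:
    L := P + lambda1*(x + y - z + mu1sq) + lambda2*(z + mu2sq - 1925):
    NC := {diff(L, x) = 0, diff(L, y) = 0, diff(L, z) = 0,
           x + y - z + mu1sq = 0, z + mu2sq - 1925 = 0}:
    # Case III: lambda2 = 0 and lambda1 <> 0, so mu1sq = 0
    solve(eval(NC, {lambda2 = 0, mu1sq = 0}), {x, y, z, lambda1, mu2sq});
    # gives x = 850., y = 875., z = 1725., lambda1 = -10., mu2sq = 200.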

Once again, we’re ready to begin working through the four cases. A summary of the results from Maple is shown in Table 4.9.

TABLE 4.9: Perfume Application’s Four Cases

Case |    x   |    y   |   z  |   λ1   |  λ2   |  μ1²  | μ2²
I.   |    -   |    -   |   -  |   0    |   0   |   -   |  -
II.  |  1350  |  1125  | 1925 |   0    |  10   | -550  |  0
III. |   850  |   875  | 1725 |  -10   |   0   |   0   | 200
IV.  | 983.33 | 941.67 | 1925 | -7.333 | 2.667 |   0   |  0

Remarks:

Case I. ∂L/∂z = 0 becomes −10 = 0; this case is infeasible.

Case II. μ1² is negative, violating Constraint 1.

Case III. λ1 < 0 and μ2² > 0: candidate solution.

Case IV. The λ_i have different signs, so this is not an optimal point.

We have a concave profit function P, linear constraints, and λ_i (λ1 = −10) negative for all binding constraints. Thus, we have met the sufficient conditions for the point (850, 875) to be the optimal solution. The optimal manufacturing strategy is to purchase z = 1725 ounces of the chemical and produce x = 850 ounces of Perfume #1 and y = 875 ounces of Perfume #2, yielding a profit of P = $22,537.50.

Consider the significance of the shadow price for λ1. How do we interpret the shadow price in terms of this scenario? If we could obtain an extra ounce (Δ = 1) of the main chemical at no cost, it would improve the profit to about $22,537.50 + $10. What would be the largest cost for an extra ounce of the main chemical that would still yield a higher profit?

Example 4.29. Minimum Variance of Expected Investment Returns.

A new company has $5,000 to invest to generate funds for a planned project; the company needs to earn about 12% interest. A stock expert has suggested three mutual funds, A, B, and C, in which the company could invest. Based upon previous years' returns, these funds appear relatively stable. The expected return, variance of the return, and covariance between funds are shown in Table 4.10 below.

TABLE 4.10: Mutual Fund Investment Data

                  A      B      C
Expected Value   0.14   0.11   0.10
Variance         0.20   0.08   0.18

                  AB     AC     BC
Covariance       0.05   0.02   0.03

We use laws of expected value, variance, and covariance in our model.

Model Formulation. Let x_j be the number of dollars invested in fund j for j = 1, 2, 3, representing Funds A, B, and C. Our objective is to minimize the variance of the investment. Using Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y),

V = 0.2x1² + 0.08x2² + 0.18x3² + 2(0.05x1x2 + 0.02x1x3 + 0.03x2x3).

Our constraints are: g1, the expected return is at least 12% of $5000, that is, $600; and g2, the sum of the investments is no more than $5000. We have the NLP

Minimize V(x1, x2, x3)
subject to
0.14x1 + 0.11x2 + 0.10x3 ≥ 600
x1 + x2 + x3 ≤ 5000.

Solution. Set up the Lagrangian function L:

L = V + λ1(0.14x1 + 0.11x2 + 0.10x3 − μ1² − 600) + λ2(x1 + x2 + x3 + μ2² − 5000).

(Why is μ1² subtracted rather than added?)

Define the functions for the model and calculate the necessary conditions.

Notice that we have again left μ1 and μ2 out of the list of variables; the complementary slackness conditions are taken care of by considering our standard four cases (4 = 2^(number of constraints)).

Let Maple do the work.
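A sketch of the setup, assuming the variance expansion above (with the doubled covariance cross-terms); the values in the final comment follow from that assumption:

    V := 0.2*x1^2 + 0.08*x2^2 + 0.18*x3^2
         + 2*(0.05*x1*x2 + 0.02*x1*x3 + 0.03*x2*x3):
    L := V + lambda1*(0.14*x1 + 0.11*x2 + 0.10*x3 - mu1sq - 600)
           + lambda2*(x1 + x2 + x3 + mu2sq - 5000):
    NC := {diff(L, x1) = 0, diff(L, x2) = 0, diff(L, x3) = 0,
           0.14*x1 + 0.11*x2 + 0.10*x3 - mu1sq - 600 = 0,
           x1 + x2 + x3 + mu2sq - 5000 = 0}:
    # Case with both constraints binding: mu1sq = 0 and mu2sq = 0
    solve(eval(NC, {mu1sq = 0, mu2sq = 0}), {x1, x2, x3, lambda1, lambda2});
    # under our variance assumption: x1 ~ 1904.76, x2 ~ 2380.95, x3 ~ 714.29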

The output would have overflowed the page and would be very hard to read, so we'll use some "Maple Magic" to put it in an easier-to-handle form.

Inspect the results to see that only the first row has a feasible solution; the rest have a negative μ_i² (imaginary μ_i). Notice that, in row 1, both λ_i ≠ 0, that is, both constraints are binding. Then the minimum-variance allocation is the point given in that row.

Check the Hessian.
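One way to carry out the check (Hessian is in Maple's VectorCalculus package and IsDefinite is in LinearAlgebra; V is as defined above):

    with(VectorCalculus): with(LinearAlgebra):
    H := Hessian(V, [x1, x2, x3]);
    IsDefinite(H, 'query' = 'positive_definite');  # true: V is convex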

The Hessian matrix H has all positive leading principal minors. Therefore, since H is positive definite everywhere, our solution is the optimal minimum. The expected return is 12.0%, found from (0.14x1* + 0.11x2* + 0.10x3*)/5000 = 0.12.

Exercises

  • 1. Solve the following constrained problems using the Kuhn-Tucker Conditions (KTC) approach.
  • (a) Minimize z = x² + y² subject to x + 2y = 4.
  • (b) Maximize z = (x − 3)² + (y − 2)² subject to x + 2y = 4.
  • (c) Maximize z = x² + 4xy + y² subject to x² + y² = 1.
  • (d) Maximize z = x² + 4xy + y² subject to x² + y² = 4 and x + 2y = 4.
  • 2. Maximize Z = 3X² + Y² + 2XY + 6X + 2Y subject to 2X − Y = 4.

Did you find the maximum? Explain.

3. Two manufacturing processes, f1 and f2, both use a resource with b units available. Maximize f1(x1) + f2(x2) subject to x1 + x2 = b.

If f1(x1) = 50 − (x1 − 2)² and f2(x2) = 50 − (x2 − 2)², analyze the manufacturing processes using the KTC approach to

  • (a) determine the amounts x1 and x2 to use to maximize f1 + f2,
  • (b) determine the amount b of the resource to use.
  • 4. Use the Kuhn-Tucker Conditions to find the optimal solution to the following nonlinear problems.
  • (a) Maximize f(x, y) = −x² − y² + xy + 7x + 4y subject to 2x + 3y ≥ 10 and −5x + 12y ≤ 20.
  • (b) Minimize f(x, y) = 2x + xy + 3y subject to x² + y ≥ 3 and 2.5 − 0.5x − y ≤ 0.
  • (c) Minimize f(x, y) = 2x + xy + 3y subject to x² + y ≥ 3, x + 0.5 ≥ 0, and y ≥ 0.
  • (d) Maximize f(x, y) = −(x − 0.4)²(y − 5)² subject to −x + 2y ≤ 4, x² + y² ≤ 14, and x, y ≥ 0.
  • 5. Minimize z = x² + y² subject to
  •    2x + y ≤ 100 and x + y ≤ 80.
  • 6. Maximize z = −(x − 4)² + xy − (y − 4) subject to
  •    2x + 3y ≤ 18 and 2x + y ≤ 8.

Projects

Project 4.1. A newspaper publisher must purchase three types of paper stock. The publisher must meet demand, but desires to minimize costs in the process. An Economic Lot Size model is chosen to assist in the decisions: an Economic Order Quantity (EOQ) model with constraints, where the total cost is the sum of the individual quantity costs, C(Q1, Q2, Q3) = C(Q1) + C(Q2) + C(Q3), with the symbols defined in Table 4.11.

The constraint is the amount of storage area available to the publisher for storing the three kinds of paper. The items cannot be stacked, but can be laid side by side. The available storage area is S = 200 sq ft.

Table 4.11 shows data that has been collected on the publisher’s usage and costs.

TABLE 4.11: Paper Usage and Cost Data

      | Type I | Type II | Type III | units
d_i   |   32   |   24    |    20    | rolls/week
a_i   |   25   |   18    |    20    | dollars
K_i   |  1.00  |  1.50   |   2.00   | dollars
s_i   |   4    |    3    |     2    | sq ft/roll

Required.

  • (a) Find the paper quantities that give the unconstrained minimum total cost, and show that these values would not satisfy the constraint. What purpose do these values serve?
  • (b) Find the constrained optimal solution by using Lagrange Multipliers assuming all 200 sq feet is used.
  • (c) Determine and interpret the shadow prices.

Project 4.2. In the tank storage problem, Example 4.22 (pg. 175), determine whether it is better to have cylindrical storage tanks or rectangular storage tanks of 50 cubic units.

Project 4.3. Use the Cobb-Douglas function P(L, K) = c·L^a·K^b, where L is labor and K is capital, to predict output in thousands based upon the amount of labor and capital used. Suppose the prices of capital and labor per year are $10,000 and $7,000, respectively. The company estimates the values c = 1.2, a = 0.3, and b = 0.6. Total cost is assumed to be T = P_L·L + P_K·K, where P_L and P_K are the prices of labor and capital. There are three possible funding levels: $63,940, $55,060, or $71,510. Determine which budget yields the best solution for the company. Interpret the Lagrange multiplier.

References and Further Reading

[BSS2013] Mokhtar S. Bazaraa, Hanif D. Sherali, and Chitharanjan M. Shetty, Nonlinear Programming: Theory and Algorithms, Wiley, 2013.

[EK1988] J. G. Ecker and M. Kupferschmid, Introduction to Operations Research, John Wiley & Sons, 1988.

[HL2009] Frederick S. Hillier and Gerald J. Lieberman, Introduction to Operations Research, 9th ed., McGraw-Hill, 2009.

[W2002] W. L. Winston, Introduction to Mathematical Programming: Applications and Algorithms, 4th ed., Duxbury Press, 2002.


  • [1] First formulated in H. W. Kuhn and A. W. Tucker, "Nonlinear Programming," Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, 1951, pp. 481-492.
 