13.10: Lagrange Multipliers


    Solving optimization problems for functions of two or more variables can be similar to solving such problems in single-variable calculus. However, techniques for dealing with multiple variables allow us to solve more varied optimization problems for which we need to deal with additional conditions or constraints. In this section, we examine one of the more common and useful methods for solving optimization problems with constraints.

    Lagrange Multipliers

    In the previous section, an applied situation was explored involving maximizing a profit function, subject to certain constraints. In that example, the constraints involved a maximum number of golf balls that could be produced and sold in \(1\) month \((x),\) and a maximum number of advertising hours that could be purchased per month \((y)\). Suppose these were combined into a single budgetary constraint, such as \(20x+4y≤216\), that took into account both the cost of producing the golf balls and the number of advertising hours purchased per month. The goal is still to maximize profit, but now there is a different type of constraint on the values of \(x\) and \(y\). This constraint and the corresponding profit function

    \[f(x,y)=48x+96y−x^2−2xy−9y^2 \nonumber\]

together form an example of an optimization problem, and the function \(f(x,y)\) is called the objective function. A graph of various level curves of the function \(f(x,y)\) follows.

Figure \(\PageIndex{1}\): Level curves of the profit function \(f(x,y)=48x+96y−x^2−2xy−9y^2\).

In Figure \(\PageIndex{1}\), the value \(c\) represents different profit levels (i.e., values of the function \(f\)). As the value of \(c\) increases, the curve shifts to the right. Since our goal is to maximize profit, we want to choose a curve as far to the right as possible. If there were no restrictions on the number of golf balls the company could produce or the number of units of advertising available, then we could produce as many golf balls as we want, and advertise as much as we want, and there would be no maximum profit for the company. Unfortunately, we have a budgetary constraint that is modeled by the inequality \(20x+4y≤216.\) To see how this constraint interacts with the profit function, Figure \(\PageIndex{2}\) shows the graph of the line \(20x+4y=216\) superimposed on the previous graph.

Figure \(\PageIndex{2}\): The budget line \(20x+4y=216\) superimposed on the level curves of \(f(x,y)\).

As mentioned previously, the maximum profit occurs when the level curve is as far to the right as possible. However, the level of production corresponding to this maximum profit must also satisfy the budgetary constraint, so the point at which this profit occurs must also lie on (or to the left of) the red line in Figure \(\PageIndex{2}\). Inspection of this graph reveals that this point exists where the line is tangent to a level curve of \(f\). Trial and error on the graph suggests that this maximum profit is somewhat more than \(500\), occurring when \(x\) is near \(10\) and \(y\) is near \(4\). We return to the solution of this problem later in this section. From a theoretical standpoint, at the point where the profit curve is tangent to the constraint line, the gradients of the two functions, evaluated at that point, must point in the same (or opposite) direction. Recall that the gradient of a function of more than one variable is a vector. If two nonzero vectors point in the same (or opposite) direction, then one must be a constant multiple of the other. This idea is the basis of the method of Lagrange multipliers.
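To see this numerically (our own sketch, not part of the original text), we can evaluate both gradients at the optimal production level \((10,4)\) found in Example \(\PageIndex{2}\) later in this section. With the constraint written as \(g(x,y)=20x+4y=216\), the two gradients come out as the same vector, so \(\vecs ∇f=λ\vecs ∇g\) with \(λ=1\); Example \(\PageIndex{2}\) works with the equivalent form \(5x+y=54\), for which \(λ=4\).

```python
# Numeric illustration of the tangency condition at the point (10, 4), the
# optimal production level found in Example 2 below.
def grad_f(x, y):
    # gradient of the profit f(x, y) = 48x + 96y - x^2 - 2xy - 9y^2
    return (48 - 2*x - 2*y, 96 - 2*x - 18*y)

def grad_g(x, y):
    # gradient of the budget function g(x, y) = 20x + 4y
    return (20, 4)

print(grad_f(10, 4))   # (20, 4)
print(grad_g(10, 4))   # (20, 4) -> parallel gradients; here lambda = 1
```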

    Method of Lagrange Multipliers: One Constraint

Theorem \(\PageIndex{1}\): Let \(f\) and \(g\) be functions of two variables with continuous partial derivatives at every point of some open set containing the smooth curve \(g(x,y)=k\), where \(k\) is a constant. Suppose that \(f\), when restricted to points on the curve \(g(x,y)=k\), has a local extremum at the point \((x_0,y_0)\) and that \(\vecs ∇g(x_0,y_0)≠\vecs 0\). Then there is a number \(λ\), called a Lagrange multiplier, for which

    \[\vecs ∇f(x_0,y_0)=λ\vecs ∇g(x_0,y_0).\]

    Proof

    Assume that a constrained extremum occurs at the point \((x_0,y_0).\) Furthermore, we assume that the equation \(g(x,y)=k\) can be smoothly parameterized as

    \(x=x(s) \; \text{and}\; y=y(s)\)

    where \(s\) is an arc length parameter with reference point \((x_0,y_0)\) at \(s=0\). Therefore, the quantity \(z=f(x(s),y(s))\) has a relative maximum or relative minimum at \(s=0\), and this implies that \(\dfrac{dz}{ds}=0\) at that point. From the chain rule,

\[\begin{align*} \dfrac{dz}{ds} &=\dfrac{∂f}{∂x}⋅\dfrac{dx}{ds}+\dfrac{∂f}{∂y}⋅\dfrac{dy}{ds} \\[5pt] &=\left(\dfrac{∂f}{∂x}\hat{\mathbf i}+\dfrac{∂f}{∂y}\hat{\mathbf j}\right)⋅\left(\dfrac{dx}{ds}\hat{\mathbf i}+\dfrac{dy}{ds}\hat{\mathbf j}\right)\\[5pt] &=0, \end{align*}\]

where the derivatives are all evaluated at \(s=0\). However, the first factor in the dot product is the gradient of \(f\), and the second factor is the unit tangent vector \(\vecs{\mathbf T}(0)\) to the constraint curve. Since the point \((x_0,y_0)\) corresponds to \(s=0\), it follows from this equation that

    \[\vecs ∇f(x_0,y_0)⋅\vecs{\mathbf T}(0)=0, \nonumber\]

which implies that the gradient \(\vecs ∇f(x_0,y_0)\) is either the zero vector \(\vecs 0\) or it is normal to the constraint curve at the constrained relative extremum. However, the constraint curve \(g(x,y)=k\) is a level curve for the function \(g(x,y)\), so that if \(\vecs ∇g(x_0,y_0)≠\vecs 0\), then \(\vecs ∇g(x_0,y_0)\) is normal to this curve at \((x_0,y_0)\). It follows, then, that there is some scalar \(λ\) such that

    \[\vecs ∇f(x_0,y_0)=λ\vecs ∇g(x_0,y_0). \nonumber\]

    \(\square\)

    To apply Theorem \(\PageIndex{1}\) to an optimization problem similar to that for the golf ball manufacturer, we need a problem-solving strategy.

    Problem-Solving Strategy: Steps for Using Lagrange Multipliers

    1. Determine the objective function \(f(x,y)\) and the constraint function \(g(x,y).\) Does the optimization problem involve maximizing or minimizing the objective function?
2. Set up a system of equations using the following template: \[\begin{align*} \vecs ∇f(x,y) &=λ\vecs ∇g(x,y) \\[5pt] g(x,y)&=k. \end{align*}\]
    3. Solve for \(x\) and \(y\) to determine the Lagrange points, i.e., points that satisfy the Lagrange multiplier equation.
    4. If the objective function is continuous on the constraint and the constraint is a closed curve (like a circle or an ellipse), then the largest of the values of \(f\) at the solutions found in step \(3\) maximizes \(f\), subject to the constraint; the smallest of those values minimizes \(f\), subject to the constraint.

But in other cases, we need to evaluate the objective function \(f\) at points from the constraint on either side of each Lagrange point to determine whether we have obtained a relative maximum or a relative minimum (a symbolic sketch of these steps follows this list).
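The computations in steps 2 and 3 can be carried out symbolically. Below is a minimal sketch using SymPy; the helper name lagrange_points and the sample problem (extremizing \(f(x,y)=xy\) on the line \(x+y=10\)) are our own illustration, not part of the original text.

```python
# A minimal sketch of steps 2 and 3: solve grad f = lambda * grad g together
# with the constraint g(x, y) = k. The helper name and the sample problem
# below are illustrative only.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

def lagrange_points(f, g, k):
    """Return solutions (x, y, lambda) of grad f = lambda * grad g, g = k."""
    eqs = [sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),   # i-component
           sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),   # j-component
           sp.Eq(g, k)]                                 # the constraint itself
    return sp.solve(eqs, (x, y, lam), dict=True)

# Hypothetical instance: extremize f(x, y) = xy subject to x + y = 10.
sols = lagrange_points(x*y, x + y, 10)
print(sols)                             # e.g. [{x: 5, y: 5, lambda: 5}]
print([(x*y).subs(s) for s in sols])    # value of f at each Lagrange point -> [25]
```

Step 4 then amounts to evaluating \(f\) at the returned Lagrange points and, when the constraint is not a closed curve, at nearby points on the constraint, in order to classify each one.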

Note that it is possible that our objective function will not have a relative maximum or a relative minimum at a given Lagrange point. This can occur in a couple of situations, but most often when the Lagrange point is also a critical point of the objective function, giving us a saddle point. Most of the time we will still get a relative extremum at a saddle point subject to a constraint, but sometimes we will not. See Figure \(\PageIndex{3}\) for an example of this case.

    Figure \(\PageIndex{3}\): Graph of \(f(x,y)=x^2-y^3\) along with the constraint \((x-1)^2 + y^2 = 1\). Note that there is no relative extremum at \((0,0)\), although this point will satisfy the Lagrange Multiplier equation with \(\lambda=0\).

    Example \(\PageIndex{1}\): Using Lagrange Multipliers

    Use the method of Lagrange multipliers to find the minimum value of \(f(x,y)=x^2+4y^2−2x+8y\) subject to the constraint \(x+2y=7.\)

    Solution

    Let’s follow the problem-solving strategy:

    1. The objective function is \(f(x,y)=x^2+4y^2−2x+8y.\) The constraint function is equal to the left-hand side of the constraint equation when only a constant is on the right-hand side. So here \(g(x,y)=x+2y\). The problem asks us to solve for the minimum value of \(f\), subject to the constraint (Figure \(\PageIndex{4}\)).

Figure \(\PageIndex{4}\): Level curves of \(f(x,y)=x^2+4y^2−2x+8y\) together with the constraint line \(x+2y=7\).

    2. We then must calculate the gradients of both \(f\) and \(g\):

\[\begin{align*} \vecs \nabla f \left( x, y \right) &= \left( 2x - 2 \right) \hat{\mathbf{i}} + \left( 8y + 8\right) \hat{\mathbf{j}} \\[5pt] \vecs \nabla g \left( x, y \right) &= \hat{\mathbf{i}} + 2 \hat{\mathbf{j}}. \end{align*}\]

    The equation \(\vecs \nabla f \left( x, y \right) = \lambda \vecs \nabla g \left( x, y \right)\) becomes

    \[\left( 2 x - 2 \right) \hat{\mathbf{i}} + \left( 8 y + 8\right) \hat{\mathbf{j}} = \lambda \left( \hat{\mathbf{i}} + 2 \hat{\mathbf{j}} \right),\]

    which can be rewritten as

    \[\left( 2 x - 2 \right) \hat{\mathbf{i}} + \left( 8 y + 8 \right) \hat{\mathbf{j}} = \lambda \hat{\mathbf{i}} + 2 \lambda \hat{\mathbf{j}}.\]

    Next, we set the coefficients of \(\hat{\mathbf{i}}\) and \(\hat{\mathbf{j}}\) equal to each other:

    \[\begin{align} 2 x - 2 &= \lambda \\ 8 y + 8 &= 2 \lambda. \end{align}\]

    The equation \(g \left( x, y \right) = k\) becomes \(x + 2 y = 7 \). Therefore, the system of equations that needs to be solved is

    \[\begin{align} 2 x - 2 &= \lambda \\ 8 y + 8 &= 2 \lambda \\ x + 2 y &= 7. \end{align}\]

    3. This is a linear system of three equations in three variables. We start by solving the second equation for \(λ\) and substituting it into the first equation. This gives \(λ=4y+4\), so substituting this into the first equation gives \[2x−2=4y+4.\nonumber\] Solving this equation for \(x\) gives \(x=2y+3\). We then substitute this into the third equation: \[\begin{align*} (2y+3)+2y&=7 \\[5pt]4y&=4 \\[5pt]y&=1. \end{align*}\] Since \(x=2y+3,\) this gives \(x=5.\)

4. Next, we evaluate \(f(x,y)=x^2+4y^2−2x+8y\) at the point \((5,1)\): \[f(5,1)=5^2+4(1)^2−2(5)+8(1)=27.\] To ensure this corresponds to a minimum value of \(f\) on the constraint, let’s try some other points on the constraint from either side of the point \((5,1)\), such as the intercepts of the constraint line \(x+2y=7\), which are \((7,0)\) and \((0,3.5)\).

    We get \(f(7,0)=35 \gt 27\) and \(f(0,3.5)=77 \gt 27\).

    So it appears that \(f\) has a relative minimum of \(27\) at \((5,1)\), subject to the given constraint.
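As a cross-check (our own sketch, not part of the original solution), the same system can be handed to SymPy's symbolic solver; it returns the Lagrange point \((5,1)\) with \(λ=8\) and confirms the values of \(f\) used above.

```python
# Check of Example 1: solve 2x - 2 = lambda, 8y + 8 = 2*lambda, x + 2y = 7.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x**2 + 4*y**2 - 2*x + 8*y
g = x + 2*y

sol = sp.solve([sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),
                sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),
                sp.Eq(g, 7)],
               (x, y, lam), dict=True)

print(sol)               # [{x: 5, y: 1, lambda: 8}]
print(f.subs(sol[0]))    # 27
# Nearby points on the constraint give larger values, consistent with a minimum:
print(f.subs({x: 7, y: 0}), f.subs({x: 0, y: sp.Rational(7, 2)}))   # 35 77
```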

    Exercise \(\PageIndex{1}\)

    Use the method of Lagrange multipliers to find the maximum value of

    \[f(x,y)=9x^2+36xy−4y^2−18x−8y \nonumber\]

    subject to the constraint \(3x+4y=32.\)

    Hint

    Use the problem-solving strategy for the method of Lagrange multipliers.

    Answer

    Subject to the given constraint, \(f\) has a maximum value of \(976\) at the point \((8,2)\).
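A quick arithmetic check of this answer (our own sketch; the comparison points are the intercepts of the constraint line \(3x+4y=32\)):

```python
# Check the stated answer to Exercise 1: (8, 2) lies on the constraint and gives
# f = 976; the intercepts of the constraint line give smaller values of f.
f = lambda x, y: 9*x**2 + 36*x*y - 4*y**2 - 18*x - 8*y
print(3*8 + 4*2)             # 32  -> (8, 2) satisfies 3x + 4y = 32
print(f(8, 2))               # 976 -> the stated maximum value
print(f(32/3, 0), f(0, 8))   # about 832 and -320, both smaller than 976
```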

    Let’s now return to the problem posed at the beginning of the section.

    Example \(\PageIndex{2}\): Golf Balls and Lagrange Multipliers

The golf ball manufacturer, Pro-T, has developed a profit model that depends on the number \(x\) of golf balls sold per month (measured in thousands), and the number \(y\) of hours of advertising purchased per month, according to the function

    \[z=f(x,y)=48x+96y−x^2−2xy−9y^2, \nonumber\]

where \(z\) is measured in thousands of dollars. The budgetary constraint relating the cost of producing thousands of golf balls and the number of advertising hours purchased per month is given by \(20x+4y=216.\) Find the values of \(x\) and \(y\) that maximize profit, and find the maximum profit.

    Solution:

    Again, we follow the problem-solving strategy:

1. The objective function is \(f(x,y)=48x+96y−x^2−2xy−9y^2.\) To determine the constraint function, we first divide both sides of the budgetary constraint \(20x+4y=216\) by \(4\), which gives \(5x+y=54.\) The constraint function is equal to the left-hand side, so \(g(x,y)=5x+y.\) The problem asks us to solve for the maximum value of \(f\), subject to this constraint.
2. So, we calculate the gradients of both \(f\) and \(g\): \[\begin{align*} \vecs ∇f(x,y)&=(48−2x−2y)\hat{\mathbf i}+(96−2x−18y)\hat{\mathbf j}\\[5pt]\vecs ∇g(x,y)&=5\hat{\mathbf i}+\hat{\mathbf j}. \end{align*}\] The equation \(\vecs ∇f(x,y)=λ\vecs ∇g(x,y)\) becomes \[(48−2x−2y)\hat{\mathbf i}+(96−2x−18y)\hat{\mathbf j}=λ(5\hat{\mathbf i}+\hat{\mathbf j}),\nonumber\] which can be rewritten as \[(48−2x−2y)\hat{\mathbf i}+(96−2x−18y)\hat{\mathbf j}=5λ\hat{\mathbf i}+λ\hat{\mathbf j}.\nonumber\] We then set the coefficients of \(\hat{\mathbf i}\) and \(\hat{\mathbf j}\) equal to each other: \[\begin{align*} 48−2x−2y&=5λ \\[5pt] 96−2x−18y&=λ. \end{align*}\] The equation \(g(x,y)=k\) becomes \(5x+y=54\). Therefore, the system of equations that needs to be solved is \[\begin{align*} 48−2x−2y&=5λ \\[5pt] 96−2x−18y&=λ \\[5pt]5x+y&=54. \end{align*}\]
    3. We use the left-hand side of the second equation to replace \(λ\) in the first equation: \[\begin{align*} 48−2x−2y&=5(96−2x−18y) \\[5pt]48−2x−2y&=480−10x−90y \\[5pt] 8x&=432−88y \\[5pt] x&=54−11y. \end{align*}\] Then we substitute this into the third equation: \[\begin{align*} 5(54−11y)+y&=54\\[5pt] 270−55y+y&=54\\[5pt]216&=54y \\[5pt]y&=4. \end{align*}\] Since \(x=54−11y,\) this gives \(x=10.\)
4. We then substitute \((10,4)\) into \(f(x,y)=48x+96y−x^2−2xy−9y^2,\) which gives \[\begin{align*} f(10,4)&=48(10)+96(4)−(10)^2−2(10)(4)−9(4)^2 \\[5pt] & =480+384−100−80−144=540.\end{align*}\] Therefore, the maximum profit that can be attained, subject to budgetary constraints, is \($540,000\) with a production level of \(10,000\) golf balls and \(4\) hours of advertising bought per month. Let’s check to make sure this truly is a maximum. The endpoints of the line that defines the constraint are \((10.8,0)\) and \((0,54)\). Let’s evaluate \(f\) at both of these points: \[\begin{align*} f(10.8,0)&=48(10.8)+96(0)−10.8^2−2(10.8)(0)−9(0^2) \\[5pt] &=401.76 \\[5pt] f(0,54)&=48(0)+96(54)−0^2−2(0)(54)−9(54^2) \\[5pt] &=−21,060. \end{align*}\] The second value represents a loss, since no golf balls are produced. Neither of these values exceeds \(540\), so it seems that our extremum is a maximum value of \(f\), subject to the given constraint.
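As a further check (our own sketch), the constraint can be eliminated entirely: substituting \(y=54−5x\) into the profit function leaves a single-variable function whose maximum can be found with ordinary calculus, and it agrees with the Lagrange solution.

```python
# Cross-check of Example 2: restrict the profit to the budget line y = 54 - 5x
# and maximize the resulting single-variable function.
import sympy as sp

x = sp.symbols('x', real=True)
y = 54 - 5*x                                   # from the constraint 5x + y = 54
profit = 48*x + 96*y - x**2 - 2*x*y - 9*y**2   # f restricted to the budget line

crit = sp.solve(sp.Eq(sp.diff(profit, x), 0), x)
print(crit)                      # [10]
print(y.subs(x, crit[0]))        # 4
print(profit.subs(x, crit[0]))   # 540
print(sp.diff(profit, x, 2))     # -432, negative, so x = 10 gives a maximum
```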

    Exercise \(\PageIndex{2}\): Optimizing the Cobb-Douglas function

    A company has determined that its production level is given by the Cobb-Douglas function \(f(x,y)=2.5x^{0.45}y^{0.55}\) where \(x\) represents the total number of labor hours in \(1\) year and \(y\) represents the total capital input for the company. Suppose \(1\) unit of labor costs \($40\) and \(1\) unit of capital costs \($50\). Use the method of Lagrange multipliers to find the maximum value of \(f(x,y)=2.5x^{0.45}y^{0.55}\) subject to a budgetary constraint of \($500,000\) per year.

    Hint

    Use the problem-solving strategy for the method of Lagrange multipliers.

    Answer:

Subject to the given constraint, a maximum production level of approximately \(13,890\) occurs with \(5625\) units of labor and \(5500\) units of capital.
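A numeric check of this answer (our own sketch; the initial guess supplied to nsolve is ours):

```python
# Check of the Cobb-Douglas exercise: solve grad f = lambda * grad g together
# with the budget 40x + 50y = 500000, starting from a rough initial guess.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', positive=True)
f = 2.5 * x**0.45 * y**0.55     # production level
g = 40*x + 50*y                 # yearly cost of labor and capital, in dollars

eqs = [sp.diff(f, x) - lam * sp.diff(g, x),
       sp.diff(f, y) - lam * sp.diff(g, y),
       g - 500000]

sol = sp.nsolve(eqs, (x, y, lam), (5000, 5000, 0.03))
print(sol[0], sol[1])                   # approximately 5625 and 5500
print(f.subs({x: sol[0], y: sol[1]}))   # approximately 13890
```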

In the case of an objective function with three variables and a single constraint function, it is possible to use the method of Lagrange multipliers to solve an optimization problem as well. An example of an objective function with three variables could be a Cobb-Douglas production function similar to the one in Exercise \(\PageIndex{2}\): \(f(x,y,z)=x^{0.2}y^{0.4}z^{0.4},\) where \(x\) represents the cost of labor, \(y\) represents capital input, and \(z\) represents the cost of advertising. The method is the same as for a function of two variables; the equations to be solved are

    \[\begin{align*} \vecs ∇f(x,y,z)&=λ\vecs ∇g(x,y,z) \\[5pt] g(x,y,z)&=k. \end{align*}\]

Example \(\PageIndex{3}\): Lagrange Multipliers with a Three-Variable Objective Function

Find the minimum value of the function \(f(x,y,z)=x^2+y^2+z^2\) subject to the constraint \(x+y+z=1.\)

    Solution:

1. The objective function is \(f(x,y,z)=x^2+y^2+z^2.\) The constraint equation \(x+y+z=1\) already has a constant alone on the right-hand side, so the constraint function is \(g(x,y,z)=x+y+z\) with \(k=1.\)

    2. Next, we calculate \(\vecs ∇f(x,y,z)\) and \(\vecs ∇g(x,y,z):\) \[\begin{align*} \vecs ∇f(x,y,z)&=⟨2x,2y,2z⟩ \\[5pt] \vecs ∇g(x,y,z)&=⟨1,1,1⟩. \end{align*}\] This leads to the equations \[\begin{align*} ⟨2x,2y,2z⟩&=λ⟨1,1,1⟩ \\[5pt] x+y+z&=1 \end{align*}\] which can be rewritten in the following form: \[\begin{align*} 2x&=λ\\[5pt]2y&=λ \\[5pt]2z&=λ \\[5pt]x+y+z&=1. \end{align*}\]

3. Since each of the first three equations has \(λ\) on the right-hand side, we know that \(2x=2y=2z\), so all three variables are equal to each other. Substituting \(y=x\) and \(z=x\) into the last equation yields \(3x=1,\) so \(x=\frac{1}{3}\), \(y=\frac{1}{3}\), and \(z=\frac{1}{3}\), which gives a Lagrange point on the constraint plane.

    4. Then, we evaluate \(f\) at the point \(\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)\): \[f\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)=\left(\frac{1}{3}\right)^2+\left(\frac{1}{3}\right)^2+\left(\frac{1}{3}\right)^2=\frac{3}{9}=\frac{1}{3}\] Therefore, a possible extremum of the function is \(\frac{1}{3}\). To verify it is a minimum, choose other points that satisfy the constraint from either side of the point we obtained above and calculate \(f\) at those points. For example, \[\begin{align*} f(1,0,0)&=1^2+0^2+0^2=1 \\[5pt] f(0,−2,3)&=0^2+(−2)^2+3^2=13. \end{align*}\] Both of these values are greater than \(\frac{1}{3}\), leading us to believe the extremum is a minimum, subject to the given constraint.
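As a geometric cross-check (not part of the original text): \(f(x,y,z)=x^2+y^2+z^2\) is the square of the distance from the origin, and the distance from the origin to the plane \(x+y+z=1\) is \(\dfrac{|0+0+0−1|}{\sqrt{1^2+1^2+1^2}}=\dfrac{1}{\sqrt{3}}\). Squaring gives the same minimum value \(\frac{1}{3}\), attained at the foot of the perpendicular, \(\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)\).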

    Exercise \(\PageIndex{3}\):

    Use the method of Lagrange multipliers to find the minimum value of the function

    \[f(x,y,z)=x+y+z \nonumber\]

    subject to the constraint \(x^2+y^2+z^2=1.\)

    Hint

    Use the problem-solving strategy for the method of Lagrange multipliers with an objective function of three variables.

    Answer

The Lagrange condition \(\vecs ∇f(x,y,z)=λ\vecs ∇g(x,y,z)\) gives \(1=2λx\), \(1=2λy\), and \(1=2λz\), so \(x=y=z\); substituting into the constraint \(x^2+y^2+z^2=1\) gives \(3x^2=1\), so the two Lagrange points are \(\left(\dfrac{\sqrt{3}}{3},\dfrac{\sqrt{3}}{3},\dfrac{\sqrt{3}}{3}\right)\) and \(\left(−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3}\right)\). Evaluating \(f\) at both points gives \[\begin{align*} f\left(\dfrac{\sqrt{3}}{3},\dfrac{\sqrt{3}}{3},\dfrac{\sqrt{3}}{3}\right)&=\dfrac{\sqrt{3}}{3}+\dfrac{\sqrt{3}}{3}+\dfrac{\sqrt{3}}{3}=\sqrt{3} \\ f\left(−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3}\right)&=−\dfrac{\sqrt{3}}{3}−\dfrac{\sqrt{3}}{3}−\dfrac{\sqrt{3}}{3}=−\sqrt{3}.\end{align*}\] Since the constraint (the unit sphere) is closed and bounded, we compare these values and conclude that \(f\) has a minimum value of \(−\sqrt{3}\) at the point \(\left(−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3}\right)\), subject to the given constraint.

    Problems with Two Constraints

The method of Lagrange multipliers can be applied to problems with more than one constraint. In this case the objective function, \(w\), is a function of three variables:

    \[w=f(x,y,z)\]

    and it is subject to two constraints:

    \[g(x,y,z)=0 \; \text{and} \; h(x,y,z)=0.\]

    There are two Lagrange multipliers, \(λ_1\) and \(λ_2\), and the system of equations becomes

    \[\begin{align*} \vecs ∇f(x_0,y_0,z_0)&=λ_1\vecs ∇g(x_0,y_0,z_0)+λ_2\vecs ∇h(x_0,y_0,z_0) \\[5pt] g(x_0,y_0,z_0)&=0\\[5pt] h(x_0,y_0,z_0)&=0 \end{align*}\]

    Example \(\PageIndex{4}\): Lagrange Multipliers with Two Constraints

    Find the maximum and minimum values of the function

    \[f(x,y,z)=x^2+y^2+z^2 \nonumber\]

    subject to the constraints \(z^2=x^2+y^2\) and \(x+y−z+1=0.\)

    Solution:

    Let’s follow the problem-solving strategy:

    1. The objective function is \(f(x,y,z)=x^2+y^2+z^2.\) To determine the constraint functions, we first subtract \(z^2\) from both sides of the first constraint, which gives \(x^2+y^2−z^2=0\), so \(g(x,y,z)=x^2+y^2−z^2\). The second constraint function is \(h(x,y,z)=x+y−z+1.\)
2. We then calculate the gradients of \(f,g,\) and \(h\): \[\begin{align*} \vecs ∇f(x,y,z)&=2x\hat{\mathbf i}+2y\hat{\mathbf j}+2z\hat{\mathbf k} \\[5pt] \vecs ∇g(x,y,z)&=2x\hat{\mathbf i}+2y\hat{\mathbf j}−2z\hat{\mathbf k} \\[5pt] \vecs ∇h(x,y,z)&=\hat{\mathbf i}+\hat{\mathbf j}−\hat{\mathbf k}. \end{align*}\] The equation \(\vecs ∇f(x,y,z)=λ_1\vecs ∇g(x,y,z)+λ_2\vecs ∇h(x,y,z)\) becomes \[2x\hat{\mathbf i}+2y\hat{\mathbf j}+2z\hat{\mathbf k}=λ_1(2x\hat{\mathbf i}+2y\hat{\mathbf j}−2z\hat{\mathbf k})+λ_2(\hat{\mathbf i}+\hat{\mathbf j}−\hat{\mathbf k}),\] which can be rewritten as \[2x\hat{\mathbf i}+2y\hat{\mathbf j}+2z\hat{\mathbf k}=(2λ_1x+λ_2)\hat{\mathbf i}+(2λ_1y+λ_2)\hat{\mathbf j}−(2λ_1z+λ_2)\hat{\mathbf k}.\] Next, we set the coefficients of \(\hat{\mathbf i}\), \(\hat{\mathbf j}\), and \(\hat{\mathbf k}\) equal to each other: \[\begin{align*}2x&=2λ_1x+λ_2 \\[5pt]2y&=2λ_1y+λ_2 \\[5pt]2z&=−2λ_1z−λ_2. \end{align*}\] The two equations that arise from the constraints are \(z^2=x^2+y^2\) and \(x+y−z+1=0\). Combining these equations with the previous three equations gives \[\begin{align*} 2x&=2λ_1x+λ_2 \\[5pt]2y&=2λ_1y+λ_2 \\[5pt]2z&=−2λ_1z−λ_2 \\[5pt]z^2&=x^2+y^2 \\[5pt]x+y−z+1&=0. \end{align*}\]
3. The first three equations contain the variable \(λ_2\). Solving the third equation for \(λ_2\) and substituting into the first and second equations reduces the number of equations to four: \[\begin{align*}2x&=2λ_1x−2λ_1z−2z \\[5pt] 2y&=2λ_1y−2λ_1z−2z\\[5pt] z^2&=x^2+y^2\\[5pt] x+y−z+1&=0. \end{align*}\] Next, we solve the first and second equation for \(λ_1\). The first equation gives \(λ_1=\dfrac{x+z}{x−z}\), the second equation gives \(λ_1=\dfrac{y+z}{y−z}\). We set the right-hand sides equal to each other and cross-multiply: \[\begin{align*} \dfrac{x+z}{x−z}&=\dfrac{y+z}{y−z} \\[5pt](x+z)(y−z)&=(x−z)(y+z) \\[5pt]xy−xz+yz−z^2&=xy+xz−yz−z^2 \\[5pt]2yz−2xz&=0 \\[5pt]2z(y−x)&=0. \end{align*}\] Therefore, either \(z=0\) or \(y=x\). If \(z=0\), then the first constraint becomes \(0=x^2+y^2\). The only real solution to this equation is \(x=0\) and \(y=0\), which gives the ordered triple \((0,0,0)\). This point does not satisfy the second constraint, so it is not a solution. Next, we consider \(y=x\), which reduces the number of equations to three: \[\begin{align*}y &= x \\[5pt] z^2 &= x^2 +y^2 \\[5pt] x + y -z+1&=0. \end{align*} \] We substitute the first equation into the second and third equations: \[\begin{align*} z^2 &= 2x^2 \\[5pt] 2x-z+1 &= 0. \end{align*} \] Then, we solve the second equation for \(z\), which gives \(z=2x+1\). We then substitute this into the first equation, \[\begin{align*} z^2 &= 2x^2 \\[5pt] (2x +1)^2 &= 2x^2 \\[5pt] 4x^2 + 4x +1 &= 2x^2 \\[5pt] 2x^2 +4x +1 &=0, \end{align*}\] and use the quadratic formula to solve for \(x\): \[ x = \dfrac{-4 \pm \sqrt{4^2 -4(2)(1)} }{2(2)} = \dfrac{-4\pm \sqrt{8}}{4} = \dfrac{-4 \pm 2\sqrt{2}}{4} = -1 \pm \dfrac{\sqrt{2}}{2}. \] Recall \(y=x\), so this solves for \(y\) as well. Then, \(z=2x+1\), so \[z = 2x +1 =2 \left( -1 \pm \dfrac{\sqrt{2}}{2} \right) +1 = -2 + 1 \pm \sqrt{2} = -1 \pm \sqrt{2} . \] Therefore, there are two ordered triple solutions: \[\left( -1 + \dfrac{\sqrt{2}}{2} , -1 + \dfrac{\sqrt{2}}{2} , -1 + \sqrt{2} \right) \; \text{and} \; \left( -1 -\dfrac{\sqrt{2}}{2} , -1 -\dfrac{\sqrt{2}}{2} , -1 -\sqrt{2} \right). \]
4. We substitute \(\left(−1+\dfrac{\sqrt{2}}{2},−1+\dfrac{\sqrt{2}}{2}, −1+\sqrt{2}\right) \) into \(f(x,y,z)=x^2+y^2+z^2\), which gives \[\begin{align*} f\left( -1 + \dfrac{\sqrt{2}}{2}, -1 + \dfrac{\sqrt{2}}{2} , -1 + \sqrt{2} \right) &= \left( -1+\dfrac{\sqrt{2}}{2} \right)^2 + \left( -1 + \dfrac{\sqrt{2}}{2} \right)^2 + (-1+\sqrt{2})^2 \\[5pt] &= \left( 1-\sqrt{2}+\dfrac{1}{2} \right) + \left( 1-\sqrt{2}+\dfrac{1}{2} \right) + (1 -2\sqrt{2} +2) \\[5pt] &= 6-4\sqrt{2}. \end{align*}\] Then, we substitute \(\left(−1−\dfrac{\sqrt{2}}{2}, -1−\dfrac{\sqrt{2}}{2}, -1−\sqrt{2}\right)\) into \(f(x,y,z)=x^2+y^2+z^2\), which gives \[\begin{align*} f\left(−1−\dfrac{\sqrt{2}}{2}, -1−\dfrac{\sqrt{2}}{2}, -1−\sqrt{2} \right) &= \left( -1-\dfrac{\sqrt{2}}{2} \right)^2 + \left( -1 - \dfrac{\sqrt{2}}{2} \right)^2 + (-1-\sqrt{2})^2 \\[5pt] &= \left( 1+\sqrt{2}+\dfrac{1}{2} \right) + \left( 1+\sqrt{2}+\dfrac{1}{2} \right) + (1 +2\sqrt{2} +2) \\[5pt] &= 6+4\sqrt{2}. \end{align*}\] \(6+4\sqrt{2}\) is the maximum value and \(6−4\sqrt{2}\) is the minimum value of \(f(x,y,z)\), subject to the given constraints.
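As a check on this example (our own sketch), SymPy can solve the full two-multiplier system directly; it recovers the same two points and the same maximum and minimum values.

```python
# Check of Example 4: solve grad f = lambda1*grad g + lambda2*grad h together
# with the two constraints g = 0 and h = 0.
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z lambda_1 lambda_2')
f = x**2 + y**2 + z**2
g = x**2 + y**2 - z**2     # from z^2 = x^2 + y^2
h = x + y - z + 1

eqs = [sp.diff(f, v) - l1*sp.diff(g, v) - l2*sp.diff(h, v) for v in (x, y, z)]
eqs += [g, h]              # both constraints set equal to zero

sols = sp.solve(eqs, (x, y, z, l1, l2), dict=True)
real_sols = [s for s in sols if all(v.is_real for v in s.values())]
for s in real_sols:
    print(s, sp.simplify(f.subs(s)))
# The two real solutions give f = 6 - 4*sqrt(2) (the minimum) and 6 + 4*sqrt(2) (the maximum).
```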

    Exercise \(\PageIndex{4}\)

    Use the method of Lagrange multipliers to find the minimum value of the function

    \[f(x,y,z)=x^2+y^2+z^2\]

    subject to the constraints \( 2x+y+2z=9\) and \(5x+5y+7z=29.\)

    Hint

    Use the problem-solving strategy for the method of Lagrange multipliers with two constraints.

    Answer

\(f(2,1,2)=9\) is a relative minimum of \(f\), subject to the given constraints.

    Key Concepts

    • An objective function combined with one or more constraints is an example of an optimization problem.
    • To solve optimization problems, we apply the method of Lagrange multipliers using a four-step problem-solving strategy.

    Key Equations

    • Method of Lagrange multipliers, one constraint

    \(\vecs ∇f(x,y)=λ\vecs ∇g(x,y)\)

    \(g(x,y)=k\)

    • Method of Lagrange multipliers, two constraints

    \(\vecs ∇f(x_0,y_0,z_0)=λ_1\vecs ∇g(x_0,y_0,z_0)+λ_2\vecs ∇h(x_0,y_0,z_0)\)

    \(g(x_0,y_0,z_0)=0\)

    \(h(x_0,y_0,z_0)=0\)

    Glossary

    constraint
    an inequality or equation involving one or more variables that is used in an optimization problem; the constraint enforces a limit on the possible solutions for the problem
    Lagrange multiplier
the constant (or constants) used in the method of Lagrange multipliers; in the case of a single constraint, it is represented by the variable \(λ\)
    method of Lagrange multipliers
    a method of solving an optimization problem subject to one or more constraints
    objective function
    the function that is to be maximized or minimized in an optimization problem
    optimization problem
    calculation of a maximum or minimum value of a function of several variables, often using Lagrange multipliers

    Contributors

• Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC BY-NC-SA 4.0 license. Download for free at http://cnx.org.

