Lecture 14
September 27, 2023
Last class, we looked at the following model for the treatment of wastewater from two point releases:
\[\begin{alignat}{3} & \min_{E_1, E_2} &\quad 5000E_1^2 + 3000E_2^2 & \\ & \text{subject to:} & 1000 E_1 &\geq 500 \\ & & 835E_1 + 1200E_2 &\geq 1459 \\ & & E_1, E_2 &\geq 0 \\ & & E_1, E_2 &\leq 1 \end{alignat}\]
We can solve this graphically (see the JuMP tutorial for example code). The solution occurs at the intersection of the two water quality constraints, where:
\[E_1 = 0.5, E_2 = 0.85\]
and the cost of this treatment plan is
\[C(0.5, 0.85) = \$ 3417.\]
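As a cross-check on the graphical approach, here is a minimal JuMP sketch of how the same model could be solved numerically (the Ipopt solver choice is an assumption, not specified in the lecture):

```julia
using JuMP, Ipopt

# The wastewater treatment model from above, solved numerically.
model = Model(Ipopt.Optimizer)
@variable(model, 0 <= E1 <= 1)   # treatment efficiency at release 1
@variable(model, 0 <= E2 <= 1)   # treatment efficiency at release 2
@objective(model, Min, 5000 * E1^2 + 3000 * E2^2)  # treatment cost
@constraint(model, 1000 * E1 >= 500)                # water quality constraint 1
@constraint(model, 835 * E1 + 1200 * E2 >= 1459)    # water quality constraint 2
optimize!(model)

value(E1), value(E2), objective_value(model)
```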
What are some challenges with finding a solution by trial and error?
Most search algorithms look for critical points to find candidate optima. Then the “best” of the critical points is the global optimum.
Two common approaches: exploit the structure of the objective (gradient-based search) or use stochastic search (e.g., Monte Carlo). These methods work pretty well, but can get stuck at local optima or require many evaluations of the objective.
Without constraints, our goal is to find extrema of the function.
Can do this using the structure of the objective (gradient) or through stochastic search (Monte Carlo).
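To make these two ideas concrete, here is a minimal plain-Julia sketch (the toy objective and parameter choices are illustrative, not from the lecture):

```julia
# Toy one-dimensional objective with minimum at x = 2.
f(x) = (x - 2)^2 + 1
∇f(x) = 2 * (x - 2)          # its analytical gradient

# Gradient-based search: follow the objective's structure downhill.
function gradient_descent(x0; step=0.1, iters=200)
    x = x0
    for _ in 1:iters
        x -= step * ∇f(x)
    end
    return x
end

# Stochastic (Monte Carlo) search: sample candidates and keep the best.
function random_search(lo, hi; samples=5_000)
    best_x, best_f = lo, f(lo)
    for _ in 1:samples
        x = lo + (hi - lo) * rand()
        if f(x) < best_f
            best_x, best_f = x, f(x)
        end
    end
    return best_x
end

gradient_descent(0.0)        # ≈ 2.0
random_search(-10.0, 10.0)   # ≈ 2.0, up to sampling noise
```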
Now, notice the effect of a constraint!
For a constrained problem, we also have to look along the constraint to see if that creates a solution.
We can solve some constrained problems using Lagrange Multipliers!
Recall (maybe) that the Lagrange Multipliers method requires equality constraints. But we can easily create those with “dummy” variables.
Original Problem
\[ \begin{aligned} & \min && f(x_1, x_2) \\ & \text{subject to:} && x_1 \geq A \\ & && x_2 \leq B \end{aligned} \]
With Dummy Variables
\[ \begin{aligned} & \min && f(x_1, x_2) \\ & \text{subject to:} && x_1 - S_1^2 = A \\ & && x_2 + S_2^2 = B \end{aligned} \]
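For example, the first water quality constraint from the wastewater problem, \(1000 E_1 \geq 500\), would become \(1000 E_1 - S_1^2 = 500\): since \(S_1^2 \geq 0\), the equality can only hold when \(1000 E_1 \geq 500\).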
Then the Lagrangian function becomes:
\[ H(\mathbf{x}, S_1, S_2, \lambda_1, \lambda_2) = f(\mathbf{x}) - \lambda_1(x_1 - S_1^2 - A) - \lambda_2(x_2 + S_2^2 - B) \]
where \(\lambda_1\), \(\lambda_2\) are penalties for violating the constraints.
The \(\lambda_i\) are the eponymous Lagrange multipliers.
Next step: locate possible optima where the partial derivatives of the Lagrangian are zero.
\[\frac{\partial H(\cdot)}{\partial \cdot} = 0\]
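Writing this out for the Lagrangian \(H\) above, with one equation per decision variable, slack variable, and multiplier, gives:
\[ \begin{aligned} \frac{\partial H}{\partial x_1} &= \frac{\partial f}{\partial x_1} - \lambda_1 = 0, & \frac{\partial H}{\partial S_1} &= 2\lambda_1 S_1 = 0, & \frac{\partial H}{\partial \lambda_1} &= -(x_1 - S_1^2 - A) = 0, \\ \frac{\partial H}{\partial x_2} &= \frac{\partial f}{\partial x_2} - \lambda_2 = 0, & \frac{\partial H}{\partial S_2} &= -2\lambda_2 S_2 = 0, & \frac{\partial H}{\partial \lambda_2} &= -(x_2 + S_2^2 - B) = 0. \end{aligned} \]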
This is actually many equations, even though our original problem was low-dimensional, and can be slow to solve.
Despite these challenges, Lagrange Multipliers are at the core of many advanced search algorithms.
Friday: Solving linear optimization models (linear programming)
Monday: Project proposal peer review.
Wednesday: Guest lecture by Diana Hackett (Mann librarian) on regulations research.
Friday: Lab on linear programming in Julia.