Linear Programming


Lecture 15

September 29, 2023

Review and Questions

Last Class

  • Optimization for solving decision problems.
  • Discussed trial and error and search algorithms.
  • For unconstrained problems, gradient-based methods can get “stuck” at local optima.
  • Evolutionary algorithms can require many iterations and evaluations.
  • Reviewed Lagrange multipliers for constrained optimization.

Questions?

Text: VSRIKRISH to 22333

URL: https://pollev.com/vsrikrish

Linear Programming

What is Linear Programming?

Linear programming refers to optimization for linear models.

As we will see over the next few weeks, linear models are:

  • relatively simple (which lets us solve and analyze them “easily”);
  • surprisingly applicable to many contexts!

Linear Functions

Recall that a function \(f(x_1, \ldots, x_n)\) is linear if \[f(x_1, \ldots, x_n) = a_1x_1 + a_2x_2 + \ldots + a_n x_n.\]

The key is that linear models have very simple geometry (see the sketch below):

  • they define hyperplanes;
  • they have constant derivatives/gradients.
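
A minimal Julia sketch of this (the coefficients in `a` are made up for illustration):

using LinearAlgebra

a = [1.0, -2.0, 3.0]           # hypothetical coefficients a_1, ..., a_n
f(x) = dot(a, x)               # f(x_1, ..., x_n) = a_1 x_1 + ... + a_n x_n

f([1.0, 1.0, 1.0])             # 2.0
# The gradient of f is the constant vector a, independent of x,
# and the level sets {x : f(x) = c} are hyperplanes.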

Linear Programs

A linear program (LP), or a linear optimization model, has the following characteristics:

  • Linearity: The objective function and constraints are all linear;
  • Divisibility: The decision variables are continuous (they can be fractional levels);
  • Certainty: The problem is deterministic.
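
Putting these together, a general LP can be written compactly (a standard form, consistent with the example later in this lecture; the vector \(c\), matrix \(A\), and vector \(b\) collect the coefficients):

\[\begin{aligned} \max_{x} \quad & c^T x \\ \text{subject to:} \quad & Ax \leq b, \\ & x \geq 0. \end{aligned}\]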

Linearization

Linear models come up frequently because we can linearize nonlinear functions.

When we linearize components of a mathematical program, the result is called a linear relaxation of the original problem.

Linearization

Note that the linearization is often quite sensitive to the points that are used to find the slope.

So this is best done when you have a rough idea of the variable range of interest.
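
A minimal sketch of a tangent-line (first-order Taylor) linearization, using a made-up function \(f(x) = x^2\); note how the approximation error grows away from the expansion point:

f(x) = x^2
df(x) = 2x                                   # derivative of f, computed by hand
linearize(f, df, x0) = x -> f(x0) + df(x0) * (x - x0)

g = linearize(f, df, 3.0)                    # tangent line at x0 = 3
g(3.5)                                       # 12.0, vs. f(3.5) = 12.25
g(8.0)                                       # 39.0, vs. f(8.0) = 64.0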

Why Is Solving LPs Straightforward?

using Plots, LaTeXStrings, Measures   # LaTeXStrings for L"..." labels, Measures for mm

# Boundary lines of the constraints for an illustrative feasible region
x = 2:0.1:11
f1(x) = 4.5 .* x
f2(x) = -x .+ 16
f3(x) = -1.5 .* x .+ 12
f4(x) = 0.5 .* x

# Shade the feasible region between the lower and upper constraint envelopes
p = plot(x, max.(f3(x), f4(x)), fillrange=min.(f1(x), f2(x)), color=:lightblue, grid=true, legend=false, xlabel=L"x", ylabel=L"y", xlims=(-2, 20), framestyle=:origin, ylims=(-2, 20), minorticks=5, tickfontsize=16, guidefontsize=16, right_margin=5mm)

# Overlay each constraint boundary across the full axis range
plot!(-2:0.1:20, f1.(-2:0.1:20), color=:green, linewidth=3)
plot!(-2:0.1:20, f2.(-2:0.1:20), color=:red, linewidth=3)
plot!(-2:0.1:20, f3.(-2:0.1:20), color=:brown, linewidth=3)
plot!(-2:0.1:20, f4.(-2:0.1:20), color=:purple, linewidth=3)
plot!(size=(600, 600))

An optimal solution must lie on the boundary of the feasible region (which is a polytope).

Why Is Solving LPs Straightforward?

More specifically:

  • an optimal solution must occur at an intersection of constraints;
  • so we can focus on finding and analyzing the corners.

Why Is Solving LPs Straightforward?

This is the basis of all simplex methods for solving LPs.

These methods go back to George Dantzig in the 1940s and are still widely used today.

Why Do LPs Have A Corner Solution?

Can a solution be in the interior? No: the gradient of a linear objective is constant and (for a non-trivial objective) non-zero, so from any interior point we can move along the gradient and improve.

Why Do LPs Have A Corner Solution?

What about along an edge but not at a corner? Along an edge, the objective either improves toward one endpoint or is constant, so some corner is always at least as good.

Example: Solving an LP

\[\begin{alignedat}{3} & \max_{x_1, x_2} & 230x_1 + 120x_2 & \\ & \text{subject to:} & &\\ & & 0.9x_1 + 0.5x_2 &\leq 600 \\ & & x_1 + x_2 &\leq 1000 \\ & & x_1, x_2 &\geq 0 \end{alignedat}\]

Examining Corner Points

Corner \((x_1, x_2)\) Objective
\((0,0)\) \(0\)
\((0, 1000)\) \(120000\)
\((667, 0)\) \(153410\)
\((250, 750)\) \(147500\)
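
We can check the table programmatically (a quick sketch; corner coordinates as in the table above):

objective((x1, x2)) = 230x1 + 120x2
corners = [(0, 0), (0, 1000), (667, 0), (250, 750)]
argmax(objective, corners)                   # (667, 0)
maximum(objective, corners)                  # 153410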

Solving an LP

Manually checking the corner points is all well and good for this simple example, but does it scale well?

LP solvers (often based on the simplex method) automate this process, as in the sketch below.
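
A minimal sketch using JuMP (the solver choice, HiGHS, is an assumption; any LP solver works):

using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x1 >= 0)
@variable(model, x2 >= 0)
@objective(model, Max, 230x1 + 120x2)
@constraint(model, c1, 0.9x1 + 0.5x2 <= 600)
@constraint(model, c2, x1 + x2 <= 1000)
optimize!(model)

value(x1), value(x2)                         # ≈ (666.7, 0.0)
objective_value(model)                       # ≈ 153333.3 (the table's 153410 uses the rounded corner (667, 0))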

Shadow Prices

Binding Constraints

A solution will be found at one of the corner points of the feasible polytope.

This means that at this solution, one or more constraints are binding: they hold with equality, and if we relaxed (weakened) them, we could improve the solution.

Binding Constraints

At the optimum \((667, 0)\), the binding constraints are

  • \(x_2 \geq 0\) (since \(x_2 = 0\));
  • \(0.9 x_1 + 0.5 x_2 \leq 600\),

but not

  • \(x_1 + x_2 \leq 1000\) (since \(667 < 1000\)).

Shadow Prices

The marginal cost of a constraint is the amount by which the solution would improve if the constraint's capacity were relaxed by one unit.

This is also referred to as the shadow price of the constraint (shadow prices are also called the constraint's dual variables).

A non-zero shadow price tells us that the corresponding constraint is binding, and the values rank which constraints are most influential.
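
Continuing the JuMP sketch above (assuming `model`, `c1`, and `c2` have been defined and `optimize!(model)` has run), JuMP reports shadow prices directly:

shadow_price(c1)                             # ≈ 255.6: 0.9x1 + 0.5x2 <= 600 is binding
shadow_price(c2)                             # 0.0: x1 + x2 <= 1000 is not binding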

Shadow Prices are Lagrange Multipliers

The shadow prices are the Lagrange multipliers of the optimization problem.

If our inequality constraints \(X \geq A\) and \(X \leq B\) are written as \(X - S_1^2 = A\) and \(X + S_2^2 = B\):

\[H(X, S_1, S_2, \lambda_1, \lambda_2) = Z(X) - \lambda_1(X - S_1^2 - A) - \lambda_2(X + S_2^2 - B),\]

\[\Rightarrow \qquad \frac{\partial H}{\partial A} = \lambda_1, \qquad \frac{\partial H}{\partial B} = \lambda_2.\]
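
As a concrete check (a worked example, not from the slides): at the optimum of the example LP, \(x_2 = 0\) and \(x_1 = b/0.9\), where \(b = 600\) is the capacity of the binding constraint. Then

\[Z(b) = 230 \cdot \frac{b}{0.9} \quad \Rightarrow \quad \lambda_1 = \frac{dZ}{db} = \frac{230}{0.9} \approx 255.6,\]

so relaxing \(0.9x_1 + 0.5x_2 \leq 600\) by one unit improves the objective by about 255.6, while the shadow price of \(x_1 + x_2 \leq 1000\) is zero because it is not binding.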

Key Takeaways

Key Takeaways

  • Linear Programs: straightforward to solve!
  • For an LP, an optimum must occur at a corner of the feasible polytope.
  • Shadow prices (dual variables) of the constraints give the rate at which the solution would improve if the constraints were relaxed.

Upcoming Schedule

Next Classes

Monday: Project proposal peer review

Wednesday: Guest lecture by Diana Hackett (Mann librarian) on regulations research.

Friday: Lab on linear programming in Julia.