Exercises with the symbol \(^+\) are to be done at home before the class. Exercises with the symbol \(^*\) will be tackled in class. The remaining exercises are left for self-training after the exercise class. Some exercises are taken from the textbook, and their number is reported; their solutions are at the end of the book.
1\(^+\) (11.1)
Suppose you do not know any optimization algorithm for solving a linear program. You decide to evaluate all the vertices and determine, by inspection, which one minimizes the objective function. Give a loose upper bound on the number of possible minimizers you will examine. Furthermore, does this method properly handle all linear constrained optimization problems?
We have chosen to minimize a linear program by evaluating every vertex in the convex polytope formed by the constraints. Every vertex is thus a potential minimizer. Vertices are defined by intersections of active constraints. As every inequality constraint can either be active or inactive, and assuming there are \(n\) inequality constraints, we do not need to examine more than \(2^n\) combinations of constraints. This method does not correctly report unbounded linear constrained optimization problems as unbounded.
2\(^+\) (11.2)
If the program in Example 11.1 of [KW] is bounded below, argue that the simplex method must converge.
The simplex method is guaranteed either to improve with respect to the objective function with each step or to preserve the current value of the objective function. Any linear program will have a finite number of vertices. So long as a heuristic, such as Bland’s rule, is employed such that cycling does not occur, the simplex method must converge on a solution.
3\(^+\) (11.3)
Suppose we want to solve: \[\begin{aligned}
\text{minimize}\:\:& 6x_1 +5x_2\\
\text{subject to}\:\: & 3x_1-2x_2 \geq 5.
\end{aligned}\] How would we translate this problem into a linear program in equality form with the same minimizer?
The problem is already in minimization form. Let’s rewrite the inequality in less than equal form: \[\begin{aligned}
\text{minimize}\:\:& 6x_1 +5x_2\\
\text{subject to}\:\: & -3x_1+2x_2 \leq -5.
\end{aligned}\] Since both variables are free, to put the problem in a form in which all variables are greater than or equal to 0 we split each original variable into the difference of two nonnegative variables, \(x_1=x_1^+-x_1^-\) and \(x_2=x_2^+-x_2^-\), and we add a slack variable \(s_1\geq 0\) to turn the inequality into an equality: \[\begin{aligned}
\text{minimize}\:\:& 6x_1^+-6x_1^-+5x_2^+-5x_2^-\\
\text{subject to}\:\: & -3x_1^++3x_1^-+2x_2^+-2x_2^-+s_1 = -5,\\
& x_1^+,x_1^-,x_2^+,x_2^-,s_1\geq 0.
\end{aligned}\]
The simplex algorithm works by selecting a basis and then changing the basis. The initial basis has to be feasible. This implies that the matrix \(A_B\) is invertible and that the solution \(x_B=A_B^{-1}b\geq 0\). The basis in a problem of size \(1\times 5\) has size \(1\).
Selecting the slack \(s_1\) would give \(x_B=A_B^{-1}b=-5<0\), which is infeasible, so let’s instead select \(x_1^+\) (index 0) to be in the basis, for which \(x_B=5/3\geq 0\).
import numpy as np
c = np.array([6, -6, 5, -5, 0])
A = np.array([[-3, 3, 2, -2, 1]])
b = np.array([-5])
# initial basis
B = np.array([0])
N = np.array([1, 2, 3, 4])
print("c_B:", c[B])
print("c_N:", c[N])
c_B: [6]
c_N: [-6 5 -5 0]
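Continuing this sketch, we can verify that the basis is feasible and price the nonbasic columns; the variable names follow the snippet above, while the use of reduced costs as the next step is my own illustration:

```python
import numpy as np

c = np.array([6, -6, 5, -5, 0])
A = np.array([[-3, 3, 2, -2, 1]])
b = np.array([-5])
B = np.array([0])            # basic index: x_1^+
N = np.array([1, 2, 3, 4])   # nonbasic indices

A_B = A[:, B]                        # basis matrix (here 1x1)
x_B = np.linalg.solve(A_B, b)        # basic solution; feasible iff all >= 0
lam = np.linalg.solve(A_B.T, c[B])   # simplex multipliers
mu_N = c[N] - A[:, N].T @ lam        # reduced costs of the nonbasic variables
print("x_B:", x_B)
print("mu_N:", mu_N)                 # a negative entry signals an improving column
```

Here \(x_B=5/3\geq 0\) confirms feasibility, and any negative reduced cost identifies a candidate entering variable.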
4
Consider the following problem: \[\begin{aligned}
\text{minimize} \; &5x_1 + 4x_2\\
\text{ s.t.}\;& 2x_1 + 3x_2 \leq 5\\
&4x_1 + x_2 \leq 11\\
&x_1,x_2\geq 0
\end{aligned}\] Solve the problem numerically implementing the simplex algorithm.
We observe that the optimal solution sometimes becomes negative and hence infeasible. This is not possible in the simplex algorithm. Once we have a feasible vertex we can always move to another feasible vertex. Hence, the implementation has a bug. Can you find it? Hint: the problem is in the function compute_leaving_var.
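The buggy implementation is not reproduced here; for comparison, a correct version of the ratio test might look as follows (the name compute_leaving_var comes from the hint, while the argument conventions, a vector of basic values x_B and the search direction d, are my own assumptions):

```python
import numpy as np

def compute_leaving_var(x_B, d):
    """Minimum-ratio test of the simplex method.

    Among the basic variables with d[i] > 0, return the index that
    minimizes x_B[i] / d[i]; return None if no component of d is
    positive (the problem is unbounded along this direction).
    A classic bug is taking the ratio over all i, including d[i] <= 0,
    which can drive a basic variable negative (infeasible)."""
    mask = d > 1e-12                 # only components that decrease x_B
    if not np.any(mask):
        return None                  # unbounded direction
    ratios = np.full(len(x_B), np.inf)
    ratios[mask] = x_B[mask] / d[mask]
    return int(np.argmin(ratios))
```

For example, with x_B = [5, 11] and d = [2, 4], the minimum ratio 5/2 is attained at index 0, so the first basic variable leaves the basis.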
5\(^+\) (11.4)
Suppose your optimization algorithm has found a search direction \(\vec
d\) and you want to conduct a line search. However, you know that there is a linear constraint \(\vec w^T {\vec x}\geq \vec 0\). How would you modify the line search to take this constraint into account? You can assume that your current design point is feasible.
If the current design point \({\vec x}\) is feasible, then \(\vec w^T{\vec x}= b\geq 0\). We want the next point to maintain feasibility, so we require \(\vec w^T({\vec x}+\alpha\vec d)\geq 0\), that is, \(b+\alpha\,\vec w^T\vec d\geq 0\). Solving for the critical step length gives \(\alpha = -b/(\vec w^T\vec d)\). If this value is positive (which happens when \(\vec w^T\vec d<0\)), it is an upper bound on the step length; if it is negative, the constraint cannot be violated along \(\vec d\) and it can be ignored.
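As a sketch of this modification (the function name and the choice of clipping a tentative step length are my own):

```python
import numpy as np

def constrained_step(x, d, w, alpha=1.0):
    """Take a step of length at most `alpha` along direction d from the
    feasible point x, clipped so that the constraint w @ x >= 0 is kept."""
    wd = w @ d
    if wd < 0:                        # only then can the constraint be violated
        alpha_max = -(w @ x) / wd     # step at which w @ (x + a*d) reaches 0
        alpha = min(alpha, alpha_max)
    return x + alpha * d
```

For example, from x = (1, 0) with w = (1, 0) and d = (-1, 0), a tentative step alpha = 2 is clipped to 1, stopping exactly on the constraint boundary.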
6\(^+\) (11.5)
Reformulate the linear program
\[\begin{aligned}
\text{minimize} \; &\vec c^T{\vec x}\\
\text{ s.t.}\;& A\vec x\geq 0
\end{aligned}\] into an unconstrained optimization problem with a log barrier penalty.
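With a log barrier with coefficient \(\mu>0\), a standard interior-point reformulation of this program is the unconstrained problem \[\text{minimize}\;\; \vec c^T\vec x-\mu\sum_{i=1}^{m}\log(\vec a_i^T\vec x),\] where \(\vec a_i^T\) denotes the \(i\)-th row of \(A\); the objective is defined only on the interior \(A\vec x>\vec 0\), and the penalty grows to infinity as any constraint approaches its boundary.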
Note that we can choose \(\vec x(0) = \vec 0\) as our initial point. You can generate instances of this problem by choosing \(a_i\) from some distribution on \(\mathbb{R}^n\) and then solving the problem with scipy.optimize.minimize.
8
Write the Lagrangian function for the following problem and derive the dual function \(\cal D\). Determine \(\vec x(\vec \lambda)\), the minimizer of the Lagrangian, and the gradient of \(\cal D\).
A pharmaceutical company produces three types of drugs: A, B, and C. These drugs require raw materials, chemical processing time, and packaging units. The goal is to maximize profit, taking into account restrictions due to resource availability and production balance.
Drug A contributes 60 DKK per unit to the profit, drug B contributes 100 DKK per unit, and drug C contributes 80 DKK per unit. The consumption per unit of drug of raw materials, chemical processing time, and packaging units is given in Table [tab], together with the quantities available. While consumption of raw material and processing time cannot exceed the amounts available, an excess of packaging units is allowed. Extra packaging units can be bought, but each extra unit reduces profit by 5 DKK, while leftover packaging units can be reused in the next production period and hence contribute positively to the profit by the same amount per unit. Due to contractual obligations, Drug B must be produced in exactly twice the amount of Drug A.
Formulate the problem in linear programming terms. Write first the instantiated model and then the abstract, general model separating model from data.
Data from the pharmaceutical company:

                             Drug A   Drug B   Drug C   available
raw material (Kg)                 5        8        6         600
processing time (hours)           3        4        5         400
packaging units                   2        3        1         200
profit (DKK/unit)                60      100       80          -5
To model a problem in LP terms we need to define:
the sets of problem components and the known parameters
the decision variables
the objective function
the constraints
Decision Variables
\(x_1\) Units of Drug A produced
\(x_2\) Units of Drug B produced
\(x_3\) Units of Drug C produced
\(x_4\) Additional packaging units (can be negative if surplus)
Objective Function (Maximization) \[\text{maximize} \; Z=60x_1+100x_2+80x_3-5x_4\]
Constraints
Raw Material Constraint (Inequality) The factory has at most 600 kg of raw material: \[5x_1+8x_2+6x_3\leq 600\]
Processing Time Constraint (Inequality) The available chemical processing time is 400 hours: \[3x_1+4x_2+5x_3\leq 400\]
Packaging Balance Constraint (Equality) The total packaging units used must match available packaging plus extra units: \[2x_1+3x_2+x_3=200+x_4\]
If \(x_4>0\), extra packaging was bought.
If \(x_4<0\), excess packaging is left unused.
Drug B Production Requirement (Equality) Due to contractual obligations, Drug B must be produced in exactly twice the amount of Drug A: \[x_2=2x_1\]
Non-negativity and Unbounded Variables
\(x_1,x_2,x_3\geq 0\)
\(x_4\) can be negative (if packaging is left over) or positive (if additional units are purchased).
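An abstract model separating structure from data might read as follows (a sketch; the index sets and parameter symbols are my own naming). Let \(J\) be the set of drugs, \(I\) the set of strictly limited resources, \(a_{ij}\) the consumption of resource \(i\) per unit of drug \(j\), \(b_i\) the availability of resource \(i\), \(p_j\) the unit profit of drug \(j\), \(u_j\) the packaging units consumed per unit of drug \(j\), \(b_u\) the packaging units available, and \(q\) the cost of an extra packaging unit. With decision variables \(x_j\geq 0\) (units of drug \(j\)) and \(x_e\in\mathbb{R}\) (extra packaging units):
\[\begin{aligned}
\text{maximize}\;& \sum_{j\in J} p_j x_j - q\,x_e\\
\text{s.t.}\;& \sum_{j\in J} a_{ij} x_j \leq b_i \quad i\in I\\
& \sum_{j\in J} u_j x_j = b_u + x_e\\
& x_B = 2x_A\\
& x_j\geq 0 \quad j\in J,\qquad x_e\in\mathbb{R}
\end{aligned}\]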
10\(^*\)
Consider the following linear programming problem: \[\begin{aligned}
\text{maximize} \;& Z=60x_1+100x_2+80x_3-5x_4\\
\text{ s.t.}\; &5x_1+8x_2+6x_3\leq 600\\
&3x_1+4x_2+5x_3\leq 400\\
&2x_1+3x_2+x_3=200+x_4\\
&x_2=2x_1\\
&x_1,x_2,x_3\geq 0\\
&x_4 \in \Re
\end{aligned}\]
Your tasks:
Transform the problem into standard form
Transform the problem into equality form
Solve the problem numerically with scipy.optimize.linprog. Read this tutorial.
The standard or the equality forms are already good for scipy.optimize.linprog. However, let’s put the problem in yet another form that requires fewer manipulations of the initially given form. \[\begin{aligned}
\text{minimize} \;& -Z=-60x_1-100x_2-80x_3+5x_4\\
\text{ s.t.}\; &5x_1+8x_2+6x_3+0x_4\leq 600\\
&3x_1+4x_2+5x_3+0x_4\leq 400\\
&2x_1+3x_2+x_3-x_4=200\\
&2x_1-x_2+0x_3+0x_4=0\\
&x_1,x_2,x_3\geq 0\\
&x_4 \in \Re
\end{aligned}\]
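The call to scipy.optimize.linprog that produces the results discussed below is not shown; a possible sketch (choosing the HiGHS method is my assumption):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-60, -100, -80, 5])          # minimize -Z
A_ub = np.array([[5, 8, 6, 0],             # raw material
                 [3, 4, 5, 0]])            # processing time
b_ub = np.array([600, 400])
A_eq = np.array([[2, 3, 1, -1],            # packaging balance
                 [2, -1, 0, 0]])           # x2 = 2*x1
b_eq = np.array([200, 0])
bounds = [(0, None)] * 3 + [(None, None)]  # x1..x3 >= 0, x4 free

result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=bounds, method="highs")
```

With these data, result.fun should be about -7846.15, i.e. an optimal profit of roughly 7846.15 DKK.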
Elements can be accessed individually. For example, the solution can be accessed by:
print(result.x)    # solution
print(result.fun)  # objective function value
The residuals are the values of the slack variables and tell us whether the corresponding constraint is active or inactive in the optimal solution.
The marginals (or dual values, shadow prices, Lagrange multipliers) are the values of the dual variables that can be obtained as a by-product of the simplex method. They are called marginals because they tell us how much the objective function value would change if we increased the right-hand side of the corresponding constraint by 1. For example, the marginal associated with the second inequality constraint is -6.538; hence we expect the optimal value of the objective function to decrease by \(6.538\,\epsilon\) if we add a small amount \(\epsilon\) to the right-hand side of the second inequality constraint. Indeed:
print(result.x)    # solution
print(result.fun)  # objective function value
-7852.692307692308
11
Two opponents, say Antigonus and Brasidas, are fighting a battle over three mountain passes. Each of them commands seven legions. The one who sends more legions to a pass occupies it, but when the same number of legions meet, there will be a draw. Finally, the one who occupies more passes than the other wins the battle, with a draw occurring if both occupy the same number of passes.
They must simultaneously decide how to allocate their legions over the passes without knowing what the other will do. The goal is to win the most passes (or maximize some payoff based on the passes won). It makes sense to randomize the allocations to avoid being predictable. By randomized allocations, we mean that Antigonus is making his choices at random according to some probability distribution. Of course, the same applies to Brasidas. The problem is that they must decide on the probability distributions for their allocations. The probability distributions are defined over the set of all possible allocations of legions to the three passes.
(You can rethink the problem in the context of election or marketing competitions, if you like.)
Your tasks are:
Determine how many different allocations each opponent has.
Build the payoff matrix with Antigonus’ allocations on the rows and Brasidas’ allocations on the columns. Each element of the matrix gets 1 if Antigonus’ allocation wins, -1 if Brasidas’ allocation wins, and 0 otherwise.
Model the problem of finding the optimal strategy for Antigonus as a linear programming problem. Assume that Antigonus, when playing some randomized allocation, expects Brasidas to play a best response against this allocation. The goal of Antigonus is to maximize the worst-case expected payoff, say \(v\). In this way, by adopting his optimal strategy, he can assure himself of winning \(v\) on average.
Solve the linear programming problem with the tools from the Python module scipy.optimize and highlight the optimal strategy found for Antigonus and the optimal value, that is, the expected payoff (expected win for Antigonus).
Try changing the number of legions. Instead of seven, try with other numbers, say five, six, eight, nine. Do you find a case in which the optimal allocation strategy is deterministic, that is, a pure unique allocation?
Model the problem from the point of view of Brasidas, assuming that he also uses a randomized strategy. Use the same payoff matrix constructed for Antigonus. What is the expected payoff for him? Does the result make sense?
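A possible end-to-end sketch of the first four tasks (the helper names are mine; the LP maximizes the game value \(v\) subject to \(p\) being a probability distribution over Antigonus’ allocations):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def allocations(legions=7, passes=3):
    """All ways of splitting `legions` over `passes` mountain passes."""
    return [a for a in itertools.product(range(legions + 1), repeat=passes)
            if sum(a) == legions]

def payoff(a, b):
    """+1 if allocation a occupies more passes than b, -1 if fewer, 0 on a draw."""
    wins = sum(ai > bi for ai, bi in zip(a, b))
    losses = sum(ai < bi for ai, bi in zip(a, b))
    return int(np.sign(wins - losses))

allocs = allocations()                 # 36 allocations for 7 legions, 3 passes
P = np.array([[payoff(a, b) for b in allocs] for a in allocs])

# LP: maximize v subject to (P^T p)_j >= v for every column j (each pure
# response of Brasidas), sum(p) = 1, p >= 0. Variables z = (p, v);
# linprog minimizes, so the objective is -v.
n = len(allocs)
c = np.zeros(n + 1); c[-1] = -1.0
A_ub = np.hstack([-P.T, np.ones((n, 1))])   # v - (P^T p)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]   # p >= 0, v free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("game value v =", -res.fun)
print("support of the optimal strategy:",
      [allocs[i] for i in range(n) if res.x[i] > 1e-9])
```

Since the game is symmetric (the payoff matrix is skew-symmetric), the optimal value is 0: with identical forces neither side can guarantee a positive expected win.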