AI505/AI801, Optimization – Exercise Sheet 08

Published

April 15, 2026

Exercises marked with the symbol \(^+\) are to be done at home before the class. Exercises marked with the symbol \(^*\) will be tackled in class. The remaining exercises are left for self-training after the exercise class. Some exercises are from the textbook, and their number is reported; their solutions are given at the end of the book.

1 \(^+\) (10.9) 

Solve the constrained optimization problem

\[\begin{aligned} \underset{x}{\text{minimize}} \:\:&\sin\left(\frac 4x\right)\\ \text{subject to} \:\: & x\in [1,10] \end{aligned}\]

using both the transform \(x = t_{a,b}(\hat{x})\) and a sigmoid transform for constraint bounds \(x \in [a, b]\): \[x=s(\hat{x})=a+\frac{b-a}{1+\exp{(-\hat{x})}}.\]

Why is the \(t_{a,b}\) transform better in this case than the sigmoid transform?
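A minimal numerical sketch in Python of the sigmoid approach (a crude grid search over \(\hat x\) stands in for a proper 1-D optimizer; the function names are our own). Note how the transformed minimizer can only approach the boundary \(x = 1\), never reach it:

```python
import math

def sigmoid_transform(xhat, a=1.0, b=10.0):
    # maps the whole real line into the open interval (a, b)
    return a + (b - a) / (1.0 + math.exp(-xhat))

def f(x):
    return math.sin(4.0 / x)

# crude unconstrained search over xhat in [-20, 20]
fbest, xhat_best = min((f(sigmoid_transform(xh)), xh)
                       for xh in [i * 0.01 - 20 for i in range(4001)])
x_best = sigmoid_transform(xhat_best)
print(x_best, fbest)  # x approaches the boundary a = 1 but never reaches it
```

The search drives \(\hat x\) toward \(-\infty\), since the constrained minimizer lies on the boundary, which the sigmoid cannot attain at any finite \(\hat x\).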

2 \(^+\) (10.1) 

Solve

\[\begin{aligned} \underset{x}{\text{minimize}} \:\:&x\\ \text{subject to} \:\: & x\geq 0 \end{aligned}\]

using the quadratic penalty method with \(\rho > 0\). Solve the problem in closed form.
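A Python sketch to cross-check the closed-form answer numerically (grid search stands in for an unconstrained optimizer; the helper names are our own):

```python
def penalized(x, rho):
    # quadratic penalty for the constraint x >= 0: p(x) = max(0, -x)^2
    return x + rho * max(0.0, -x) ** 2

def argmin_grid(rho, lo=-2.0, hi=1.0, n=300001):
    # crude 1-D grid search; any unconstrained optimizer would do
    step = (hi - lo) / (n - 1)
    return min((lo + i * step for i in range(n)),
               key=lambda x: penalized(x, rho))

results = {rho: argmin_grid(rho) for rho in (1.0, 10.0, 100.0)}
print(results)  # minimizers approach 0 from below as rho grows
```

The numerical minimizers should match the closed-form solution, each sitting slightly on the infeasible side and converging to the constrained optimum as \(\rho \to \infty\).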

3 \(^*\) (10.11) 

Suppose we want to minimize \(x_1^3 + x_2^2 + x_3\) subject to the constraint that \(x_1 + 2x_2 + 3x_3 = 6\). How might we transform this into an unconstrained problem with the same minimizer?

Solve the transformed problem numerically.
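One possible transformation eliminates \(x_3\) via the equality constraint. A Python sketch of that route using plain gradient descent (step size and starting point are our own choices; since \(x_1^3\) is unbounded below, only a local minimizer can be found):

```python
def obj(x1, x2):
    # x3 eliminated via the equality constraint: x3 = (6 - x1 - 2*x2) / 3
    x3 = (6.0 - x1 - 2.0 * x2) / 3.0
    return x1 ** 3 + x2 ** 2 + x3

def grad(x1, x2):
    # gradient of the reduced two-variable objective
    return (3.0 * x1 ** 2 - 1.0 / 3.0, 2.0 * x2 - 2.0 / 3.0)

x1, x2 = 1.0, 0.0  # start near the local minimizer
for _ in range(2000):
    g1, g2 = grad(x1, x2)
    x1 -= 0.05 * g1
    x2 -= 0.05 * g2
print(x1, x2)
```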

4 \(^*\)  (10.13)

Consider using a penalty method to optimize

\[\begin{aligned} \underset{x}{\text{minimize}} \:\:&1-x^2\\ \text{subject to} \:\: & |x|\leq 2 \end{aligned}\]

Optimization with the penalty method typically involves running several optimizations with increasing penalty weights. Impatient engineers may wish to optimize once using a very large penalty weight. Explain what issues are encountered for both the count penalty method and the quadratic penalty method.

Implement the method and solve the problem numerically.
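A minimal Python sketch of the quadratic-penalty loop with increasing weights (grid search over \([0, 4]\) restricts attention to the positive branch by symmetry; the helper names are our own):

```python
def penalized(x, rho):
    # quadratic penalty for |x| <= 2: p(x) = max(0, |x| - 2)^2
    return 1.0 - x ** 2 + rho * max(0.0, abs(x) - 2.0) ** 2

def argmin_grid(rho, lo=0.0, hi=4.0, n=400001):
    # crude 1-D grid search; any unconstrained optimizer would do
    step = (hi - lo) / (n - 1)
    return min((lo + i * step for i in range(n)),
               key=lambda x: penalized(x, rho))

xs = [argmin_grid(rho) for rho in (10.0, 100.0, 1000.0)]
print(xs)  # minimizers approach the boundary x = 2 from outside
```

Every penalized minimizer is slightly infeasible; increasing \(\rho\) pushes it toward the boundary, which illustrates why a single run with one huge weight is not a substitute for the schedule of increasing weights.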

5 \(^*\)  

Verify the numerical results of Examples 10.3 and 10.4 (pages 172–173) of the textbook.

6 \(^*\)  

Derive the Karush-Kuhn-Tucker conditions (FONCs) for constrained maximization problems.

7

Write the KKT conditions for the problem:

\[ \begin{aligned} \underset{\vec x}{\text{minimize}} \:\:&f(\vec x)\\ \text{subject to} \:\: & \vec g(\vec x)\leq \vec b \\ &\vec h(\vec x)=\vec 0\\ &\vec x \geq \vec 0. \end{aligned} \]

Note the additional requirement \(\vec x \geq \vec 0\) and the right-hand-side vector \(\vec b\), compared with the problem seen in class.

8 \(^*\)  

Consider the following constrained optimization problem: \[\min\;\left(x_1-\frac{3}{2}\right)^2+\left(x_2-\frac 12\right)^4 \quad \text{s.t.}\quad \begin{bmatrix} 1-x_1-x_2\\ 1-x_1+x_2\\ 1+x_1-x_2\\ 1+x_1+x_2\\ \end{bmatrix}\geq 0\] Plot the problem and determine the optimal solution by reasoning on the plot. Then show that the KKT conditions hold at the optimal solution.
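If no plotting library is at hand, a brute-force Python check over a grid of the feasible region gives the same picture (grid resolution and names are our own choices; this is a sanity check, not a proof):

```python
def f(x1, x2):
    return (x1 - 1.5) ** 2 + (x2 - 0.5) ** 4

def feasible(x1, x2):
    # the four linear constraints, each required to be >= 0
    return (1 - x1 - x2 >= 0 and 1 - x1 + x2 >= 0
            and 1 + x1 - x2 >= 0 and 1 + x1 + x2 >= 0)

n = 401  # grid step 0.005 over the square [-1, 1]^2
pts = ((-1 + 2 * i / (n - 1), -1 + 2 * j / (n - 1))
       for i in range(n) for j in range(n))
x_best = min((p for p in pts if feasible(*p)), key=lambda p: f(*p))
print(x_best)  # a vertex of the feasible diamond, with two constraints active
```

The grid minimizer lands on a vertex where two constraints are active, which is the point at which to check the KKT conditions by hand.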

9

Consider the problem

\[\begin{aligned} \underset{(x_1,x_2)}{\min}\:\:&0.5(x_1^2 + x_2^2)\\ \text{subject to}\:\:& x_1-1 \geq 0. \end{aligned}\]

Write the Lagrangian function and the dual function, and solve the dual problem.
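A quick numerical sanity check in Python, assuming the inner minimization over \(\vec x\) has already been carried out analytically, attained at \(x_1 = \mu\), \(x_2 = 0\) (a sketch to verify your derivation, not a full solution):

```python
def primal(x1, x2):
    return 0.5 * (x1 ** 2 + x2 ** 2)

def dual(mu):
    # dual function, with the inner minimizer x1 = mu, x2 = 0 substituted in
    return 0.5 * mu ** 2 - mu * (mu - 1.0)

# maximize the dual over mu >= 0 by grid search
mus = [i * 0.001 for i in range(3001)]
mu_star = max(mus, key=dual)
print(mu_star, dual(mu_star))
```

The dual optimum should match the primal optimal value, since the problem is convex with a strictly feasible point (strong duality holds).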