Convex Optimization - L5 Duality

1. Lagrange dual problem

1.1 Lagrangian

Standard form problem (not necessarily convex)

\[\begin{array}{cll} \min & f_{0}(\boldsymbol{x}) & \\ \text{s.t.} & f_i(\boldsymbol{x}) \leq 0, & i = 1,2,\cdots,m \\ & h_i(\boldsymbol{x}) = 0, & i = 1,2,\cdots,p \end{array} \]

where \(\boldsymbol{x} \in \mathbb{R}^{n}\), and its domain \(\mathcal{D} = \left(\cap_{i=0}^{m} \text{dom} \, f_i \right) \cap \left(\cap_{i=1}^{p} \text{dom} \, h_i \right) \neq \varnothing\)

Lagrangian function : \(L: \mathbb{R}^{n} \times \mathbb{R}^{m} \times \mathbb{R}^{p} \rightarrow \mathbb{R}\), with \(\text{dom} \, L = \mathcal{D} \times \mathbb{R}^{m} \times \mathbb{R}^{p}\)

\[L(\boldsymbol{x}, \boldsymbol{\lambda}, \boldsymbol{\nu}) = f_0(\boldsymbol{x}) + \sum_{i=1}^{m} \lambda_i f_i(\boldsymbol{x}) + \sum_{i=1}^{p} \nu_i h_i(\boldsymbol{x}) \]

is the weighted sum of the objective and constraint functions.

  • \(\lambda_i\) is the Lagrange multiplier associated with \(f_i(\boldsymbol{x}) \leq 0\)
  • \(\nu_i\) is the Lagrange multiplier associated with \(h_i(\boldsymbol{x}) = 0\)

1.2 Lagrange dual function

Lagrange dual function \(g : \mathbb{R}^{m} \times \mathbb{R}^{p} \rightarrow \mathbb{R}\):

\[g(\boldsymbol{\lambda}, \boldsymbol{\nu}) = \inf_{x \in \mathcal{D}} L(\boldsymbol{x}, \boldsymbol{\lambda}, \boldsymbol{\nu}) = \inf_{x \in \mathcal{D}} \left( f_0(\boldsymbol{x}) + \sum_{i=1}^{m} \lambda_i f_i(\boldsymbol{x}) + \sum_{i=1}^{p} \nu_i h_i(\boldsymbol{x}) \right) \]

\(g\) can be \(-\infty\) for some \(\boldsymbol{\lambda}\), \(\boldsymbol{\nu}\). \(g\) is concave even if the primal problem is not convex, since it is a pointwise infimum of functions that are affine in \((\boldsymbol{\lambda}, \boldsymbol{\nu})\).
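As an illustrative sanity check (a toy instance of my own, assuming NumPy is available), the snippet below approximates \(g(\lambda)\) on a grid for a nonconvex primal and verifies midpoint concavity:

```python
import numpy as np

# Toy nonconvex primal (illustrative, not from the notes):
#   minimize x^4 - x^2   subject to   x - 2 <= 0
# Approximate g(lam) = inf_x [x^4 - x^2 + lam*(x - 2)] on a fine grid.
xs = np.linspace(-3.0, 3.0, 20001)

def g(lam):
    return float(np.min(xs**4 - xs**2 + lam * (xs - 2.0)))

# g is a pointwise infimum of functions affine in lam, hence concave,
# even though the objective x^4 - x^2 is not convex.
lams = np.linspace(0.0, 5.0, 21)
for l1 in lams:
    for l2 in lams:
        assert g(0.5 * (l1 + l2)) >= 0.5 * (g(l1) + g(l2)) - 1e-9
```

The discretized \(g\) is exactly a minimum of finitely many affine functions of \(\lambda\), so the concavity check holds regardless of grid resolution.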

Lower bound property : If \(\boldsymbol{\lambda} \succeq \boldsymbol{0}\), then \(g(\boldsymbol{\lambda}, \boldsymbol{\nu}) \leq p^*\)

Proof : If \(\tilde{\boldsymbol{x}}\) is feasible and \(\boldsymbol{\lambda} \succeq \boldsymbol{0}\), then \(\lambda_i f_i(\tilde{\boldsymbol{x}}) \leq 0\) and \(h_i(\tilde{\boldsymbol{x}}) = 0\), so

\[f_0(\tilde{\boldsymbol{x}}) \geq L(\tilde{\boldsymbol{x}}, \boldsymbol{\lambda}, \boldsymbol{\nu}) \geq \inf_{\boldsymbol{x} \in \mathcal{D}} L(\boldsymbol{x}, \boldsymbol{\lambda}, \boldsymbol{\nu}) = g(\boldsymbol{\lambda}, \boldsymbol{\nu}) \]

Minimizing the left-hand side over all feasible \(\tilde{\boldsymbol{x}}\) gives \(p^* \geq g(\boldsymbol{\lambda}, \boldsymbol{\nu})\).
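A minimal numeric sketch of the lower-bound property (toy problem and numbers of my own, assuming NumPy):

```python
import numpy as np

# Toy problem: minimize x^2 subject to 1 - x <= 0, so p* = 1 (at x = 1).
# L(x, lam) = x^2 + lam*(1 - x); the minimizer over x is x = lam/2, giving
#   g(lam) = lam - lam^2/4.
p_star = 1.0

def g(lam):
    return lam - lam**2 / 4.0

# Lower-bound property: g(lam) <= p* for every lam >= 0.
lams = np.linspace(0.0, 10.0, 101)
assert np.all(g(lams) <= p_star + 1e-12)

# The bound is tight at lam = 2: g(2) = 1 = p*  (strong duality holds here,
# since e.g. x = 2 is strictly feasible, satisfying Slater's condition).
assert abs(g(2.0) - p_star) < 1e-12
```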

2. The Lagrange dual problem

\[\begin{array}{cll} \max \limits_{\boldsymbol{\lambda}, \, \boldsymbol{\nu}} & g(\boldsymbol{\lambda}, \boldsymbol{\nu}) & \\ \text{s.t.} & \boldsymbol{\lambda} \succeq \boldsymbol{0} \end{array} \]

  • finds best lower bound on \(p^*\), obtained from Lagrange dual function
  • a convex optimization problem; optimal value denoted \(d^*\)
  • \(\boldsymbol{\lambda}\), \(\boldsymbol{\nu}\) are dual feasible if \(\boldsymbol{\lambda} \succeq \boldsymbol{0}\) and \((\boldsymbol{\lambda}, \boldsymbol{\nu}) \in \text{dom } g\)
  • often simplified by making the implicit constraint \((\boldsymbol{\lambda}, \boldsymbol{\nu}) \in \text{dom } g\) explicit

2.1 Weak and strong duality

Weak duality : \(d^* \leq p^*\)

  • always holds (for convex and nonconvex problems)
  • can be used to find nontrivial lower bounds for difficult problems

Strong duality: \(d^* = p^*\)

  • does not hold in general
  • (usually) holds for convex problems
  • conditions that guarantee strong duality in convex problems are called constraint qualifications

2.2 Slater's constraint qualification

Strong duality holds for a convex problem:

\[\begin{array}{cll} \min & f_{0}(\boldsymbol{x}) & \\ \text{s.t.} & f_i(\boldsymbol{x}) \leq 0, & i = 1,2,\cdots,m \\ & \mathbf{A}\boldsymbol{x} = \boldsymbol{b} \end{array} \]

if it is strictly feasible, i.e.,

\[\exists \boldsymbol{x} \in \text{int} \mathcal{D}: \quad f_i(\boldsymbol{x}) < 0, i = 1,2,\cdots,m \, \text{ and } \, \mathbf{A}\boldsymbol{x} = \boldsymbol{b} \]

  • also guarantees that the dual optimum is attained (if \(p^* > - \infty\))

  • can be sharpened: e.g., can replace \(\text{int} \mathcal{D}\) with \(\text{relint} \mathcal{D}\) (interior relative to affine hull); linear inequalities do not need to hold with strict inequality, ...

There exist many other types of constraint qualifications

2.3 Examples

(1) Inequality form LP

Primal problem

\[\begin{array}{cll} \min \limits_{\boldsymbol{x}} & \boldsymbol{c}^{\top} \boldsymbol{x} \\ \text{s.t.} & \boldsymbol{Ax} \preceq \boldsymbol{b} \end{array} \]

The dual function \(g(\boldsymbol{\lambda})\) is

\[g(\boldsymbol{\lambda}) = \inf_{x} \left\{ \boldsymbol{c}^{\top} \boldsymbol{x} + \boldsymbol{\lambda}^{\top} (\boldsymbol{A} \boldsymbol{x} - \boldsymbol{b}) \right\} = \inf_{x} \left\{ (\boldsymbol{c} + \boldsymbol{A}^{\top} \boldsymbol{\lambda} )^{\top} \boldsymbol{x}-\boldsymbol{b}^{\top} \boldsymbol{\lambda} \right\} = \begin{cases} - \boldsymbol{b}^{\top} \boldsymbol{\lambda}, & \boldsymbol{A}^{\top} \boldsymbol{\lambda} + \boldsymbol{c} =0 \\ -\infty, & \text { otherwise } \end{cases} \]

The dual problem is:

\[\begin{array}{cll} \max \limits_{\boldsymbol{\lambda}} & -\boldsymbol{b}^{\top} \boldsymbol{\lambda} \\ \text{s.t.} & \boldsymbol{A^{\top} \lambda} + \boldsymbol{c} = \boldsymbol{0} \\ & \boldsymbol{\lambda} \succeq \boldsymbol{0} \end{array} \]

  • from Slater's condition: \(p^*=d^*\) if \(\boldsymbol{A \tilde{x}} \prec \boldsymbol{b}\) for some \(\boldsymbol{\tilde{x}}\)

  • in fact, \(p^*=d^*\) except when both primal and dual are infeasible
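The LP dual pair above can be checked numerically; the sketch below (data chosen arbitrarily for illustration, and assuming SciPy is available) solves both problems and compares optimal values:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative inequality-form LP:  minimize c^T x  s.t.  A x <= b
c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0],
              [0.0, -1.0],
              [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])   # encodes x >= 0 and x1 + x2 >= 1

# Primal (x is free, so override linprog's default nonnegativity bounds)
primal = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))

# Dual:  maximize -b^T lam  s.t.  A^T lam + c = 0, lam >= 0;
# linprog minimizes, so minimize b^T lam and negate the optimal value.
dual = linprog(b, A_eq=A.T, b_eq=-c, bounds=(0, None))

p_star = primal.fun
d_star = -dual.fun
assert abs(p_star - d_star) < 1e-8   # strong duality: p* = d*
```

Slater's condition holds here (e.g. \(\tilde{\boldsymbol{x}} = (1, 1)\) gives \(\boldsymbol{A}\tilde{\boldsymbol{x}} \prec \boldsymbol{b}\)), so the match is expected.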

(2) Quadratic program

A primal problem is (assume \(\boldsymbol{P} \in \mathbb{S}^{n}_{++}\), i.e., \(\boldsymbol{P}\) is a symmetric positive definite matrix)

\[\begin{array}{cll} \min \limits_{\boldsymbol{x}} & \boldsymbol{x}^{\top} \boldsymbol{P} \boldsymbol{x} \\ \text{s.t.} & \boldsymbol{Ax} \preceq \boldsymbol{b} \end{array} \]

The dual function \(g(\boldsymbol{\lambda})\) is:

\[g(\boldsymbol{\lambda}) = \inf_{x} \left\{ \boldsymbol{x}^{\top} \boldsymbol{P} \boldsymbol{x} + \boldsymbol{\lambda}^{\top} (\boldsymbol{A} \boldsymbol{x} - \boldsymbol{b}) \right\} = -\frac{1}{4} \boldsymbol{\lambda}^{\top} \boldsymbol{A} \boldsymbol{P}^{-1} \boldsymbol{A}^{\top} \boldsymbol{\lambda} - \boldsymbol{b}^{\top} \boldsymbol{\lambda} \]

Note that \(h(\boldsymbol{x})=\boldsymbol{x}^{\top} \boldsymbol{P} \boldsymbol{x} + \boldsymbol{\lambda}^{\top} (\boldsymbol{A} \boldsymbol{x} - \boldsymbol{b})\) is a quadratic function of \(\boldsymbol{x}\). Setting \(\dfrac{\partial h(\boldsymbol{x})}{\partial \boldsymbol{x}} = 2\boldsymbol{P}\boldsymbol{x} + \boldsymbol{A}^{\top}\boldsymbol{\lambda} = \boldsymbol{0}\) gives the minimizer \(\boldsymbol{x} = -\dfrac{1}{2} \boldsymbol{P}^{-1} \boldsymbol{A}^{\top} \boldsymbol{\lambda}\).

The dual problem is:

\[\begin{array}{cll} \max \limits_{\boldsymbol{\lambda}} & -\dfrac{1}{4} \boldsymbol{\lambda}^{\top} \boldsymbol{A} \boldsymbol{P}^{-1} \boldsymbol{A}^{\top} \boldsymbol{\lambda} - \boldsymbol{b}^{\top} \boldsymbol{\lambda} \\ \text{s.t.} & \boldsymbol{\lambda} \succeq \boldsymbol{0} \end{array} \]

  • from Slater's condition: \(p^*=d^*\) if \(\boldsymbol{A \tilde{x}} \prec \boldsymbol{b}\) for some \(\boldsymbol{\tilde{x}}\)

  • in fact, \(p^*=d^*\) always
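To check the closed-form dual function, the sketch below (random data of my own, assuming NumPy) evaluates the Lagrangian at the minimizer \(\boldsymbol{x} = -\tfrac{1}{2}\boldsymbol{P}^{-1}\boldsymbol{A}^{\top}\boldsymbol{\lambda}\) and compares it with the formula for \(g(\boldsymbol{\lambda})\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random instance (illustrative): P symmetric positive definite
n, m = 3, 4
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)          # P in S^n_++
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

lam = rng.uniform(0.0, 1.0, m)       # any lam >= 0

# Closed-form minimizer of L(x, lam) = x^T P x + lam^T (A x - b):
#   dL/dx = 2 P x + A^T lam = 0  =>  x = -(1/2) P^{-1} A^T lam
x_min = -0.5 * np.linalg.solve(P, A.T @ lam)
L_val = x_min @ P @ x_min + lam @ (A @ x_min - b)

# Dual function formula: g(lam) = -(1/4) lam^T A P^{-1} A^T lam - b^T lam
g_val = -0.25 * lam @ A @ np.linalg.solve(P, A.T @ lam) - b @ lam
assert abs(L_val - g_val) < 1e-10

# The gradient of the Lagrangian indeed vanishes at x_min.
grad = 2 * P @ x_min + A.T @ lam
assert np.allclose(grad, 0.0)
```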

(3) A nonconvex problem with strong duality

The trust region problem : the primal problem is:

\[\begin{array}{cll} \min \limits_{\boldsymbol{x}} & \boldsymbol{x^{\top} A x} + 2 \boldsymbol{b^{\top} x}\\ \text{s.t.} & \boldsymbol{x^{\top} x} \leq 1 \end{array} \]

where \(\boldsymbol{A} \in \mathbb{S}^{n}\) (\(\boldsymbol{A}\) is a symmetric matrix) and \(\boldsymbol{A} \nsucceq \boldsymbol{0}\), thus, this is not a convex problem.

The dual function \(g(\boldsymbol{\lambda})\) is:

\[\begin{split} g(\boldsymbol{\lambda}) &= \inf_{x} \left\{ \boldsymbol{x}^{\top} (\boldsymbol{A} + \lambda \boldsymbol{I}) \boldsymbol{x} + 2 \boldsymbol{b^{\top} x} - \lambda \right\} \\ &= \begin{cases} - \boldsymbol{b}^{\top} (\boldsymbol{A} + \lambda \boldsymbol{I})^{\dagger} \boldsymbol{b} - \lambda \ , & \text{if} \quad \boldsymbol{A} + \lambda \boldsymbol{I} \succeq \boldsymbol{0} \quad \text{and} \quad \boldsymbol{b} \in \mathcal{R}(\boldsymbol{A} + \lambda \boldsymbol{I}) \\ -\infty \ , & \text{otherwise} \end{cases} \end{split} \]

where \((\boldsymbol{A} + \lambda \boldsymbol{I})^{\dagger}\) is the pseudo-inverse of \(\boldsymbol{A} + \lambda \boldsymbol{I}\). When \(\boldsymbol{A} + \lambda \boldsymbol{I} \succeq \boldsymbol{0}\) and \(\boldsymbol{b} \in \mathcal{R}(\boldsymbol{A} + \lambda \boldsymbol{I})\), the infimum is attained at \(\boldsymbol{x} = -(\boldsymbol{A} + \lambda \boldsymbol{I})^{\dagger} \boldsymbol{b}\).

The dual problem and an equivalent semidefinite program (SDP) are:

\[\begin{array}{cll} \max \limits_{\lambda} & - \boldsymbol{b}^{\top} (\boldsymbol{A} + \lambda \boldsymbol{I})^{\dagger} \boldsymbol{b} - \lambda \\ \text{s.t.} & \boldsymbol{A} + \lambda \boldsymbol{I} \succeq \boldsymbol{0} \\ & \boldsymbol{b} \in \mathcal{R}(\boldsymbol{A} + \lambda \boldsymbol{I}) \end{array} \quad \Leftrightarrow \quad \begin{array}{cll} \max \limits_{\lambda, \ t} & - t - \lambda \\ \text{s.t.} & \begin{bmatrix} \boldsymbol{A} + \lambda \boldsymbol{I} & \boldsymbol{b} \\ \boldsymbol{b}^{\top} & t \end{bmatrix} \succeq \boldsymbol{0} \end{array} \]

  • strong duality holds although the primal problem is not convex (this is not easy to show)
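Strong duality can at least be observed numerically on a small instance (data of my own; a sketch, not a general trust-region solver — it assumes the solution lies on the unit sphere, which holds here because \(\boldsymbol{A}\) has a negative eigenvalue):

```python
import numpy as np

# Illustrative trust-region instance: A indefinite, so the problem is nonconvex.
A = np.diag([-2.0, 1.0])
b = np.array([1.0, 1.0])

def x_of(lam):          # stationary point of the Lagrangian for A + lam*I > 0
    return -np.linalg.solve(A + lam * np.eye(2), b)

def g(lam):             # dual function for A + lam*I > 0
    return -b @ np.linalg.solve(A + lam * np.eye(2), b) - lam

# Find lam > 2 (= -lambda_min(A)) with ||x(lam)|| = 1 by bisection.
# Note g'(lam) = ||x(lam)||^2 - 1, so this lam also maximizes the dual.
lo, hi = 2.0 + 1e-9, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.linalg.norm(x_of(mid)) > 1.0:
        lo = mid
    else:
        hi = mid
lam_star = 0.5 * (lo + hi)
x_star = x_of(lam_star)

p_star = x_star @ A @ x_star + 2 * b @ x_star   # primal value on boundary
d_star = g(lam_star)
assert abs(p_star - d_star) < 1e-6              # strong duality: p* = d*
```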

3. Geometric interpretation

We can give a simple geometric interpretation of the dual function in terms of a set \(\mathcal{G}\). Define:

\[\mathcal{G} = \left\{ \left(f_{1}(\boldsymbol{x}), \ldots, f_{m}(\boldsymbol{x}), h_{1}(\boldsymbol{x}), \ldots, h_{p}(\boldsymbol{x}), f_{0}(\boldsymbol{x})\right) \in \mathbb{R}^{m} \times \mathbb{R}^{p} \times \mathbb{R} \mid x \in \mathcal{D}\right\} \]

which is the set of values taken on by the constraint and objective functions.

Let \(f_{i}(\boldsymbol{x})=u_i\), \(h_{i}(\boldsymbol{x})=v_i\), \(f_{0}(\boldsymbol{x})=t\); then the optimal value of the primal problem, \(p^*\), can be expressed in terms of \(\mathcal{G}\) as:

\[p^* = \inf_{\boldsymbol{u}, \boldsymbol{v}, t} \left\{ t \mid (\boldsymbol{u}, \boldsymbol{v}, t) \in \mathcal{G}, \boldsymbol{u} \preceq \boldsymbol{0} \ , \boldsymbol{v} = \boldsymbol{0} \right\} \]

To evaluate the dual function \(g(\boldsymbol{\lambda}, \boldsymbol{\nu})\) at \((\boldsymbol{\lambda}, \boldsymbol{\nu})\), we can minimize the affine function

\[(\boldsymbol{\lambda}, \boldsymbol{\nu}, 1)^{\top} (\boldsymbol{u}, \boldsymbol{v}, t) = \sum_{i=1}^{m} \lambda_{i} u_{i}+\sum_{i=1}^{p} \nu_{i} v_{i}+t \]

over \((\boldsymbol{u}, \boldsymbol{v}, t) \in \mathcal{G}\). i.e.,

\[g(\boldsymbol{\lambda}, \boldsymbol{\nu}) = \inf_{\boldsymbol{u}, \boldsymbol{v}, t} \left\{ (\boldsymbol{\lambda}, \boldsymbol{\nu}, 1)^{\top} (\boldsymbol{u}, \boldsymbol{v}, t) \mid (\boldsymbol{u}, \boldsymbol{v}, t) \in \mathcal{G} \right\} \]

Therefore, we have

\[\begin{split} p^{\star} &= \inf_{\boldsymbol{u}, \boldsymbol{v}, t} \left\{ t \mid (\boldsymbol{u}, \boldsymbol{v}, t) \in \mathcal{G}, \boldsymbol{u} \preceq \boldsymbol{0}, \boldsymbol{v}=\boldsymbol{0} \right\} \\ & \geq \inf_{\boldsymbol{u}, \boldsymbol{v}, t} \left\{(\boldsymbol{\lambda}, \boldsymbol{\nu}, 1)^{\top}(\boldsymbol{u}, \boldsymbol{v}, t) \mid (\boldsymbol{u}, \boldsymbol{v}, t) \in \mathcal{G}, \boldsymbol{u} \preceq \boldsymbol{0}, \boldsymbol{v} = \boldsymbol{0} \right\} \\ & \geq \inf_{\boldsymbol{u}, \boldsymbol{v}, t} \left\{(\boldsymbol{\lambda}, \boldsymbol{\nu}, 1)^{T}(\boldsymbol{u}, \boldsymbol{v}, t) \mid (\boldsymbol{u}, \boldsymbol{v}, t) \in \mathcal{G} \right\} \\ &= g(\boldsymbol{\lambda}, \boldsymbol{\nu}) \end{split} \]
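The chain above can be traced on a sampled version of \(\mathcal{G}\) (a toy scalar problem of my own with no equality constraints, assuming NumPy):

```python
import numpy as np

# Toy problem: minimize x^2 s.t. 1 - x <= 0 (no equality constraints),
# so G = { (u, t) = (1 - x, x^2) : x in D }, sampled on a grid.
xs = np.linspace(-4.0, 4.0, 80001)
G = np.stack([1.0 - xs, xs**2], axis=1)   # rows are (u, t)

# p* = inf { t | (u, t) in G, u <= 0 }
p_star = G[G[:, 0] <= 0.0, 1].min()

# g(lam) = inf { lam*u + t | (u, t) in G }, here for lam = 2
lam = 2.0
g_lam = (lam * G[:, 0] + G[:, 1]).min()

# Weak duality read off from G: for u <= 0 and lam >= 0 we have
# lam*u + t <= t, and dropping the constraint u <= 0 can only
# decrease the infimum further.
assert g_lam <= p_star + 1e-9
```

For this instance both values equal 1, matching the strong duality seen in the earlier scalar example.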

4. Optimality conditions

4.2 Complementary slackness

Assume strong duality holds, \(\boldsymbol{x}^*\) is primal optimal, and \((\boldsymbol{\lambda}^*, \boldsymbol{\nu}^*)\) is dual optimal. Then:

\[\begin{split} f_{0} (\boldsymbol{x}^*) = g(\boldsymbol{\lambda}^*, \boldsymbol{\nu}^*) &=\inf_{\boldsymbol{x}} \left\{f_{0}(\boldsymbol{x})+\sum_{i=1}^{m} \lambda_{i}^{\star} f_{i}(\boldsymbol{x})+\sum_{i=1}^{p} \nu_{i}^* h_{i}(\boldsymbol{x}) \right\} \\ & \leq f_{0}(\boldsymbol{x}^*)+\sum_{i=1}^{m} \lambda_{i}^* f_{i}(\boldsymbol{x}^*)+\sum_{i=1}^{p} \nu_{i}^* h_{i}(\boldsymbol{x}^*) \\ &\leq f_{0}(\boldsymbol{x}^*) \end{split} \]

Hence, the two inequalities hold with equality, which implies:

  • \(\boldsymbol{x}^*\) minimizes \(L(\boldsymbol{x}, \boldsymbol{\lambda}^*, \boldsymbol{\nu}^*)\)

  • Complementary slackness: \(\lambda_i^* f_i(\boldsymbol{x}^*) = 0\) for \(i=1,2,\cdots, m\), that is:

\[\lambda_i^*> 0 \Rightarrow f_i(\boldsymbol{x}^*) = 0, \qquad f_i(\boldsymbol{x}^*) < 0 \Rightarrow \lambda_i^*= 0 \]
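Both implications can be seen on a one-variable example (a toy instance of my own; plain Python):

```python
# Toy convex problem: minimize (x - 2)^2 subject to x - 1 <= 0.
# Stationarity 2(x - 2) + lam = 0 with feasibility and lam >= 0 gives
# x* = 1, lam* = 2: the constraint is active and the multiplier positive.
x_star, lam_star = 1.0, 2.0
f1 = x_star - 1.0
assert lam_star * f1 == 0.0          # complementary slackness
assert lam_star > 0 and f1 == 0.0    # lam* > 0  =>  f1(x*) = 0

# Loosen the constraint to x - 3 <= 0: the unconstrained minimum x* = 2
# becomes feasible, the constraint is slack, and the multiplier is zero.
x_star2, lam_star2 = 2.0, 0.0
f1b = x_star2 - 3.0
assert f1b < 0 and lam_star2 == 0.0  # f1(x*) < 0  =>  lam* = 0
```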

4.3 Karush-Kuhn-Tucker (KKT) conditions

(1) KKT conditions for nonconvex problem

The following four conditions are called the KKT conditions (for a problem with differentiable objective and constraint functions \(f_0\), \(f_i\), \(h_i\)):

  • (1) primal constraints: \(f_i(\boldsymbol{x}) \leq 0, i = 1,2,\cdots,m\), \(h_i(\boldsymbol{x}) = 0, i = 1,2,\cdots,p\)

  • (2) dual constraints: \(\boldsymbol{\lambda} \succeq \boldsymbol{0}\)

  • (3) complementary slackness: \(\lambda_i f_i(\boldsymbol{x})=0, i=1,2,\cdots,m\)

  • (4) gradient of Lagrangian with respect to \(\boldsymbol{x}\) vanishes:

\[\frac{\partial L(\boldsymbol{x}, \boldsymbol{\lambda}, \boldsymbol{\nu})}{\partial \boldsymbol{x}} = \nabla f_0(\boldsymbol{x}) + \sum_{i=1}^{m} \lambda_i \nabla f_i(\boldsymbol{x}) + \sum_{i=1}^{p} \nu_i \nabla h_i(\boldsymbol{x}) = \boldsymbol{0} \]

If strong duality holds and \(\boldsymbol{x}\), \(\boldsymbol{\lambda}\), \(\boldsymbol{\nu}\) are optimal, then they must satisfy the KKT conditions. That is, for any optimization problem with differentiable objective and constraint functions for which strong duality obtains, any pair of primal and dual optimal points must satisfy the KKT conditions.
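As a concrete check (toy convex instance of my own, assuming NumPy), the four conditions can be verified at a known solution:

```python
import numpy as np

# Toy convex problem: minimize x1^2 + x2^2 subject to 1 - x1 - x2 <= 0.
# By symmetry the analytic solution is x* = (1/2, 1/2) with lam* = 1.
x = np.array([0.5, 0.5])
lam = 1.0

f1 = 1.0 - x[0] - x[1]             # inequality constraint value
grad_f0 = 2.0 * x                  # gradient of the objective
grad_f1 = np.array([-1.0, -1.0])   # gradient of the constraint

assert f1 <= 1e-12                                # (1) primal feasibility
assert lam >= 0.0                                 # (2) dual feasibility
assert abs(lam * f1) < 1e-12                      # (3) complementary slackness
assert np.allclose(grad_f0 + lam * grad_f1, 0.0)  # (4) stationarity
```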

(2) KKT conditions for convex problems

If \(\tilde{\boldsymbol{x}}\), \(\tilde{\boldsymbol{\lambda}}\), \(\tilde{\boldsymbol{\nu}}\) satisfy the KKT conditions for a convex problem, then they are optimal:

  • from complementary slackness: \(f_0(\tilde{\boldsymbol{x}}) = L(\tilde{\boldsymbol{x}}, \tilde{\boldsymbol{\lambda}}, \tilde{\boldsymbol{\nu}})\)

  • from the 4th KKT condition (and convexity): \(g(\tilde{\boldsymbol{\lambda}}, \tilde{\boldsymbol{\nu}}) = L(\tilde{\boldsymbol{x}}, \tilde{\boldsymbol{\lambda}}, \tilde{\boldsymbol{\nu}})\)

hence \(f_0(\tilde{\boldsymbol{x}}) = g(\tilde{\boldsymbol{\lambda}}, \tilde{\boldsymbol{\nu}})\), so the duality gap is zero and both points are optimal

If Slater's condition is satisfied: \(\boldsymbol{x}\) is optimal if and only if there exist \(\boldsymbol{\lambda}\), \(\boldsymbol{\nu}\) that satisfy KKT conditions

  • Slater implies strong duality, and dual optimum is attained

  • generalizes the optimality condition \(\nabla f_0(\boldsymbol{x}) = \boldsymbol{0}\) for unconstrained problems

5. Perturbation and sensitivity analysis
