
I'm interested in the optimization problem

$$\begin{array}{ll} \text{maximize} & x^T M \, x\\ \text{subject to} & \| x \| \leq 1\\ & x \geq 0\end{array}$$

where $M$ is positive definite. As I understand it, this isn't a convex problem, but the constraints should define a convex set. Is there a well-known way of solving this sort of problem?

For those curious about how I might have gotten here: I have $c$ sensors, each of which has $k$ features. I want to find a convex combination of the $k$ features per sensor that maximizes the distance between some patterns over the sensors.

Comments:

  • This seems relevant: mathoverflow.net/questions/145028/… (Nov 1, 2017 at 21:34)
  • It is easy to see that the maximum is attained where $\|x\|=1$. Then maybe you can use Lagrange multipliers for $x^Tx=1$ and see what happens? – supinf (Nov 2, 2017 at 13:27)
  • Maybe the Wikipedia article on quadratically constrained problems could be helpful as well: en.wikipedia.org/wiki/… – supinf (Nov 2, 2017 at 13:29)
  • @supinf With the Lagrange multiplier approach I can't use the positivity constraint, though, right? (Nov 2, 2017 at 17:25)
  • The objective is convex, but I'm maximizing rather than minimizing, so standard convex solvers (or at least cvxpy) won't accept it. (Nov 4, 2017 at 16:16)

1 Answer


Given,

$$\begin{array}{ll} \text{maximize} & x^T M \, x\\ \text{subject to} & \| x \| \leq 1\\ & x \geq 0\end{array}$$

We can rewrite the constraint $\|x\|\le1$ as $\sum_{i=1}^nx_i^2\le1$, and expand the objective:

$$[x_1,\ldots,x_n]\begin{bmatrix}m_{11}&\cdots &m_{1n}\\\vdots&\ddots&\vdots\\m_{n1}&\cdots&m_{nn}\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}$$

$$=x_1\sum_{i=1}^nx_im_{1i} + x_2\sum_{i=1}^nx_im_{2i} + \cdots+x_n\sum_{i=1}^nx_im_{ni}$$
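As a quick numerical sanity check of this expansion (illustrative only; the random $M$ and $x$ below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))  # any square matrix; the expansion holds for general M
x = rng.standard_normal(4)

quadratic_form = x @ M @ x
expanded = sum(x[j] * sum(x[i] * M[j, i] for i in range(4)) for j in range(4))
assert np.isclose(quadratic_form, expanded)
```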

Negating the objective turns the maximization into the equivalent minimization problem:

$$ \begin{array}{ll} \text{minimize} & -\left(x_1\sum_{i=1}^nx_im_{1i} + x_2\sum_{i=1}^nx_im_{2i} + \cdots+x_n\sum_{i=1}^nx_im_{ni}\right)\\ \text{subject to} & \sum_{i=1}^nx_i^2\le1\\ & x \ge 0 \end{array}$$
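As a concrete sketch in Python/NumPy (the function names here are mine, not from any library), the negated objective and its gradient for symmetric $M$ are as follows; the gradient could later be handed to a solver as `jac`:

```python
import numpy as np

def neg_objective(x, M):
    """f(x) = -x^T M x, the quantity we minimize."""
    return -(x @ M @ x)

def neg_gradient(x, M):
    """Gradient of f; equals -2 M x when M is symmetric."""
    return -2.0 * (M @ x)
```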

We can then solve this model with a penalty method:

Initialization Step: Let $\epsilon>0$ be a termination scalar. Choose an initial point $x_1$, a penalty parameter $\mu_1>0$, and a scalar $\beta>1$. Let $k=1$ and go to the Main Step.

Main Step

  1. Starting with $x_k$, solve the following problem: $$\min f(x)+\mu_k\alpha(x)$$ Let $x_{k+1}$ be an optimal solution and go to Step $2$.
  2. If $\mu_k\alpha(x_{k+1})<\epsilon$, stop; otherwise set $\mu_{k+1} = \beta\mu_k$, replace $k$ by $k+1$, and go to Step $1$.

where

$$\alpha(x)=\left[\max\Big\{0,\ \sum_{i=1}^nx_i^2-1\Big\}\right]^2$$

Note that this $\alpha$ penalizes only the norm constraint. If the inner minimization is carried out by an unconstrained method, the nonnegativity constraint $x\ge0$ must be penalized as well, e.g. by adding $\sum_{i=1}^n\left[\max\{0,-x_i\}\right]^2$ to $\alpha$.

A common choice is $\mu_1=10$ and $\beta = 2$. The inner problem $\min f(x)+\mu_k\alpha(x)$ can be solved with the conjugate gradient method of Fletcher and Reeves; a sketch follows.
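Here is a minimal end-to-end sketch in Python with SciPy, under the assumptions above. Note that SciPy's `method='CG'` is a nonlinear conjugate gradient routine (a Polak–Ribière variant rather than Fletcher–Reeves proper), the penalty here also covers $x\ge0$ as discussed, and the sample matrix $M$ is made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def alpha(x):
    """Penalty: norm-ball violation plus nonnegativity violations (the latter is my addition)."""
    ball = max(0.0, x @ x - 1.0) ** 2
    nonneg = np.sum(np.maximum(0.0, -x) ** 2)
    return ball + nonneg

def penalty_method(M, x0, mu=10.0, beta=2.0, eps=1e-8, max_outer=50):
    """Sequential penalty method; each inner solve uses nonlinear CG."""
    x = x0
    for _ in range(max_outer):
        res = minimize(lambda z: -(z @ M @ z) + mu * alpha(z), x, method="CG")
        x = res.x
        if mu * alpha(x) < eps:  # stopping test from the Main Step
            return x
        mu *= beta
    return x

# Illustrative run: random symmetric positive definite M.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T + 5.0 * np.eye(5)
x_star = penalty_method(M, x0=np.full(5, 0.5))
print(x_star, x_star @ M @ x_star)
```

Since $M$ is positive definite, the maximizer lies on the boundary $\|x\|=1$ (as supinf's comment notes), so the returned point should have norm close to $1$; penalty iterates typically approach the feasible region from outside.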


