Given,
$$\begin{array}{ll} \text{maximize} & x^T M \, x\\ \text{subject to} & \| x \| \leq 1\\ & x \geq 0\end{array}$$
We can rewrite the constraint $\|x\|\le1$ as $\sum_{i=1}^nx_i^2\le1$, since both sides are nonnegative and squaring preserves the inequality.
$$[x_1,\ldots,x_n]\begin{bmatrix}m_{11}&\cdots &m_{1n}\\\vdots&\ddots&\vdots\\m_{n1}&\cdots&m_{nn}\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}$$
$$=x_1\sum_{i=1}^nx_im_{1i} + x_2\sum_{i=1}^nx_im_{2i}
+ \cdots+x_n\sum_{i=1}^nx_im_{ni}$$
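This expansion can be checked numerically; a quick sketch (the particular $M$, $x$, and dimension below are illustrative assumptions, not part of the derivation):

```python
import numpy as np

# Compare x^T M x against the expanded double sum
rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
x = rng.standard_normal(n)

quadratic_form = x @ M @ x
# sum over j of x_j * sum over i of x_i * m_{ji}
expanded = sum(x[j] * sum(x[i] * M[j, i] for i in range(n)) for j in range(n))

assert np.isclose(quadratic_form, expanded)
```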
Negating the objective to convert the maximization into a minimization, we obtain the equivalent model:
$$
\begin{array}{ll}
\text{minimize} & -\left(x_1\sum_{i=1}^nx_im_{1i} + x_2\sum_{i=1}^nx_im_{2i} + \cdots+x_n\sum_{i=1}^nx_im_{ni}\right)\\\\
\text{subject to} & \sum_{i=1}^nx_i^2\le1\\ & x \ge 0
\end{array}$$
Then, we can solve this model with the penalty function method:
Initialization Step: Let $\epsilon>0$ be a termination scalar. Choose an initial point $x_1$, a penalty parameter $\mu_1>0$, and a
scalar $\beta>1$. Let $k=1$ and go to the Main Step.
Main Step
1. Starting with $x_k$, solve the following unconstrained problem: $$\min f(x)+\mu_k\alpha(x)$$
Let $x_{k+1}$ be an optimal solution and go to Step $2$.
2. If $\mu_k\alpha(x_{k+1})<\epsilon$, stop; otherwise let $\mu_{k+1} = \beta\mu_k$, replace $k$ by $k+1$, and go to Step $1$.
where
$$\alpha(x)=\left[\max\{0, \sum_{i=1}^nx_i^2-1\}\right]^2$$
(The nonnegativity constraints $x\ge0$ can be penalized analogously, e.g. by adding $\sum_{i=1}^n[\max\{0,-x_i\}]^2$ to $\alpha$.)
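A minimal numerical sketch of this penalty loop follows. The specific $M$, the starting point, and the use of SciPy's nonlinear conjugate-gradient routine as the inner solver are all assumptions not fixed by the text (SciPy's `method="CG"` is a Polak-Ribière-type CG, standing in here for Fletcher-Reeves); the penalty also includes a term for $x\ge0$:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data: a small symmetric M (an assumption; any M works).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

def f(x):
    # negated objective: minimizing -x^T M x maximizes x^T M x
    return -x @ M @ x

def alpha(x):
    # quadratic penalty for sum(x_i^2) <= 1, plus an analogous
    # term for the nonnegativity constraints x >= 0
    return max(0.0, x @ x - 1.0) ** 2 + np.sum(np.maximum(0.0, -x) ** 2)

def penalty_method(x, mu=10.0, beta=2.0, eps=1e-8, max_outer=60):
    for _ in range(max_outer):
        # inner unconstrained subproblem, solved by SciPy's nonlinear CG
        x = minimize(lambda z: f(z) + mu * alpha(z), x, method="CG").x
        if mu * alpha(x) < eps:   # termination test from the Main Step
            break
        mu *= beta                # mu_{k+1} = beta * mu_k
    return x

x_star = penalty_method(np.array([0.5, 0.5]))
print(x_star, x_star @ M @ x_star)
```

For this $M$ the maximizer is $x=(1,1)/\sqrt2$ with objective value $3$, the largest eigenvalue of $M$.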
Typically we let $\mu_1=10$ and $\beta = 2$. The inner subproblem $\min f(x)+\mu_k\alpha(x)$ can be solved with the Conjugate Gradient Method of Fletcher and Reeves via: