
I'm trying to follow the derivation in the following Mosek link where a HARA utility optimization problem is reformulated using power cones (see the section titled "HARA utility as a Power cone"). We want to maximize the HARA utility function over $S$ scenarios, that is

$$\max_{\omega\in\mathbb{R}^{n}}\sum_{s=0}^{S}p_{s}\left(\frac{aW_{s}}{1-\gamma}+b\right)^{\gamma}$$

where the wealth $W_s$ depends on the decision vector $\omega$ and $p_s$ are the scenario probabilities.

According to the link, by introducing auxiliary variables $h_s$ this problem is equivalent to

$$\max_{\omega\in\mathbb{R}^{n}} \sum_{s=0}^{S}p_{s}\left(\frac{1-\gamma}{\gamma}\right)h_{s}$$

such that

$$h_{s}\leq\left(\frac{1-\gamma}{\gamma}\right)\left(\frac{aW_{s}}{1-\gamma}+b\right)^{\gamma}$$

for all scenarios $s$.

How can I prove this? I understand why the auxiliary variables are introduced, but where does the $\left(\frac{1-\gamma}{\gamma}\right)$ term come from?

Next, depending on the value of $\gamma$, the nonlinear constraint can be written as a power cone constraint:

Case 1: $\gamma > 1$ $$\left(h_{s},1,\left(\frac{aW_{s}}{1-\gamma}+b\right)\right)\in\mathcal{P}_{3}^{1/\gamma}$$

Case 2: $0 < \gamma < 1$ $$\left(\left(\frac{aW_{s}}{1-\gamma}+b\right),1,h_{s}\right)\in\mathcal{P}_{3}^{\gamma}$$

Case 3: $\gamma <0$ $$\left(h_{s},\left(\frac{aW_{s}}{1-\gamma}+b\right),1\right)\in\mathcal{P}_{3}^{1/\left(1-\gamma\right)}$$

How can this be proved?

The power cone is defined as

$$\mathcal{P}_{3}^{\alpha}=\left\{ x\in\mathbb{R}^{3}:x_{1}^{\alpha}x_{2}^{1-\alpha}\geq\left|x_{3}\right|,\;x_{1},x_{2}>0\right\}$$
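As a sanity check on this definition, here is a quick membership test in plain Python (the sample values and tolerance are arbitrary choices, not part of the Mosek example):

    def in_pow_cone(x, alpha, tol=1e-12):
        # Check (x1, x2, x3) against x1^alpha * x2^(1-alpha) >= |x3| with x1, x2 > 0.
        x1, x2, x3 = x
        return x1 > 0 and x2 > 0 and x1**alpha * x2**(1 - alpha) >= abs(x3) - tol

    # For 0 < gamma < 1, (t, 1, h) lies in P_3^gamma exactly when |h| <= t^gamma:
    t, gamma = 2.0, 0.5
    print(in_pow_cone((t, 1.0, t**gamma), gamma))        # True  (on the boundary)
    print(in_pow_cone((t, 1.0, t**gamma + 0.1), gamma))  # False (|h| exceeds t^gamma)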

  • There are some obvious typos and misplaced terms there, so it is better to just follow the Mosek modeling cookbook and do it from the definition. We will fix it. I will try to summarize the correct derivation below. – Commented Jul 28, 2023 at 6:43

1 Answer


First, according to the definition of HARA utility $U(W)$ in the preceding paragraph, the correct function to maximize is actually $$\mathrm{maximize} \sum p_s \left(\frac{1-\gamma}{\gamma}\right)\left(\frac{aW_s}{1-\gamma}+b\right)^\gamma.$$ Note that for $\gamma<0$ and $\gamma>1$ the power function is convex and the coefficient is negative, while for $0<\gamma<1$ the power function is concave and the coefficient is positive, so in all cases we are good to go for maximization and all we have to do is fit things into the power cone regime.
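To make the curvature claim concrete (assuming $a\neq 0$ and $\frac{aW}{1-\gamma}+b>0$, which the power requires anyway), differentiating twice gives

$$U'(W)=a\left(\frac{aW}{1-\gamma}+b\right)^{\gamma-1},\qquad U''(W)=-a^{2}\left(\frac{aW}{1-\gamma}+b\right)^{\gamma-2}<0,$$

so the full utility $U(W)$ is concave in $W$ for every admissible $\gamma$, even though the bare power term switches between convex and concave; and since each $W_s$ is affine in $\omega$, the objective is concave in $\omega$.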

The rest of the derivation attempts to do it together for all cases, but it may be easier to see that for $\gamma<0$ and $\gamma>1$ the problem is equivalent to

$$\mathrm{maximize} \sum p_s \left(\frac{1-\gamma}{\gamma}\right)u_s$$ $$\mathrm{s.t.}\quad u_s\geq\left(\frac{aW_s}{1-\gamma}+b\right)^\gamma$$

while for $0<\gamma<1$ it is

$$\mathrm{maximize} \sum p_s \left(\frac{1-\gamma}{\gamma}\right)u_s$$ $$\mathrm{s.t.}\quad u_s\leq\left(\frac{aW_s}{1-\gamma}+b\right)^\gamma$$

because of the sign of the coefficient (equivalently, you could drop the coefficient and replace maximization by minimization in the first case, where the coefficient is negative).
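To verify that the cone memberships you quote encode exactly these inequalities, write $t_{s}=\frac{aW_{s}}{1-\gamma}+b>0$ (just notation for the affine term) and check the definition of $\mathcal{P}_{3}^{\alpha}$ directly, with $u_s$ playing the role of $h_s$ (the rescaling to $h_s$ comes below):

For $\gamma>1$: $(u_{s},1,t_{s})\in\mathcal{P}_{3}^{1/\gamma}$ means $u_{s}^{1/\gamma}\cdot1^{1-1/\gamma}\geq\left|t_{s}\right|$, i.e. $u_{s}\geq\left|t_{s}\right|^{\gamma}\geq t_{s}^{\gamma}$; here the exponent $1/\gamma\in(0,1)$ as required.

For $0<\gamma<1$: $(t_{s},1,u_{s})\in\mathcal{P}_{3}^{\gamma}$ means $t_{s}^{\gamma}\geq\left|u_{s}\right|\geq u_{s}$, i.e. $u_{s}\leq t_{s}^{\gamma}$.

For $\gamma<0$: $(u_{s},t_{s},1)\in\mathcal{P}_{3}^{1/(1-\gamma)}$ means $u_{s}^{1/(1-\gamma)}\,t_{s}^{-\gamma/(1-\gamma)}\geq1$; raising both sides to the power $1-\gamma>0$ gives $u_{s}\,t_{s}^{-\gamma}\geq1$, i.e. $u_{s}\geq t_{s}^{\gamma}$. Here the exponent $1/(1-\gamma)\in(0,1)$ because $\gamma<0$.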

Then the inequality for $u_s$ is indeed expressed in power cone form exactly as you quote (see https://docs.mosek.com/modeling-cookbook/powo.html#powers), and finally $h_s=\left(\frac{1-\gamma}{\gamma}\right)u_s$ (which is also what is returned in the code later).
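For completeness, here is a minimal sketch of the $0<\gamma<1$ case using CVXPY's PowCone3D constraint rather than the Fusion code from the linked page; the scenario returns R, the long-only budget constraint, and all parameter values are made-up placeholders, not part of the original model:

    import cvxpy as cp
    import numpy as np

    # Toy data -- purely illustrative, not from the Mosek example.
    np.random.seed(0)
    n, S = 4, 50
    R = 1.0 + 0.05 * np.random.randn(S, n)   # gross returns per scenario (assumed)
    p = np.full(S, 1.0 / S)                  # scenario probabilities
    a, b, gamma = 1.0, 1.0, 0.5              # HARA parameters, 0 < gamma < 1 case

    w = cp.Variable(n, nonneg=True)          # portfolio weights
    u = cp.Variable(S)                       # hypograph variables u_s
    W = R @ w                                # scenario wealth W_s
    t = a * W / (1 - gamma) + b              # affine term inside the power

    constraints = [
        cp.sum(w) == 1,
        # (t_s, 1, u_s) in P_3^gamma gives |u_s| <= t_s^gamma;
        # the upper bound u_s <= t_s^gamma is what binds when maximizing.
        cp.constraints.PowCone3D(t, cp.Constant(np.ones(S)), u, gamma),
    ]
    objective = cp.Maximize((1 - gamma) / gamma * (p @ u))
    cp.Problem(objective, constraints).solve(solver=cp.SCS)
    print(w.value)

The same structure carries over to the other two cases by permuting the cone arguments as derived above and flipping the objective sign handling accordingly.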

