All Questions

0 votes
0 answers
18 views

Deriving DFP updating formula

In A.Γ.'s answer to the minimization problem in the question Quasi-newton methods: Understanding DFP updating formula, I'm confused as to where the formula in step 3 is derived from. ...
Acki • 43
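For reference, the DFP update itself, in the standard notation $s_k = x_{k+1}-x_k$, $y_k = \nabla f(x_{k+1})-\nabla f(x_k)$ (the linked answer may use different symbols), acts on the inverse-Hessian approximation $H_k$ as
$$H_{k+1} = H_k - \frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k} + \frac{s_k s_k^T}{y_k^T s_k}.$$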
0 votes
0 answers
16 views

Do the solutions yielded by the augmented Lagrangian function fulfill the equality constraints?

Consider the following optimization problem, \begin{equation} \begin{aligned} \min_{x}\ & f(x),\\ \text{s.t.}\ &g(x)=0, \end{aligned} \end{equation} with smooth functions $f(x), g(x)$, which can ...
Jiayu Zou
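For context, a common form of the augmented Lagrangian for this problem (the standard definition, not taken from the question) is
$$L_\rho(x,\lambda) = f(x) + \lambda^T g(x) + \frac{\rho}{2}\|g(x)\|_2^2,$$
and its minimizers satisfy $g(x)=0$ exactly only when the multiplier estimate $\lambda$ is exact or in the limit $\rho \to \infty$; intermediate iterates are generally only approximately feasible.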
1 vote
0 answers
55 views

Seeking a Proof for Eigenvalue Bound in Theorem 8.3 from Nocedal & Wright (Broyden Class)

I am studying the Broyden class of quasi-Newton methods in Numerical Optimization by Nocedal and Wright, and I've come across Theorem 8.3, which does not include a detailed proof in the book. The ...
Math21 • 11
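For reference, the Broyden class in Nocedal & Wright is the one-parameter family interpolating the BFGS and DFP updates,
$$B_{k+1} = (1-\phi_k)\,B_{k+1}^{\mathrm{BFGS}} + \phi_k\,B_{k+1}^{\mathrm{DFP}},$$
with the restricted class taking $\phi_k \in [0,1]$.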
0 votes
1 answer
85 views

Projected Gradient Descent Applies on the Constraint $\boldsymbol{l} \leq \boldsymbol{X} \boldsymbol{t} \leq \boldsymbol{u}$

I understand that in projected gradient descent, I need to project my (intermediate) solution onto the feasible set at shortest distance, as here: What is the difference between projected gradient ...
Taylor Fang
0 votes
0 answers
37 views

Optimization of affine invariant metric

Notation: $\theta$: rotation angle; $s$: scale factor; $t = (t_x, t_y)$: translation vector; $R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$: ...
Andyale • 51
1 vote
0 answers
66 views

Numerical-stability motivation for using Log-Sum-Exp in Geometric Programming?

I am an engineer currently working on a network optimization problem. Recently, I have been learning about Geometric Programming (GP). It seems to me that there are two approaches for solving ...
Tuong Nguyen Minh
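For readers unfamiliar with the trick named in the title, a minimal sketch of the numerically stable log-sum-exp computation (the function name and test values are illustrative, not from the question):

```python
import numpy as np

def logsumexp(x):
    """Compute log(sum(exp(x))) without overflow by shifting by the max."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

# Naive evaluation overflows; the shifted version does not.
x = np.array([1000.0, 1000.1, 999.9])
print(logsumexp(x))  # ~1001.1, while np.log(np.sum(np.exp(x))) returns inf
```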
0 votes
0 answers
78 views

Is there a way to project onto the intersection of a box and a half-space in closed form?

My problem is to project a fixed point $\hat{x}\in\mathbb{R}^n$ onto the intersection of two closed convex sets, in particular a box and a half-space. Formally, we want to find $$ \arg\min_y\|\hat{x}-...
alice owo
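Not a closed form, but for comparison, a minimal sketch of Dykstra's alternating-projection method for the intersection of a box $[l,u]$ and a half-space $\{y : a^\top y \le b\}$ (all names here are illustrative):

```python
import numpy as np

def project_box_halfspace(x_hat, l, u, a, b, iters=500):
    """Dykstra's algorithm: converges to the projection of x_hat onto the
    intersection of the box [l, u] and the half-space {y : a @ y <= b}."""
    y = x_hat.copy()
    p = np.zeros_like(y)  # correction term for the box projection
    q = np.zeros_like(y)  # correction term for the half-space projection
    for _ in range(iters):
        z = np.clip(y + p, l, u)                 # project onto the box
        p = y + p - z
        v = z + q
        y = v - max(0.0, a @ v - b) / (a @ a) * a  # project onto half-space
        q = v - y
    return y
```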
2 votes
1 answer
127 views

How to deal with an ill-conditioned optimization problem (Sinkhorn barycenter)

Problem statement (debiased Sinkhorn barycenter, proposed by H. Janti et al.): Let $\alpha_1, \ldots, \alpha_K \in \Delta_n$ and $\mathbf{K}=e^{-\frac{\mathrm{C}}{\varepsilon}}$. Let $\pi$ denote a sequence ...
Tung Nguyen • 1,248
1 vote
1 answer
188 views

A hard optimization problem

Consider the following function, for $1\leq j \leq N$ $$\tag{1} y_j=\sum_{k=0}^{M} \frac{e^{-\sum_{|i|\leq k}(k-|i|)x_{j+i}/v}-e^{-\sum_{|i|\leq k}(k+1-|i|)x_{j+i}/v}}{\sum_{|i|\leq k} x_{j+i}} $$ for ...
sam wolfe • 3,585
0 votes
0 answers
32 views

Is it possible to convexify the inequality constraint $z \leq x^3 \cdot y$?

Is there a way to convexify the inequality constraint $z \leq x^3 \cdot y$ in a nonlinear optimization problem with $x, y, z$ being nonnegative variables?
Michael • 53
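One standard route, assuming strictly positive variables (slightly stronger than the nonnegativity stated in the question): the constraint is a monomial inequality, so the geometric-programming change of variables $x=e^u$, $y=e^v$, $z=e^w$ makes it linear, hence convex:
$$z \leq x^3 y \iff w \leq 3u + v.$$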
0 votes
1 answer
188 views

What's the purpose of the KKT conditions when the first-order optimality condition exists?

Given a convex optimization problem $$\min f(x), \quad x \in D$$ with $f, D$ convex. The first-order optimality condition says $x$ is a minimizer if and only if $\nabla f(x)^T (y-x) \geq 0\ \forall y\in D.$ ...
Shamisen Expert
1 vote
1 answer
96 views

Solving a convex problem with quasiconvexity with CVXPY?

I have a question regarding quasiconvexity and its usage in CVXPY. I have the following optimization problem. \begin{equation*} \begin{aligned} \min_{x} \quad & \sqrt x\\ \textrm{subject to:} \...
Michael • 53
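For context on the CVXPY mechanics, a minimal sketch of its disciplined quasiconvex programming (DQCP) mode, adapted from the example in the CVXPY documentation rather than from the question's exact problem:

```python
import cvxpy as cp

x = cp.Variable()
y = cp.Variable(pos=True)

# -sqrt(x)/y is quasiconvex (ratio of a concave numerator and a positive
# denominator), so the problem is solvable in CVXPY's DQCP mode.
problem = cp.Problem(cp.Minimize(-cp.sqrt(x) / y), [cp.exp(x) <= y])
assert problem.is_dqcp()
problem.solve(qcp=True)  # qcp=True enables the DQCP solution path
print(problem.value, x.value, y.value)
```

Whether the question's own objective passes the DQCP rules needs its own analysis; the point here is only the `qcp=True` mechanics.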
0 votes
0 answers
25 views

Gradient-based methods on quasi-convex problems?

In the book I am currently reading, it says: "Gradient-based methods are useful for one global optimum and no additional local optima: (quasi-)concave for maximums, (quasi-)convex for minimums." It is ...
Marlon Brando
2 votes
1 answer
132 views

Is it true that a function is $c$-strongly convex if $f - c\|x\|^2_p$ is convex for ANY norm $\|x\|_p$?

It is common knowledge that a function is $c$-strongly convex if $f - c\|x\|^2_2$ is convex. However, can we replace $\|x\|_2$ with any norm $\|x\|_p$? I strongly suspect this holds, but from ...
Shamisen Expert
1 vote
1 answer
66 views

Positive-semidefinite-constrained nuclear-norm-regularized optimization problem [closed]

I want to solve the following optimization problem: Let $S \in \mathbb{R}^{p \times p}$ be a symmetric matrix, and fix $n \in \mathbb{R}$ such that $n$ is significantly less than $p$ and the rank of $...
Eco-nometrician
0 votes
1 answer
36 views

What are the generic methods to prove solution existence?

Suppose we are in $\mathbb R^n$ and consider $$\min f(x) \qquad \text{s.t.} \qquad x\in P.$$ Under what conditions on $f$ and $P$ can we guarantee that this problem attains a solution? The most generic ...
Sam • 381
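Two textbook sufficient conditions, stated here for reference: (i) Weierstrass: if $f$ is continuous and $P$ is nonempty and compact, the minimum is attained; (ii) coercivity: if $f$ is lower semicontinuous with $f(x)\to\infty$ as $\|x\|\to\infty$ and $P$ is nonempty and closed, the minimum is again attained.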
2 votes
1 answer
52 views

Least-squares regression with stability and non-negativity constraints

I am trying to fit an auto-regressive model to a time-series where I have some constraints. We have the first order model, $$ X_{t+1} = AX_t + \xi_t, $$ which I can pose as a least-squares ...
citizenfour
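A minimal sketch of one convex formulation (my assumption, not taken from the question): entrywise nonnegativity plus a spectral-norm bound $\|A\|_2 \le 1$, which is a sufficient but not necessary convex condition for stability:

```python
import cvxpy as cp
import numpy as np

def fit_stable_nonneg_ar(X):
    """Fit X[:, t+1] ~ A @ X[:, t] with A >= 0 entrywise and ||A||_2 <= 1.
    X: n x T array whose columns are the time series observations."""
    n = X.shape[0]
    A = cp.Variable((n, n), nonneg=True)
    residual = X[:, 1:] - A @ X[:, :-1]
    problem = cp.Problem(cp.Minimize(cp.sum_squares(residual)),
                         [cp.sigma_max(A) <= 1])
    problem.solve()
    return A.value
```

Note that $\|A\|_2 \le 1$ is stronger than the spectral-radius condition $\rho(A) < 1$; it is the price of convexity here.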
2 votes
1 answer
99 views

An idea for convex-concave saddle-point problem [closed]

For the following convex-concave problem $$ \min_{x\in\mathbb{R}^n}\max_{y\in\mathbb{R}^n} f(x)+\langle y,x\rangle-g(y) $$ Does the following algorithm achieve convergence? Under what conditions? $x_{...
Kamy • 31
1 vote
0 answers
63 views

In SVMs, why can't the Lagrange multipliers for support vectors be 0?

In hard margin SVMs, we have the primal optimization problem: \begin{align*} \min_{\vec{w}, b} \max_{\vec{\alpha}} \quad & \frac{||\vec{w}||^2}{2} + \sum_{i=1}^m \alpha_i \left( 1 - y_i (\vec{w} \...
Bob Smith
0 votes
1 answer
84 views

Nonlinear optimization with respect to a square matrix

Given vectors $\pmb{y}_0, \pmb{y}_1, \dots \pmb{y}_t \in \mathbb{R}^n$, let $F : \mathbb{R}^{n \times n} \to \mathbb{R}_0^+$ be defined by $$F(W) := \left\| \pmb{y}_1 - W \pmb{y}_0 \right\|^2 + \left\|...
Vladislav Bizin
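Assuming the truncated objective continues as $\sum_t \|y_{t+1} - W y_t\|^2$, this is a linear least-squares problem in $W$ with a closed-form minimizer via the normal equations; a minimal sketch (names are illustrative):

```python
import numpy as np

def fit_W(ys):
    """Minimize sum_t ||y_{t+1} - W y_t||^2 over W.
    ys: list of 1-D arrays y_0, ..., y_t."""
    Y_prev = np.column_stack(ys[:-1])   # columns y_0 .. y_{t-1}
    Y_next = np.column_stack(ys[1:])    # columns y_1 .. y_t
    # Least-squares solution of W @ Y_prev ~ Y_next: W = Y_next @ pinv(Y_prev)
    return Y_next @ np.linalg.pinv(Y_prev)
```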
2 votes
1 answer
86 views

How can I solve this optimization problem effectively?

Recently, I encountered an optimization problem $$ \arg \min_{\mathbf{x}}\Vert \mathbf {Kx} - \mathbf{y} \Vert^2_2+\frac{\eta \Vert \mathbf{Dx} -\mathbf d \Vert_2^2 }{\Vert \mathbf{Dx} \Vert_2^2} $$ from ...
Leung Joe
1 vote
0 answers
96 views

Alternating gradient descent on a quadratic function

I have $x \in \mathbb{R}^{d\times 1}$ and $y\in \mathbb{R}^{p\times d}$ and I want to minimize the function $f(x,y) = g(x) + x^T y^T y x$ such that $y^T y$ is a positive definite matrix and $g$ is a ...
Ki Chao • 43
1 vote
0 answers
33 views

Reference request: convergence of the alternating gradient descent

I have an optimization problem with two sets of variables, $x \in \mathbb{R}^d$ and $y \in \mathbb{R}^p$, and I want to minimize the objective function $f(x, y)$. The basic idea of AGD is to update ...
Ki Chao • 43
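For concreteness, a minimal sketch of the generic alternating gradient descent template being asked about (gradient oracles and step size are placeholders, not a scheme from a specific reference):

```python
def alternating_gd(grad_x, grad_y, x, y, lr=1e-2, iters=1000):
    """Alternate gradient steps: descend in x with y fixed,
    then descend in y using the freshly updated x."""
    for _ in range(iters):
        x = x - lr * grad_x(x, y)
        y = y - lr * grad_y(x, y)
    return x, y
```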
1 vote
1 answer
93 views

Check the convexity of the upper bounding function of the subgradient method

I want to prove that the function $$ U(\alpha_1, \ldots, \alpha_k) = \frac{R^2 + M^2 \sum_{i=1}^k \alpha_i^2}{2 \sum_{i=1}^k \alpha_i} $$ is convex for $R, M \geq 0$ and $\alpha_i > 0$. This ...
Wellington Silva
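A possible decomposition (a sketch, not the asker's attempt): with $S=\sum_{i=1}^k \alpha_i > 0$,
$$U(\alpha) = \frac{R^2}{2S} + \sum_{i=1}^k \frac{M^2 \alpha_i^2}{2S},$$
where $R^2/(2S)$ is convex for $S>0$ and each summand is a quadratic-over-linear function, hence jointly convex; a sum of convex functions is convex.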
0 votes
0 answers
44 views

Simplifying large scale nonlinear numerical optimization problem

I want to solve the optimization problem below numerically using GAMS. Given the formulation, I was wondering if there are suitable ways of transforming and/or approximating the given function to ...
tjm • 1
3 votes
1 answer
90 views

Conjugate gradient descent when $A$ is non-singular but not symmetric

If $A$ is a non-singular matrix that is not symmetric, will the $k$th update of CGD still yield an optimal guess $x_{k+1}$ for the system $Ax = b$? Using the update rule $x_{k+1} = x_k + \alpha_k p_k,...
jackyooo • 149
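A standard remark framing the question: classical CG theory requires $A$ symmetric positive definite, so for a merely nonsingular $A$ one usually applies CG to the normal equations (CGNR),
$$A^T A\, x = A^T b, \qquad A^T A \succ 0 \text{ whenever } A \text{ is nonsingular},$$
or switches to a nonsymmetric Krylov method such as GMRES.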
0 votes
1 answer
75 views

Suppose that $f - \frac{1}{2}m\|x\|^2$ is convex where $f$ is differentiable.

Suppose that $f - \frac{1}{2}m\|x\|^2$ is convex, where $f$ is differentiable. Show that $$f(\alpha x + (1-\alpha)y) \leq \alpha f(x) + (1-\alpha)f(y) - \frac{\alpha(1-\alpha)m}{2}\|x-y\|^2, \qquad \alpha\in(0,1).$$ To start, I tried to apply the ...
user28941253
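The step the question likely needs (a hint, assuming the Euclidean norm) is the identity
$$\|\alpha x + (1-\alpha)y\|^2 = \alpha\|x\|^2 + (1-\alpha)\|y\|^2 - \alpha(1-\alpha)\|x-y\|^2,$$
combined with ordinary convexity of $g(x) = f(x) - \frac{m}{2}\|x\|^2$.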
1 vote
1 answer
35 views

Estimate a vector given pairwise differences

I am not an expert in optimization, so I would appreciate any leads on the following problem, which I have stumbled upon in my research. Essentially, we are asked to estimate an array $x \in \mathbb{R}...
Renat Sergazinov
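A minimal least-squares sketch, assuming (as the title suggests) the measurements are noisy differences $d \approx x_i - x_j$; all names are illustrative:

```python
import numpy as np

def estimate_from_differences(pairs, diffs, n):
    """Least-squares estimate of x from measurements d ~ x[i] - x[j].
    pairs: list of (i, j) index pairs; diffs: the measured differences.
    x is identifiable only up to an additive constant, so we pin x[0] = 0."""
    D = np.zeros((len(pairs), n))
    for row, (i, j) in enumerate(pairs):
        D[row, i] = 1.0
        D[row, j] = -1.0
    x, *_ = np.linalg.lstsq(D, np.asarray(diffs), rcond=None)
    return x - x[0]  # fix the additive gauge freedom
```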
-2 votes
1 answer
51 views

Suppose that $\liminf_{k \to \infty} \|\nabla f(x_k)\| = 0$

Suppose that $$ \liminf_{k \to \infty} \|\nabla f(x_k)\| = 0 $$ and the sequence $\{x_k\}$ is bounded. Is it true that all accumulation points of $\{x_k\}$ are stationary points? Can you find a ...
user28941253
0 votes
1 answer
184 views

Prove that $\|\nabla f(x) - \nabla f(y)\| \leq M\|x - y\|.$

In the context of convex optimisation, suppose $f$ is a convex function, differentiable with gradient $\nabla f$, and suppose $0 \preceq \nabla^{2}f \preceq MI$. I want to show that $\|\nabla f(x) - \nabla f(y)\| \leq M\|x - y\|$. I am not sure ...
user28941253
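The standard route (a hint, not a full proof): by the fundamental theorem of calculus,
$$\nabla f(x) - \nabla f(y) = \int_0^1 \nabla^2 f\big(y + t(x-y)\big)(x-y)\,dt,$$
and $0 \preceq \nabla^2 f \preceq MI$ bounds each Hessian's operator norm by $M$, so the integrand, and hence the integral, has norm at most $M\|x-y\|$.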
1 vote
0 answers
102 views

Is SOCP harder than GP?

As an electrical engineer, I have been studying convex optimization for a while. During my study, I see that most textbooks claim that both second-order cone programs (SOCP) and geometric programs (GP)...
Tuong Nguyen Minh
1 vote
1 answer
65 views

HARA utility as a power cone

I'm trying to follow the derivation in the following Mosek link, where a HARA utility optimization problem is reformulated using power cones (see the section titled "HARA utility as a Power cone" ...
AUK1939 • 385
0 votes
0 answers
954 views

Static vs. Dynamic Optimization?

I am a beginner in optimization and my question is fundamental. We all know that static optimization means the design variables/objective function do not vary with respect to time. Dynamic ...
Unknown • 23
1 vote
1 answer
231 views

What is the time complexity of deciding whether a linear program is feasible?

I have a question: what is the time complexity of solving a convex feasibility problem (this question focuses in particular on the linear feasibility problem)? Specifically, the feasibility problem ...
Yvan • 11
0 votes
1 answer
134 views

Gradient descent over a restricted convex domain — how do we guarantee that we stay in the domain if the global minimizer is outside of it?

Let $f:\mathbb R^d \rightarrow \mathbb R$ be a convex function and $A \subset \mathbb R^d$ a convex set. We are interested in finding the minimum of $f$ over $A$. We have the gradient of $f$ and we ...
W. Volante • 2,344
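For context, a minimal projected-gradient sketch showing why the iterates never leave $A$: the projection is applied after every step (the projection operator is a placeholder to be supplied for the specific set):

```python
import numpy as np

def projected_gradient_descent(grad_f, proj_A, x0, lr=1e-2, iters=1000):
    """Each iterate is the projection of a gradient step, so it lies in A
    by construction, even when the unconstrained minimizer is outside A."""
    x = proj_A(x0)
    for _ in range(iters):
        x = proj_A(x - lr * grad_f(x))
    return x

# Example: minimize ||x - c||^2 over the unit ball when c lies outside it.
c = np.array([3.0, 0.0])
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
x_star = projected_gradient_descent(lambda x: 2 * (x - c), proj_ball, np.zeros(2))
print(x_star)  # ~[1, 0], the boundary point of the ball closest to c
```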
1 vote
0 answers
528 views

Lagrangian function and first order necessary optimality conditions

I am given the following equality constrained convex QP. $$\min_x \frac{1}{2}x'Hx+g'x$$ $$\text{s.t.}\ A'x+b=0$$ with $H\succ 0$. I want to find the Lagrangian function for this problem and the first order ...
sarah johnson
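For reference, with the question's notation ($'$ denoting transpose), the standard objects are
$$\mathcal{L}(x,\lambda) = \tfrac{1}{2}x'Hx + g'x + \lambda'(A'x+b),$$
with first-order conditions
$$Hx + g + A\lambda = 0, \qquad A'x + b = 0,$$
a saddle-point linear system in $(x,\lambda)$ that is nonsingular when $A$ has full column rank (given $H \succ 0$).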
1 vote
1 answer
121 views

Projection to triangular matrices whose symmetrisation has positive eigenvalues

Let $\lambda>0$, and $\mathbb{R}^{n\times n}$ be the space of matrices equipped with Frobenius norm. Consider the set $$ D=\{A\in \mathbb{R}^{n\times n}\mid \textrm{$A$ is lower triangular and ...
John • 13.5k
3 votes
0 answers
148 views

Minimizing a strictly convex quadratic function subject to a single nonconvex quadratic constraint

For $A\succ 0, B\succeq 0, \rho,\phi>0$, and a given $y$, I am trying to find a cheap yet numerically stable method for solving a sequence of the following problem (it is a penalty method and $\...
Sam • 381
1 vote
1 answer
567 views

Quadratic programming with positive semidefinite matrix

Let us consider the following optimization problem $$ \min \,\,\, (1/2)x^{T}Px + q^{T}x $$ $$ \text{subject to} \quad Ax \in C $$ where $C$ is a closed convex set. Assume now that $P$ is positive ...
AnTlr • 99
5 votes
1 answer
369 views

Convex optimization using constraint projection matrices

I have a convex optimization of the form $$ \min_x \frac{1}{2} x^TAx-x^Tb \\ \text{s.t.}\ (I-P)x=0 $$ where $A$ is a $n$ by $n$ positive definite matrix, and $P$ is a $n$ by $n$ projection matrix (it ...
Michael M.
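One standard reduction (a sketch, assuming $P$ is the orthogonal projector onto the feasible subspace): $(I-P)x=0$ means $x=Pz$ for some $z$, so the problem becomes the unconstrained program
$$\min_z \tfrac{1}{2} z^T P^T A P z - z^T P^T b,$$
with optimality condition $P^T A P z = P^T b$, solved within the range of $P$.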
1 vote
1 answer
98 views

Should projected gradient descent be used for solving convex optimization problems instead of the interior point method?

It is widely known that interior point methods are used for convex optimization problems. However, each intermediate step of an interior point method usually consists of a Newton step. This may ...
Tuong Nguyen Minh
1 vote
1 answer
488 views

Using KKT conditions to solve the following problem?

I'm trying to solve the following optimization problem: $$\min_y \frac{1}{2}\|y-z\|^2_2 \\ \text{s.t.}\ (\mathbf{y} - \boldsymbol{\sigma})^T\mathbf{A}(\mathbf{y}-\boldsymbol{\sigma}) \leq 1$$ Note: $\mathbf{A} \succeq 0$. I've started out by ...
skidjoe • 375
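A sketch of where the KKT conditions lead (my own derivation, worth double-checking): stationarity of $\frac{1}{2}\|y-z\|_2^2 + \lambda\big((y-\sigma)^T A (y-\sigma) - 1\big)$ gives
$$(I + 2\lambda A)\,y = z + 2\lambda A\sigma,$$
so when $z$ is infeasible one solves this linear system for $y(\lambda)$ and finds the $\lambda > 0$ at which $(y(\lambda)-\sigma)^T A (y(\lambda)-\sigma) = 1$, e.g. by bisection.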
0 votes
1 answer
105 views

Proving the optimal values that minimize a function with a symmetric matrix?

I have an optimization problem which I'm not too sure how to approach... I need to show that the particular value of $x = (x_1, x_2)$, where $x_1, x_2$ are numbers, I'm just omitting them from ...
skidjoe • 375
3 votes
0 answers
90 views

What's the idea behind using the Bregman divergence (in particular the Bregman proximal method) to minimise a functional?

Given some functional $E$ on a convex set $\Omega$, the Bregman divergence $D_E$ (of some convex function $E$) is defined at a point $p$ as the difference between its value at that point and its first ...
Monty • 2,372
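For reference, the definition and the Bregman proximal (mirror-descent) step as usually written:
$$D_E(x,p) = E(x) - E(p) - \langle \nabla E(p),\, x-p\rangle,$$
$$x_{k+1} = \arg\min_{x\in\Omega}\ \langle \nabla F(x_k),\, x\rangle + \tfrac{1}{\eta}\, D_E(x, x_k),$$
where $F$ is the functional being minimized; choosing $E=\frac{1}{2}\|\cdot\|^2$ recovers projected gradient descent.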
1 vote
1 answer
59 views

Equation of the parabola in the Rosenbrock function

The Rosenbrock function is a popular benchmark for optimization; its general form is $$ f(x, y) = (a - x)^2 + b(y - x^2)^2 $$ where typically $a=1$ and $b=100$. The function looks like this: [surface plot omitted]. Is it ...
Euler_Salter • 5,417
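For reference (standard facts about this benchmark): the curved valley follows the parabola $y = x^2$, along which $f(x, x^2) = (a-x)^2$, so the global minimum sits at $(a, a^2)$, i.e. $(1, 1)$ for the typical parameters.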
0 votes
0 answers
59 views

Algorithm for QP

Let $\mathbb{Q}^+ = \{ q \in \mathbb{Q} : q > 0 \}$. For $n \in \mathbb{N}^+$, let $a_1, \dots, a_n \in \mathbb{Q}$, $b_1, \dots, b_{n} \in \mathbb{Q}^+$ and $c_1, \ldots, c_n \in \mathbb{Z}$. ...
rodie9k • 101
1 vote
0 answers
342 views

How important are the initial values for non-linear optimization?

Based on what I understand, non-linear optimization can generally only find the local minimum closest to the initial values given. Therefore, the initial values are very important, because if the initial ...
user3667089
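One common mitigation, sketched here with illustrative names (scipy's `minimize` is a real API; the objective and bounds are placeholders): restart a local solver from several random initial values and keep the best result.

```python
import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_starts=20, seed=0):
    """Run a local solver from several random starts; keep the best minimum."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)       # random initial value inside the box
        res = minimize(f, x0)
        if best is None or res.fun < best.fun:
            best = res
    return best
```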
1 vote
0 answers
90 views

Clarification on the complexity of the Interior Point Method for a general convex problem

I am trying to find the computational complexity of the interior point method for a general convex problem and stumbled upon the following complexity in Section 1.3.1 of Convex Optimization by Stephen ...
Neil • 47
1 vote
0 answers
59 views

Can I solve an optimization problem by directly solving the KKT conditions instead of the perturbed KKT conditions in the primal-dual interior point method?

I encountered the following problem in studying convex optimization: In order to solve the convex optimization problem $$\begin{cases} \mathrm{min}\ f(x)\\\mathrm {s.t.}\ g_i(x)\leq0\ (i=1,...,m)\ \...
xiaohaozi • 103
2 votes
0 answers
136 views

Why do we use average iterates when implementing SGD?

I read many papers about stochastic gradient descent (SGD). One thing I am curious about is that many non-asymptotic convergence results are given in terms of the error between the optimal solution and the ...
anyone • 31
