Papers by Vincent Guigues

arXiv (Cornell University), Nov 16, 2019
In this paper, we investigate the dual of a Multistage Stochastic Linear Program (MSLP) to study two related questions for this class of problems. The first is the study of the optimal value of the problem as a function of the involved parameters. For this sensitivity analysis problem, we provide formulas for the derivatives of the value function with respect to the parameters and illustrate their application on an inventory problem. Since these formulas involve optimal dual solutions, we need an algorithm that computes such solutions in order to use them, i.e., we need to solve the dual problem. In this context, the second question we address is the study of solution methods for the dual problem. Writing Dynamic Programming equations for the dual, we can use an SDDP-type method, called Dual SDDP, which solves these Dynamic Programming equations while computing a sequence of nonincreasing deterministic upper bounds on the optimal value of the problem. However, applying this method is only possible if Relatively Complete Recourse (RCR) holds for the dual. Since the RCR assumption may fail to hold (even for simple problems), we design two variants of Dual SDDP, namely Dual SDDP with penalizations and Dual SDDP with feasibility cuts, that converge to the optimal value of the dual (and therefore of the primal, when there is no duality gap) problem under mild assumptions. We also show that optimal dual solutions can be obtained by computing dual solutions of the subproblems solved when applying Primal SDDP to the original primal MSLP. The study of this second question allows us to take a fresh look at the class of MSLPs with interstage dependent cost coefficients. Indeed, for this class of problems, cost-to-go functions are nonconvex, and solution methods have so far used SDDP on a Markov chain approximation of the cost coefficient process.
For these problems, we propose to apply Dual SDDP with penalizations to the cost-to-go functions of the dual, which are concave. This algorithm converges to the optimal value of the problem. Finally, as a proof of concept of the tools developed, we present the results of numerical experiments computing the sensitivity of the optimal value of an inventory problem as a function of parameters of the demand process, and we compare Primal and Dual SDDP on the inventory problem and on a hydro-thermal planning problem.
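SDDP-type methods such as the Dual SDDP variants above rest on a cutting-plane mechanism: a convex cost-to-go function is approximated from below by affine cuts, and the resulting polyhedral model yields a monotone sequence of deterministic bounds. The following is a minimal illustrative sketch of that mechanism in one dimension, not the paper's algorithm: the function Q, its subgradient, and the grid-based "solve" are all stand-ins chosen for self-containment.

```python
# Illustrative sketch (not the paper's Dual SDDP): Kelley-style cutting planes
# produce a nondecreasing sequence of lower bounds on a convex cost-to-go
# function. Here Q(x) = |x - 1| + 0.5*x stands in for a cost-to-go function,
# and the feasible set is the interval [0, 2].

def Q(x):
    return abs(x - 1.0) + 0.5 * x

def subgrad_Q(x):
    # A subgradient of Q at x.
    return (1.0 if x >= 1.0 else -1.0) + 0.5

def kelley_lower_bounds(n_iters=10):
    """Return the sequence of lower bounds produced by the cut model."""
    grid = [i / 100.0 for i in range(201)]  # discretized [0, 2]
    cuts = [(0.0, -1e9)]  # trivial initial cut: Q(x) >= -1e9
    bounds = []
    x_trial = 0.0
    for _ in range(n_iters):
        # Add the cut Q(x) >= Q(x_t) + g*(x - x_t), tight at the trial point.
        g = subgrad_Q(x_trial)
        cuts.append((g, Q(x_trial) - g * x_trial))
        # Minimize the polyhedral cut model (a stand-in for an LP solve).
        def model(x):
            return max(a * x + b for a, b in cuts)
        x_trial = min(grid, key=model)
        bounds.append(model(x_trial))
    return bounds
```

Because cuts only accumulate, the model minimum can only increase, which is the mechanism behind the monotone deterministic bounds mentioned in the abstract (upper bounds, in the concave dual case).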
Operations Research Letters, Jul 1, 2023
In this paper, we discuss an application of an SDDP-type algorithm to nested risk-averse formulations of Stochastic Optimal Control (SOC) problems. We propose a construction of a statistical upper bound for the optimal value of risk-averse SOC problems. This outlines an approach to the solution of a long-standing problem in that area of research. The bound holds for a large class of convex and monotone conditional risk mappings. Finally, we demonstrate the validity of the statistical upper bound by solving a real-life stochastic hydro-thermal planning problem.

arXiv (Cornell University), May 27, 2020
In this paper, we introduce a new class of decision rules, referred to as Constant Depth Decision Rules (CDDRs), for multistage optimization under linear constraints with uncertainty-affected right-hand sides. We consider two uncertainty classes: discrete uncertainties, which can take at each stage at most a fixed number d of different values, and polytopic uncertainties, which at each stage are elements of the convex hull of at most d points. Given the depth µ of the decision rule, the decision at stage t is expressed as the sum of t functions of µ consecutive values of the underlying uncertain parameters. These functions are arbitrary in the case of discrete uncertainties and are poly-affine in the case of polytopic uncertainties. For these uncertainty classes, we show that when the uncertain right-hand sides of the constraints of the multistage problem have the same additive structure as the decision rules, these constraints can be reformulated as a system of linear inequality constraints in which the numbers of variables and constraints are O(1)(n + m)d^µ N^2, with n the maximal dimension of the control variables, m the maximal number of inequality constraints at each stage, and N the number of stages. As an illustration, we discuss an application of the proposed approach to a Multistage Stochastic Program arising in hydro-thermal production planning with interstage dependent inflows. For problems with a small number of stages, we present the results of a numerical study in which optimal CDDRs show performance similar, in terms of the optimization objective, to that of Stochastic Dual Dynamic Programming (SDDP) policies, often at much smaller computational cost.
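The structure of a CDDR in the discrete case can be sketched directly from the abstract: the decision at stage t is a sum of t "arbitrary functions" of µ consecutive uncertainty values, which for discrete uncertainties amount to lookup tables. The sketch below is hypothetical (the padding convention for stages before stage 1 and the table encoding are my assumptions, not the paper's).

```python
# Hypothetical sketch of a Constant Depth Decision Rule (CDDR) of depth mu for
# a scalar decision under discrete uncertainties: the "arbitrary functions" of
# the abstract become lookup tables f[s] indexed by windows of mu values.

MU = 2  # depth: each function sees mu consecutive uncertainty values

def cddr_decision(t, xi, f):
    """Decision at stage t: sum over s = 1..t of f[s](xi_{s-mu+1}, ..., xi_s).

    xi is the observed uncertainty history (xi[0] is the stage-1 value);
    out-of-range indices before stage 1 are padded with xi[0] for simplicity
    (an assumption made here to keep the sketch self-contained).
    """
    total = 0.0
    for s in range(1, t + 1):
        window = tuple(xi[max(k, 0)] for k in range(s - MU, s))
        total += f[s][window]
    return total

# Example: binary uncertainties (d = 2) and two stages; table values are arbitrary.
f = {
    1: {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0},
    2: {(0, 0): 0.5, (0, 1): 1.5, (1, 0): 5.0, (1, 1): 2.5},
}
```

Each table has at most d^µ entries and there are at most N of them per decision, which is the source of the d^µ factor in the O(1)(n + m)d^µ N^2 reformulation size.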
Statistics & Decisions, 2008
We introduce an adaptive algorithm to estimate the uncertain parameter of a stochastic optimization problem. The procedure estimates the one-step-ahead means, variances, and covariances of a random process in a distribution-free and multidimensional framework when these means, variances, and covariances are slowly varying on a given past interval. The quality of the approximate problem obtained when employing our estimation of the uncertain parameter is controlled as a function of the number of components of the process and of the length of the largest past interval on which the means, variances, and covariances vary slowly. The procedure is finally applied to a portfolio selection model.
In this paper, we apply Value-at-Risk (VaR) approaches to the problem of yearly electric generation management. In a classical approach, the future is modelled as a Markov chain and the goal is to minimize the average generation cost over this uncertain future. However, such a strategy could lead to large financial losses if worst-case scenarios occur. The two VaR approaches
Optimization, 2009
We consider robust formulations of the mid-term optimal power management problem. For this type of problem, classical approaches minimize the expected generation cost over a horizon of one year and model the uncertain future by means of scenario trees. In this setting, extreme scenarios, which have low probability in the scenario tree, may fail to be well represented. More precisely, when extreme events occur, strategies devised with the classical approach can result in significant financial losses. By contrast, robust techniques can handle extreme cases well. We consider two robust formulations that preserve the separable structure of the original problem, a fundamental issue when solving real-life problems. Numerical results assess the validity and practicality of the approaches.
Computational Optimization and Applications, 2009
We recommend an implementation of the Markowitz problem that generates portfolios which are stable with respect to perturbations of the problem parameters. The stability is obtained by proposing novel calibrations of the covariance matrix of the returns that can be cast as convex or quasiconvex optimization problems. A statistical study as well as a sensitivity analysis of the Markowitz problem allow us to justify these calibrations. Our approach can be used to perform a global and explicit sensitivity analysis of a class of quadratic optimization problems. Numerical simulations on real data finally show the benefits of the proposed calibrations.
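The abstract does not spell out the proposed calibrations, so the sketch below uses a standard stand-in with a similar intent: shrinking the sample covariance toward its diagonal, which damps unstable off-diagonal estimates while preserving the variances. The shrinkage form and the two-asset minimum-variance formula are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: stabilizing a Markowitz-type problem by shrinking the
# sample covariance toward its diagonal. This is a stand-in for the paper's
# calibrations, which its abstract does not specify.

def shrink_covariance(S, delta):
    """Return (1 - delta)*S + delta*diag(S) for a square matrix S (list of lists).

    Diagonal entries (variances) are preserved; off-diagonal entries
    (covariances) are scaled by (1 - delta), reducing estimated correlations.
    """
    n = len(S)
    return [
        [(1.0 - delta) * S[i][j] + (delta * S[i][i] if i == j else 0.0) for j in range(n)]
        for i in range(n)
    ]

def min_variance_weight(S):
    """Weight on asset 1 in the two-asset minimum-variance portfolio."""
    num = S[1][1] - S[0][1]
    den = S[0][0] + S[1][1] - 2.0 * S[0][1]
    return num / den
```

When the off-diagonal entries are close to the variances (nearly singular S), the minimum-variance weights react strongly to small perturbations of the covariances; shrinkage makes the denominator above larger and the weights correspondingly less sensitive, which is the kind of stability the paper targets.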

European Journal of Operational Research
In this paper, we investigate the dual of a Multistage Stochastic Linear Program (MSLP) to study two related questions for this class of problems. The first is the study of the optimal value of the problem as a function of the involved parameters. For this sensitivity analysis problem, we provide formulas for the derivatives of the value function with respect to the parameters and illustrate their application on an inventory problem. Since these formulas involve optimal dual solutions, we need an algorithm that computes such solutions in order to use them, i.e., we need to solve the dual problem. In this context, the second question we address is the study of solution methods for the dual problem. Writing Dynamic Programming equations for the dual, we can use an SDDP-type method, called Dual SDDP, which solves these Dynamic Programming equations while computing a sequence of nonincreasing deterministic upper bounds on the optimal value of the problem. However, applying this method is only possible if Relatively Complete Recourse (RCR) holds for the dual. Since the RCR assumption may fail to hold (even for simple problems), we design two variants of Dual SDDP, namely Dual SDDP with penalizations and Dual SDDP with feasibility cuts, that converge to the optimal value of the dual (and therefore of the primal, when there is no duality gap) problem under mild assumptions. We also show that optimal dual solutions can be obtained by computing dual solutions of the subproblems solved when applying Primal SDDP to the original primal MSLP. The study of this second question allows us to take a fresh look at the class of MSLPs with interstage dependent cost coefficients. Indeed, for this class of problems, cost-to-go functions are nonconvex, and solution methods have so far used SDDP on a Markov chain approximation of the cost coefficient process.
For these problems, we propose to apply Dual SDDP with penalizations to the cost-to-go functions of the dual, which are concave. This algorithm converges to the optimal value of the problem. Finally, as a proof of concept of the tools developed, we present the results of numerical experiments computing the sensitivity of the optimal value of an inventory problem as a function of parameters of the demand process, and we compare Primal and Dual SDDP on the inventory problem and on a hydro-thermal planning problem.
arXiv (Cornell University), May 24, 2017
We introduce a variant of Multicut Decomposition Algorithms (MuDA), called CuSMuDA (Cut Selection for Multicut Decomposition Algorithms), for solving multistage stochastic linear programs, which incorporates strategies to select the most relevant cuts of the approximate recourse functions. We prove the convergence of the method in a finite number of iterations and use it to solve six portfolio problems with direct transaction costs under return uncertainty and six inventory management problems under demand uncertainty. On all problem instances, CuSMuDA is much quicker than MuDA: between 5.1 and 12.6 times quicker for the portfolio problems considered and between 6.4 and 15.7 times quicker for the inventory problems.
arXiv (Cornell University), Dec 17, 2021
In this paper, we discuss an application of an SDDP-type algorithm to nested risk-averse formulations of Stochastic Optimal Control (SOC) problems. We propose a construction of a statistical upper bound for the optimal value of risk-averse SOC problems. This outlines an approach to the solution of a long-standing problem in that area of research. The bound holds for a large class of convex and monotone conditional risk mappings. Finally, we demonstrate the validity of the statistical upper bound by solving a real-life stochastic hydro-thermal planning problem.

We define a risk averse nonanticipative feasible policy for multistage stochastic programs and propose a methodology to implement it. The approach is based on dynamic programming equations written for a risk averse formulation of the problem. This formulation relies on a new class of multiperiod risk functionals called extended polyhedral risk measures. Dual representations of such risk functionals are given and used to derive conditions for coherence. In the one-period case, conditions for convexity and consistency with second-order stochastic dominance are also provided. The risk averse dynamic programming equations are specialized by considering convex combinations of one-period extended polyhedral risk measures such as spectral risk measures. To implement the proposed policy, the approximation of the risk averse recourse functions for stochastic linear programs is discussed. In this context, we detail a stochastic dual dynamic programming algorithm which converges to the optimal value of the risk averse problem.
Electronic Journal of Statistics, 2018
The goal of the paper is to develop a specific application of the convex optimization based hypothesis testing techniques developed in A.

arXiv: Optimization and Control, 2017
We introduce an extension of Dual Dynamic Programming (DDP) to solve convex nonlinear dynamic programming equations. We call this extension Inexact DDP (IDDP); it applies to situations where some or all primal and dual subproblems to be solved along the iterations of the method are solved with a bounded error. We show that any accumulation point of the sequence of decisions is an approximate solution to the dynamic programming equations. When these errors tend to zero as the number of iterations goes to infinity, we show that IDDP solves the dynamic programming equations. We extend the analysis to stochastic convex nonlinear dynamic programming equations, introducing Inexact Stochastic Dual Dynamic Programming (ISDDP), an inexact variant of SDDP corresponding to the situation where some or all problems to be solved in the forward and backward passes of SDDP are solved approximately. We also show the almost sure convergence of ISDDP for vanishing errors.
arXiv: Optimization and Control, 2017
We consider convex optimization problems formulated using dynamic programming equations. Such problems can be solved using the Dual Dynamic Programming (DDP) algorithm combined with the Level 1 cut selection strategy or the Territory algorithm to select the most relevant Benders cuts. We propose a limited-memory variant of Level 1 and show the convergence of DDP combined with the Territory algorithm, Level 1, or its variant for nonlinear optimization problems. In the special case of linear programs, we show convergence in a finite number of iterations. Numerical simulations illustrate the benefits of our variant and show that it can be much quicker than a simplex algorithm on some large instances of portfolio selection and inventory problems.
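The core idea of Level 1 cut selection, as described in the related abstracts on this page, is to keep a cut only if it is the highest cut of the polyhedral model at at least one trial point visited so far. The one-dimensional sketch below is illustrative (cuts as slope/intercept pairs, a tolerance for ties); it is not the paper's implementation.

```python
# Illustrative sketch of Level 1 cut selection: a cut is kept if it attains the
# maximum of the cut model at at least one trial point visited by the algorithm.
# Cuts are pairs (a, b) representing the affine function x -> a*x + b.

def level1_select(cuts, trial_points, tol=1e-9):
    """Return the cuts that are active (highest) at some trial point."""
    kept = []
    for a, b in cuts:
        for x in trial_points:
            model_val = max(a2 * x + b2 for a2, b2 in cuts)
            if a * x + b >= model_val - tol:
                kept.append((a, b))
                break  # active somewhere: keep the cut and move on
    return kept
```

A cut dominated everywhere on the visited points is pruned, which bounds the size of the subproblems solved at each iteration; the limited-memory variant mentioned in the abstract further restricts how many trial points are remembered per cut.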

arXiv: Optimization and Control, 2019
We introduce a variant of Multicut Decomposition Algorithms (MuDA), called CuSMuDA (Cut Selection for Multicut Decomposition Algorithms), for solving multistage stochastic linear programs, which incorporates a class of cut selection strategies to choose the most relevant cuts of the approximate recourse functions. This class contains the Level 1 and Limited Memory Level 1 cut selection strategies, initially introduced for Stochastic Dual Dynamic Programming (SDDP) and Dual Dynamic Programming (DDP), respectively. We prove the almost sure convergence of the method in a finite number of iterations and obtain as a by-product the almost sure convergence in a finite number of iterations of SDDP combined with our class of cut selection strategies. We compare the performance of MuDA, SDDP, and their variants with cut selection (using Level 1 and Limited Memory Level 1) on several instances of a portfolio problem and of an inventory problem. On these experiments, in general, SDDP is quicker (i.e.,...

arXiv: Optimization and Control, 2016
We study statistical properties of the optimal value and optimal solutions of the Sample Average Approximation of risk averse stochastic problems. Central Limit Theorem type results are derived for the optimal value and optimal solutions when the stochastic program is expressed in terms of a law invariant coherent risk measure. The obtained results are applied to hypothesis testing problems aiming at comparing the optimal values of several risk averse convex stochastic programs on the basis of samples of the underlying random vectors. We also consider non-asymptotic tests based on confidence intervals on the optimal values of the stochastic programs obtained using the Stochastic Mirror Descent algorithm. Numerical simulations show how to use our developments to choose among different distributions and show the superiority of the asymptotic tests on a class of risk averse stochastic programs.
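The canonical example of a law invariant coherent risk measure is CV@R, whose Sample Average Approximation is a simple computation on the sample; the sketch below illustrates that estimator via the Rockafellar-Uryasev variational formula. The choice of CV@R and the sample-point minimization are assumptions made here for illustration, not claims about the paper's experiments.

```python
# Illustrative sketch: SAA estimator of CVaR_alpha (a canonical law invariant
# coherent risk measure) of a loss Z, via the variational formula
#     CVaR_alpha(Z) = min_t { t + E[max(Z - t, 0)] / (1 - alpha) },
# whose sample version attains its minimum at one of the sample points.

def saa_cvar(samples, alpha):
    """SAA estimate of CVaR_alpha of a loss from i.i.d. samples (0 <= alpha < 1)."""
    zs = sorted(samples)
    n = len(zs)
    best = None
    for t in zs:  # the minimum over t is attained at a sample quantile
        val = t + sum(max(z - t, 0.0) for z in zs) / ((1.0 - alpha) * n)
        best = val if best is None else min(best, val)
    return best
```

For alpha = 0 the estimator reduces to the sample mean, and as alpha grows it averages only the worst tail of the sample; CLT-type results of the kind studied in the paper describe the fluctuations of such plug-in estimates around the true risk value.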

Journal of Optimization Theory and Applications, 2021
We introduce Stochastic Dynamic Cutting Plane (StoDCuP), an extension of the Stochastic Dual Dynamic Programming (SDDP) algorithm to solve multistage stochastic convex optimization problems. At each iteration, the algorithm builds lower bounding affine functions not only for the cost-to-go functions, as SDDP does, but also for some or all nonlinear cost and constraint functions. We show the almost sure convergence of StoDCuP. We also introduce an inexact variant of StoDCuP in which all subproblems are solved approximately (with bounded errors) and show the almost sure convergence of this variant for vanishing errors. Finally, numerical experiments are presented on nondifferentiable multistage stochastic programs where Inexact StoDCuP computes a good approximate policy quicker than StoDCuP, while SDDP and the previous inexact variant of SDDP, combined with the Mosek library to solve the subproblems, were not able to solve the differentiable reformulation of the problem.
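The distinguishing ingredient named in the abstract is linearizing the nonlinear cost and constraint functions themselves, not only the cost-to-go functions. The basic building block is the gradient cut of a convex function, sketched below on an illustrative function (f(x) = x^2 is my stand-in, not taken from the paper).

```python
# Illustrative sketch of the building block StoDCuP adds over SDDP: affine
# lower bounds (gradient cuts) for a nonlinear convex cost or constraint
# function. Here f(x) = x**2 plays the role of such a function.

def f(x):
    return x * x

def f_grad(x):
    return 2.0 * x

def affine_cut(x_bar):
    """Return (a, b) with f(x) >= a*x + b for all x, tight at x_bar."""
    a = f_grad(x_bar)
    return a, f(x_bar) - a * x_bar

def cut_model(x, cuts):
    """Polyhedral lower approximation of f built from the collected cuts."""
    return max(a * x + b for a, b in cuts)

# Cuts collected at three illustrative trial points.
cuts = [affine_cut(xb) for xb in (-1.0, 0.0, 1.0)]
```

Replacing each nonlinear function by such a polyhedral model turns every subproblem into a linear program, which is what lets the method handle nondifferentiable problems where solving the nonlinear subproblems directly (as the abstract reports for SDDP with Mosek) fails.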

European Journal of Operational Research, 2021
In this paper, we introduce a new class of decision rules, referred to as Constant Depth Decision Rules (CDDRs), for multistage optimization under linear constraints with uncertainty-affected right-hand sides. We consider two uncertainty classes: discrete uncertainties, which can take at each stage at most a fixed number d of different values, and polytopic uncertainties, which at each stage are elements of the convex hull of at most d points. Given the depth µ of the decision rule, the decision at stage t is expressed as the sum of t functions of µ consecutive values of the underlying uncertain parameters. These functions are arbitrary in the case of discrete uncertainties and are poly-affine in the case of polytopic uncertainties. For these uncertainty classes, we show that when the uncertain right-hand sides of the constraints of the multistage problem have the same additive structure as the decision rules, these constraints can be reformulated as a system of linear inequality constraints in which the numbers of variables and constraints are O(1)(n + m)d^µ N^2, with n the maximal dimension of the control variables, m the maximal number of inequality constraints at each stage, and N the number of stages. As an illustration, we discuss an application of the proposed approach to a Multistage Stochastic Program arising in hydro-thermal production planning with interstage dependent inflows. For problems with a small number of stages, we present the results of a numerical study in which optimal CDDRs show performance similar, in terms of the optimization objective, to that of Stochastic Dual Dynamic Programming (SDDP) policies, often at much smaller computational cost.

Optimization and Engineering, 2020
We define a regularized variant of the Dual Dynamic Programming algorithm, called DDP-REG, to solve nonlinear dynamic programming equations. We extend the algorithm to solve nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as an extension of a recently introduced regularization of the Stochastic Dual Dynamic Programming (SDDP) algorithm, which was studied for linear problems only and with less general prox-centers. We show the convergence of DDP-REG and SDDP-REG. We assess the performance of DDP-REG and SDDP-REG on portfolio models with direct transaction and market impact costs. In particular, we propose a risk-neutral portfolio selection model that can be cast as a multistage stochastic second-order cone program. The formulation is motivated by the impact of market impact costs on large portfolio rebalancing operations. Numerical simulations show that DDP-REG is much quicker than DDP on all problem instances considered (up to 184 times quicker than DDP) and that SDDP-REG is quicker on the instances of portfolio selection problems with market impact costs tested, and much faster on the instance of the risk-neutral multistage stochastic linear program implemented (8.2 times faster).
Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 2020
We discuss an "operational" approach to testing convex composite hypotheses when the underlying distributions are heavy-tailed. It relies upon Euclidean separation of convex sets and can be seen as an extension of the approach to testing by convex optimization developed in [8, 12]. In particular, we show how one can construct quasi-optimal testing procedures for families of distributions which are majorized, in a certain precise sense, by a sub-spherical symmetric one, and we study the relationship between tests based on Euclidean separation and "potential-based tests." We apply the promoted methodology to the problem of sequential detection and illustrate its practical implementation in an application to sequential detection of changes in the input of a dynamic system.