We study an extended set of Mean-Gini portfolio optimization models that encompasses a general version of the mean-risk formulation, the Minimal Gini model (MinG) that minimizes Gini's Mean Difference, and the new risk-adjusted Mean-Gini Ratio (MGR) model. We analyze the properties of the various models, prove that a performance measure based on a Risk Adjusted version of the Mean Gini Ratio (RAMGR) is coherent, and establish the equivalence between maximizing this performance measure and solving for the maximal Mean-Gini ratio. We propose a linearization approach for the fractional programming formulation of the MGR model. We also conduct a thorough evaluation of the various Mean-Gini models based on four data sets that represent combinations of bullish and bearish scenarios in the in-sample and out-of-sample phases. The performance is (i) analyzed with respect to eight return, risk, and risk-adjusted criteria, (ii) benchmarked against the S&P500 index, and (iii) compared with the Mean-Variance counterparts for varying risk aversion levels and with the Minimal CVaR and Minimal Semi-Deviation models. For the data sets used in our study, our results suggest that the various Mean-Gini models almost always result in solutions that outperform the S&P500 benchmark index with respect to the out-of-sample cumulative return. Further, particular instances of Mean-Gini models result in solutions that are as good as or better than (for example, MinG in bullish in-sample scenarios, and MGR in bearish out-of-sample scenarios) the solutions obtained with their counterparts in the Mean-Variance, Minimal CVaR, and Minimal Semi-Deviation models.
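To make the risk measure concrete: Gini's Mean Difference of a return sample is the average absolute difference over all pairs of observations, and a Mean-Gini ratio divides excess mean return by it. The sketch below is a minimal illustration of these two quantities, not the paper's optimization models; the risk-free-rate argument is an assumption added for the ratio.

```python
from itertools import combinations

def gini_mean_difference(returns):
    """Average absolute difference over all ordered pairs of observations."""
    n = len(returns)
    pair_sum = sum(abs(a - b) for a, b in combinations(returns, 2))
    return 2.0 * pair_sum / (n * (n - 1))

def mean_gini_ratio(returns, risk_free=0.0):
    """Reward-to-risk ratio: excess mean return per unit of GMD."""
    mean = sum(returns) / len(returns)
    return (mean - risk_free) / gini_mean_difference(returns)
```

For the sample [1, 2, 3], the pairwise absolute differences are 1, 2, and 1, giving a GMD of 4/3 and (with zero risk-free rate) a Mean-Gini ratio of 1.5.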
We study the distributionally robust linearized stable tail adjusted return ratio (DRLSTARR) portfolio optimization problem, in which the objective is to maximize the worst-case linearized stable tail adjusted return ratio (LSTARR) performance measure under data-driven Wasserstein ambiguity. We consider two types of imperfectly known uncertainties, termed uncertain probabilities and continuum of realizations, associated with the losses of assets. We account for two typical combinatorial trading constraints, called buy-in threshold and diversification constraints, to reflect stock market restrictions. Leveraging conic duality theory to tackle the distributionally robust worst-case expectation, we reformulate the proposed problems into mixed-integer linear programming problems. We carry out a series of empirical tests to illustrate the scalability and effectiveness of the proposed solution framework, and to evaluate the performance of the DRLSTARR-constructed portfolios. The cross-validation results obtained using a rolling-horizon procedure show the superior out-of-sample performance of the DRLSTARR portfolios under an uncertain continuum of realizations.
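The rolling-horizon procedure used for cross-validation can be sketched as a window generator that rolls the training window forward by the length of the test window; the window lengths below are hypothetical parameters, not the ones used in the paper.

```python
def rolling_horizon_splits(n_periods, in_sample, out_sample):
    """Yield (train_indices, test_indices) windows rolled forward by the
    out-of-sample length, as in a standard rolling-horizon backtest."""
    splits = []
    start = 0
    while start + in_sample + out_sample <= n_periods:
        train = list(range(start, start + in_sample))
        test = list(range(start + in_sample, start + in_sample + out_sample))
        splits.append((train, test))
        start += out_sample  # roll forward by the test-window length
    return splits
```

With 10 periods, a 4-period training window, and a 2-period test window, this yields three non-overlapping test windows covering periods 4-9.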
Multi-portfolio optimization problems and the incorporation of marginal risk contribution constraints have recently received sustained interest from academia and financial practitioners. We propose a class of new stochastic risk budgeting multi-portfolio optimization models that impose portfolio as well as marginal risk constraints. The models permit the simultaneous and integrated optimization of multiple sub-portfolios in which the marginal risk contribution of each individual security is accounted for. A risk budget defined with a downside risk measure is allocated to each security. We consider the two cases in which the asset universes of the sub-portfolios are either disjoint (diversification of style) or overlapping (diversification of judgment). The proposed models take the form of stochastic programming problems and each include a probabilistic constraint with a multi-row random technology matrix. We expand a combinatorial modeling framework to represent the feasible set of the chance constraints as a set of mixed-integer linear inequalities. The new reformulation proposed in this paper is much sparser than previously presented reformulations and allows the efficient solution of problem instances that could not be solved otherwise. We evaluate the efficiency and scalability of the proposed method, which is general enough to be applied to general chance-constrained optimization problems. We conduct a cross-validation study via a rolling-horizon procedure to assess the performance of the models and to understand the impact of the parameters and diversification types on the portfolios.
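As a concrete illustration of marginal risk contributions under a downside risk measure, the sketch below computes a scenario-based CVaR and Euler-style per-asset contributions that sum to the portfolio CVaR. It is a generic textbook example, not the paper's risk budgeting model.

```python
def cvar_and_contributions(loss_scenarios, weights, alpha=0.95):
    """Scenario-based portfolio CVaR and per-asset (Euler) contributions.
    loss_scenarios: equally probable scenarios, each a list of per-asset
    losses. The contributions sum to the portfolio CVaR."""
    port = [sum(w * l for w, l in zip(weights, s)) for s in loss_scenarios]
    k = max(1, int(round((1 - alpha) * len(port))))  # number of tail scenarios
    tail = sorted(range(len(port)), key=lambda i: port[i], reverse=True)[:k]
    cvar = sum(port[i] for i in tail) / k
    contrib = [w * sum(loss_scenarios[i][j] for i in tail) / k
               for j, w in enumerate(weights)]
    return cvar, contrib
```

A marginal risk constraint in this spirit would cap each entry of `contrib` by the security's risk budget.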
We investigate a class of fractional distributionally robust optimization problems with uncertain probabilities. These problems involve the maximization of ambiguous fractional functions representing reward-risk ratios and have a semi-infinite programming epigraphic formulation. We derive a fully parameterized closed-form expression for a new bound on the size of the Wasserstein ambiguity ball. We design a data-driven reformulation and solution framework. The reformulation phase involves the derivation of the support function of the ambiguity set and the concave conjugate of the ratio function. We design modular bisection algorithms which enjoy the finite convergence property. This class of problems has wide applicability in finance, and we specify new ambiguous portfolio optimization models for the Sharpe and Omega ratios. The computational study shows the applicability and scalability of the framework, which quickly solves large, industry-relevant problems that cannot be solved in one day with state-of-the-art mixed-integer nonlinear programming (MINLP) solvers.
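Bisection for reward-risk ratio maximization rests on a simple observation: the maximal ratio is the largest level t at which some feasible point still achieves reward - t * risk >= 0. The toy version below works over a finite candidate set and ignores the Wasserstein machinery entirely; it only illustrates the bisection principle.

```python
def max_ratio_bisection(rewards, risks, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection on the ratio level t: the maximal reward/risk ratio is the
    largest t for which max_i (reward_i - t * risk_i) >= 0.
    Assumes all risks are positive and the optimum lies in [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if max(f - mid * g for f, g in zip(rewards, risks)) >= 0:
            lo = mid  # ratio level mid is attainable
        else:
            hi = mid  # ratio level mid is too ambitious
    return 0.5 * (lo + hi)
```

For candidates with ratios 3/2 and 4/5, the bisection converges to 1.5, the larger ratio.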
We study a multi-objective portfolio optimization model that employs two conflicting objectives—maximizing mean return, and minimizing risk as measured by the Gini Mean Difference (GMD). We assume that an investor’s implicit utility is a function of these two objectives and help the investor identify the optimal (i.e., most preferred) portfolio among the efficient ones. We develop an interactive solution procedure based on the concept of domination cones that can be used with a class of utility functions defined over Mean-Gini criteria. The investor’s preferences are elicited interactively through pairwise comparisons of efficient Mean-Gini portfolios based on which domination cones are derived to guide the search for the most preferred portfolio. The interactive solution method enjoys a finite convergence property. Computational results illustrating the effectiveness of the interactive procedure and the out-of-sample performance of the optimal portfolios for a range of implicit utility functions are presented. The results indicate that the optimal portfolios defined by our models consistently outperform the S&P 500 index. Further, an out-of-sample performance analysis reveals that a strategy emphasizing mean return over Gini performs best under similar market conditions over the training and testing sets, while a risk-averse strategy emphasizing Gini over mean return performs best under market reversal conditions.
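The interactive search operates over the Mean-Gini efficient frontier. A brute-force filter for efficient (non-dominated) portfolios in (mean, GMD) space, shown here as a generic sketch rather than the paper's domination-cone procedure, is:

```python
def mean_gini_efficient(portfolios):
    """Keep portfolios not dominated in (mean, gmd) space, where higher
    mean is better and lower gmd is better. Each portfolio is a
    (mean, gmd) pair."""
    def dominates(a, b):
        # a dominates b: at least as good in both criteria, better in one
        return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])
    return [p for p in portfolios
            if not any(dominates(q, p) for q in portfolios if q is not p)]
```

A pairwise preference elicited from the investor would then further prune this efficient set via the induced domination cone.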
We define a regularized variant of the Dual Dynamic Programming algorithm, called DDP-REG, to solve nonlinear dynamic programming equations, and extend the algorithm to solve nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as an extension of a recently introduced regularization of the Stochastic Dual Dynamic Programming (SDDP) algorithm, which was studied only for linear problems and with less general prox-centers. We show the convergence of DDP-REG and SDDP-REG. We assess the performance of DDP-REG and SDDP-REG on portfolio models with direct transaction and market impact costs. In particular, we propose a risk-neutral portfolio selection model which can be cast as a multistage stochastic second-order cone program. The formulation is motivated by the impact of market impact costs on large portfolio rebalancing operations. Numerical simulations show that DDP-REG is much quicker than DDP on all problem instances considered (up to 184 times quicker) and that SDDP-REG is quicker on the tested instances of portfolio selection problems with market impact costs and much faster (8.2 times) on the implemented instance of a risk-neutral multistage stochastic linear program.
We propose a new medical evacuation (MEDEVAC) model with endogenous uncertainty in the casualty delivery times. The goal is to provide timely medical treatment to injured soldiers and a prompt evacuation via air ambulances. The model determines where to locate medical treatment facilities (MTFs) and air ambulances, how to dispatch air ambulances to the point of injury, and to which MTF to channel each casualty. The model captures the effect of a delayed MEDEVAC response on the survivability of soldiers, enforces the Golden Hour evacuation doctrine, and represents the availability of air ambulances as an endogenous source of uncertainty since it is contingent on the locations of MTFs. The MEDEVAC model is an MINLP problem whose continuous relaxation is in general nonconvex and for which we develop a new algorithmic method articulated around two main components: (i) new bounding techniques obtained through the solution of restriction and relaxation problems, and (ii) a spatial branch-and-bound algorithm solving conic mixed-integer programs at each node of the tree. The computational study, based on data from Operation Enduring Freedom, reveals that the bounding problems can be quickly solved regardless of the problem size, the bounds are tight, and the spatial branch-and-bound dominates the CPLEX and BARON solvers in terms of both computational time and robustness. Compared to the standard MEDEVAC myopic policy, our approach increases the number of casualties treated in a timely manner and contributes to reducing the number of deaths on the battlefield. The benefits increase as the MEDEVAC resources become tighter and combat intensifies. The model can be used at the strategic level to design an efficient MEDEVAC system and at the tactical level for intelligent tasking and dispatching. Additionally, this study provides valuable contributions to the civilian emergency care community and to the MINLP discipline.
We propose a new fractional stochastic integer programming model for forestry revenue management. The model takes into account the main sources of uncertainty (wood prices and tree growth) and maximizes a reliability-to-stability revenue ratio that reflects two major goals pursued by forest owners. The model includes a joint chance constraint with a multi-row random technology matrix to account for reliability and a joint integrated chance constraint to account for stability. We propose a reformulation framework to obtain an equivalent mixed-integer linear programming formulation amenable to a numerical solution. We use a Boolean modeling framework to reformulate the chance constraint and a series of linearization techniques to handle the nonlinearities due to the joint integrated chance constraint, the fractional objective function, and the bilinear terms. The computational study attests that the reformulation of the model can handle a large number of scenarios and can be solved efficiently for sizable forest harvesting problems.
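One standard device for linearizing a bilinear term z = x*y with y binary and 0 <= x <= U (a textbook technique, possibly among the linearizations alluded to above) replaces the product with four linear inequalities. The checker below verifies that the system admits exactly the points with z = x*y:

```python
def linearized_feasible(x, y, z, U, eps=1e-9):
    """Check the standard big-M system that enforces z = x*y for binary y
    and 0 <= x <= U:  z <= U*y,  z <= x,  z >= x - U*(1-y),  z >= 0."""
    return (z <= U * y + eps and z <= x + eps
            and z >= x - U * (1 - y) - eps and z >= -eps)
```

When y = 1 the first and third inequalities pin z to x; when y = 0 the first and fourth pin z to 0, so the only feasible z is the true product.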
Widespread outbreaks of infectious disease, i.e., the so-called pandemics that may travel quickly and silently beyond boundaries, can significantly increase morbidity and mortality over large-scale geographical areas. They commonly result in enormous economic losses, political disruptions, and social unrest, and quickly evolve into a national security concern. Societies have been shaped by pandemics and outbreaks for as long as we have had societies. While differing in nature and in realizations, they all place the normal life of modern societies on hold. Common interruptions include job loss, infrastructure failure, and political ramifications. The electric power system, upon which our modern society relies, drives a myriad of interdependent services, such as water systems, communication networks, transportation systems, and health services. With the sudden shifts in electric power generation and demand portfolios and the need to sustain quality electricity supply to end customers (particularly mission-critical services) during pandemics, safeguarding the nation's electric power grid in the face of such rapidly evolving outbreaks is among the top priorities. This paper explores the various mechanisms through which the electric power grids around the globe are influenced by pandemics in general and COVID-19 in particular, shares the lessons learned and best practices adopted in different sectors of the electric industry in responding to the dramatic shifts enforced by such threats, and provides visions for a pandemic-resilient electric grid of the future.
Production and Operations Management, Jul 22, 2021
Inspired by the opportunities provided by Industry 4.0 technologies for smarter, risk-informed, safer, and resilient operation, control, and management of lifeline critical networks, this study investigates mobility-as-a-service for resilience delivery during natural disasters. Focusing on effective service restoration in power distribution systems, we introduce mobile power sources (MPSs) as the restoration technology of the future, whose mobility can be harnessed for spatiotemporal flexibility exchange and effective response and recovery during disasters. We present automated decision-making solutions that coordinate MPS utilization with repair crew (RC) schedules, taking into account constraints in both energy and transportation networks. When integrated, the suggested technology, aided by the proposed optimization models, has the potential to disrupt the current practice in boosting the resilience and operational endurance of mission-critical systems and services during disasters, ultimately resulting in enriched social welfare and national security.
We consider a lender (bank) who determines the optimal loan price (interest rate) to offer to prospective borrowers under uncertain borrower response and default risk. A borrower may or may not accept the loan at the price offered, and both the principal loaned and the interest income become uncertain due to the risk of default. We present a risk-based loan pricing optimization framework that explicitly takes into account the marginal risk contribution, the portfolio risk, and a borrower's acceptance probability. Marginal risk assesses the incremental risk contribution of a prospective loan to the bank's overall portfolio risk by capturing the dependencies between the prospective loan and the existing portfolio, and is evaluated with respect to the Value-at-Risk and Conditional Value-at-Risk measures. We examine the properties and computational challenges of the formulations. We design a reformulation method based on the concavifiability concept to transform the nonlinear objective functions and to derive equivalent mixed-integer nonlinear reformulations with convex continuous relaxations. We also extend the approach to the multi-loan pricing problems, which feature explicit loan selection decisions in addition to pricing decisions. We derive formulations with multiple loans that take the form of mixed-integer nonlinear problems with nonconvex continuous relaxations and develop a computationally efficient algorithmic method. We provide numerical evidence demonstrating the value of the proposed framework, test the computational tractability, and discuss managerial implications.
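A stylized version of the single-loan trade-off multiplies an acceptance probability, decreasing in the offered rate, by the expected margin net of default losses. The logistic response and all parameter values below are hypothetical, chosen only to illustrate why such objectives are nonlinear in the rate; this is not the paper's formulation, which also accounts for portfolio and marginal risk.

```python
import math

def expected_profit(rate, funding_cost, default_prob, a=20.0, b=0.05):
    """Toy expected profit of offering interest rate `rate`: a logistic
    acceptance probability (a, b are hypothetical borrower-response
    parameters) times the expected margin net of default losses."""
    accept = 1.0 / (1.0 + math.exp(a * (rate - b)))
    margin = (1 - default_prob) * (rate - funding_cost) - default_prob
    return accept * margin

def best_rate(funding_cost, default_prob, grid_step=0.0005):
    """Grid search for the profit-maximizing rate over 0.05% .. 20%."""
    rates = [i * grid_step for i in range(1, 401)]
    return max(rates, key=lambda r: expected_profit(r, funding_cost, default_prob))
```

Raising the rate increases the margin but lowers the acceptance probability, so the optimum is interior; the grid search stands in for the paper's concavifiability-based exact method.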
This paper proposes a multistage stochastic programming approach for the asset-liability management of Brazilian pension funds. We generate asset price scenarios with stochastic differential equations: a Geometric Brownian Motion model for stocks and a Cox-Ingersoll-Ross model for fixed-income securities. Intertemporal solvency regulatory rules for Brazilian pension funds are considered endogenously in the model and enforced with a combinatorial constraint. A VaR probabilistic constraint is incorporated to obtain a positive funding ratio at each time period with high probability. Our approach uses multiple trees to provide a representative characterization of the uncertainty and is not computationally prohibitive. We evaluate the insolvency probability under different initial funding ratios through extensive simulations. The study reveals that the likely decrease of interest rate premiums in the coming years will force pension fund managers to significantly change their portfolio strategies. They will have to take more risk in order to deliver the cash flows required to cover the liabilities and satisfy the regulatory constraints.
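The Geometric Brownian Motion used for stock scenarios admits an exact discretization. A minimal path simulator in that spirit (the parameter values are hypothetical, not the paper's calibration) is:

```python
import math
import random

def gbm_path(s0, mu, sigma, dt, n_steps, rng):
    """One Geometric Brownian Motion price path via the exact discretization
    S_{t+dt} = S_t * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * Z),
    with Z standard normal drawn from the supplied rng."""
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path
```

Scenario trees are then built by sampling many such paths at the rebalancing dates; with zero volatility the recursion reduces to deterministic exponential growth, a useful sanity check.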
This paper proposes a distributionally robust chance-constrained (DRCC) optimization model for optimal topology control in power grids facing significant renewable uncertainties. A novel moment-based ambiguity set is characterized to capture the renewable uncertainties with no knowledge of the probability distributions of the random parameters. A distributionally robust optimization (DRO) formulation is proposed to guarantee the robustness of the network topology control plans against all uncertainty distributions defined within the moment-based ambiguity set. The proposed model minimizes the system operation cost by co-optimizing the dispatch of lower-cost generating units and the network topology, i.e., dynamically controlling how electricity flows through the system. In order to solve the problem, the DRCC problem is reformulated into a tractable mixed-integer second-order cone programming (MISOCP) problem which can be efficiently solved by off-the-shelf solvers. Numerical results on the IEEE 118-bus test system verify the effectiveness of the proposed network reconfiguration methodology under uncertainties.
Enhanced indexation is a structured investment approach that combines passive and active financial management techniques. We propose an enhanced indexation model whose goal is to maximize the excess return that can be attained with high reliability, while ensuring that the relative market risk does not exceed a specified limit. We measure the relative risk with the coherent semideviation risk functional and model the asset returns as random variables. We consider that the probability distributions of the index fund and excess returns are imperfectly known and belong to a class of distributions characterized by an ellipsoidal distributional set. We provide a game theoretical formulation for the enhanced indexation problem in which we maximize the minimum excess return over all allowable probability distributions. The variance of the excess return is calculated with a computationally efficient method that avoids model specification issues. Finally, we show that the game theoretical model can be recast as a convex programming problem and discuss the results of numerical experiments.
This study revisits the celebrated p-efficiency concept introduced by Prékopa [23] and defines a p-efficient point (pLEP) as a combinatorial pattern. The new definition uses elements from the combinatorial pattern recognition field and is based on the combinatorial pattern framework for stochastic programming problems proposed in [16]. The approach is based on the binarization of the probability distribution and the generation of a consistent partially defined Boolean function representing the combination (F, p) of the binarized probability distribution F and the enforced probability level p. A combinatorial pattern provides a compact representation of the defining characteristics of a pLEP and opens the door to new methods for the generation of pLEPs. We show that a combinatorial pattern representing a pLEP constitutes a strong and prime pattern, and we derive it through the solution of an integer programming problem. Next, we demonstrate that the (finite) collection of pLEPs can be represented as a disjunctive normal form (DNF) and propose a mixed-integer programming formulation allowing for the construction of the DNF, which is shown to be prime and irreducible. We illustrate the proposed method on a problem studied by Prékopa [25].
We develop a new modeling and exact solution method for stochastic programming problems that include a joint probabilistic constraint in which the multi-row random technology matrix is discretely distributed. We binarize the probability distribution of the random variables in such a way that we can extract a threshold partially defined Boolean function (pdBf) representing the probabilistic constraint. We then construct a tight threshold Boolean minorant for the pdBf. Any separating structure of the tight threshold Boolean minorant defines sufficient conditions for the satisfaction of the probabilistic constraint and takes the form of a system of linear constraints. We use the separating structure to derive three new deterministic formulations equivalent to the studied stochastic problem. We derive a set of strengthening valid inequalities for the reformulated problems. A crucial feature of the new integer formulations is that the number of integer variables does not depend on the number of scenarios used to represent uncertainty. The computational study, based on instances of the stochastic capital rationing problem, shows that the MIP reformulations are much easier and orders of magnitude faster to solve than the MINLP formulation. The method integrating the derived valid inequalities in a branch-and-bound algorithm has the best performance.
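For a scenario-represented random technology matrix, the joint probabilistic constraint can always be checked by brute force, scenario by scenario; the contribution above is precisely to avoid this scenario-count dependence, but the check clarifies what the constraint requires:

```python
def joint_chance_satisfied(x, scenarios, rhs, p):
    """Check P(T(w) x >= rhs) >= p for equally likely scenarios, where each
    scenario is a realization of the random technology matrix (a list of
    rows). The joint constraint holds in a scenario only if every row of
    that scenario's matrix is satisfied."""
    def ok(T):
        return all(sum(t * xi for t, xi in zip(row, x)) >= r
                   for row, r in zip(T, rhs))
    return sum(ok(T) for T in scenarios) >= p * len(scenarios)
```

A naive MIP encoding would attach one binary indicator per scenario to this count; the Boolean-minorant reformulations replace that with a system whose integer-variable count is scenario-independent.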
European Journal of Operational Research, Dec 1, 2010
Probabilistically constrained problems, in which the random variables are finitely distributed, are nonconvex in general and hard to solve. The p-efficiency concept has been widely used to develop efficient methods to solve such problems. Those methods require the generation of p-efficient points (pLEPs) and use an enumeration scheme to identify pLEPs. In this paper, we consider a random vector characterized by a finite set of scenarios and generate pLEPs by solving a mixed-integer programming (MIP) problem. We solve this computationally challenging MIP problem with a new mathematical programming framework. It involves solving a series of increasingly tighter outer approximations and employs, as algorithmic techniques, a bundle preprocessing method, strengthening valid inequalities, and a fixing strategy. The method is exact (resp., heuristic) and ensures the generation of pLEPs (resp., quasi pLEPs) if the fixing strategy is not (resp., is) employed, and it can be used to generate multiple pLEPs. To the best of our knowledge, generating a set of pLEPs using an optimization-based approach and developing effective methods for the application of the p-efficiency concept to the random variables described by a finite set of scenarios are novel. We present extensive numerical results that highlight the computational efficiency and effectiveness of the overall framework and of each of the specific algorithmic techniques.
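A p-efficient point of a discrete distribution is a minimal point v with F(v) = P(X <= v) >= p. For tiny instances they can be enumerated directly, which clarifies what the MIP-based generation computes; the brute force below is illustrative only and scales exponentially in the dimension:

```python
from itertools import product

def p_efficient_points(scenarios, p):
    """Enumerate the p-efficient points of the empirical distribution of a
    discrete random vector: minimal v (componentwise) with F(v) >= p.
    scenarios: equally likely realizations given as tuples."""
    n, dim = len(scenarios), len(scenarios[0])
    grids = [sorted(set(s[d] for s in scenarios)) for d in range(dim)]
    def F(v):
        return sum(all(s[d] <= v[d] for d in range(dim)) for s in scenarios) / n
    cands = [v for v in product(*grids) if F(v) >= p]
    # keep only minimal candidates: no other candidate is componentwise <=
    return [v for v in cands
            if not any(w != v and all(w[d] <= v[d] for d in range(dim))
                       for w in cands)]
```

Candidates can be restricted to the grid of observed marginal values because F only changes there.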
European Journal of Operational Research, May 1, 2014
Administrators/Decision Makers (DMs) responsible for making locational decisions for public facilities have many other overriding factors to consider that dominate traditional OR/MS objectives related to response time. We propose that an appropriate role for the OR/MS analyst is to help the DMs identify a good set of solutions rather than an optimal solution that may not be practical. In this paper, good solutions can be generated/prescribed assuming that the DMs have (i) a dispersion criterion that ensures a minimum distance between every pair of facilities, (ii) a population criterion which stipulates that the distance from a demand point to its closest facility is inversely proportional to its population, and (iii) an equity criterion which stipulates that no demand point is farther than a specified distance from its closest facility. We define parameters capturing these three criteria and specify values for them based on the p-median solution. Sensitivity analysis with respect to the parameters is performed, and computational results for both real and simulated networks are reported. Our results show close agreement with the p-median solution when decision makers restrict locations to demand points and use parameter values for the population, dispersion, and equity criteria as implied by the p-median solution. The significance of our work is twofold. For practitioners, it is comforting to know that using common-sense measures such as the above criteria results in fairly good solutions. For researchers, it suggests the need for developing techniques for finding the k best solutions of the p-median problem.
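The p-median problem that anchors the parameter choices selects p facility sites minimizing total demand-weighted distance to the closest open facility. A brute-force solver for small instances, shown here only to make the baseline concrete (not the method used in the paper), is:

```python
from itertools import combinations

def p_median(dist, demand, p):
    """Brute-force p-median: choose p facility sites (column indices of the
    distance matrix) minimizing total demand-weighted distance from each
    demand point (row) to its closest open facility. Returns (cost, sites)."""
    m = len(dist[0])
    best = None
    for sites in combinations(range(m), p):
        cost = sum(w * min(dist[i][j] for j in sites)
                   for i, w in enumerate(demand))
        if best is None or cost < best[0]:
            best = (cost, sites)
    return best
```

The dispersion, population, and equity parameters discussed above can then be read off the returned solution, e.g., the equity distance as the largest demand-point-to-closest-facility distance it induces.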
We propose a new modeling and solution method for probabilistically constrained optimization problems. The methodology is based on the integration of the stochastic programming and combinatorial pattern recognition fields. It permits the very fast solution of stochastic optimization problems in which the random variables are represented by an extremely large number of scenarios. The method involves the binarization of the probability distribution and the generation of a consistent partially defined Boolean function (pdBf) representing the combination (F, p) of the binarized probability distribution F and the enforced probability level p. We show that the pdBf representing (F, p) can be compactly extended as a disjunctive normal form (DNF). The DNF is a collection of combinatorial p-patterns, each of which defines sufficient conditions for a probabilistic constraint to hold. We propose two linear programming formulations for the generation of p-patterns, which can subsequently be used to derive a linear programming inner approximation of the original stochastic problem. A formulation allowing for the concurrent generation of a p-pattern and the solution of the deterministic equivalent of the stochastic problem is also proposed. Results show that large-scale stochastic problems, in which up to 50,000 scenarios are used to describe the stochastic variables, can be consistently solved to optimality within a few seconds.
We study an extended set of Mean-Gini portfolio optimization models that encompasses a general ve... more We study an extended set of Mean-Gini portfolio optimization models that encompasses a general version of the mean-risk formulation, the Minimal Gini model (MinG) that minimizes Gini's Mean Differences, and the new risk-adjusted Mean-Gini Ratio (MGR) model. We analyze the properties of the various models, prove that a performance measure based on a Risk Adjusted version of the Mean Gini Ratio (RAMGR) is coherent, and establish the equivalence between maximizing this performance measure and solving for the maximal Mean-Gini ratio. We propose a linearization approach for the fractional programming formulation of the MGR model. We also conduct a thorough evaluation of the various Mean-Gini models based on four data sets that represent combinations of bullish and bearish scenarios in the in-sample and out-of-sample phases. The performance is (i) analyzed with respect to eight return, risk, and risk-adjusted criteria, (ii) benchmarked with the S&P500 index, and (iii) compared with their Mean-Variance counterparts for varying risk aversion levels and with the Minimal CVaR and Minimal Semi-Deviation models. For the data sets used in our study, our results suggest that the various Mean-Gini models almost always result in solutions that outperform the S&P500 benchmark index with respect to the out-of-sample cumulative return. Further, particular instances of Mean-Gini models result in solutions that are as good or better (for example, MinG in bullish in-sample scenarios, and MGR in bearish out-of-sample scenarios) than the solutions obtained with their counterparts in Mean-Variance, Minimal CVaR and Minimal Semi-Deviation models.
We study the distributionally robust linearized stable tail adjusted return ratio (DRLSTARR) port... more We study the distributionally robust linearized stable tail adjusted return ratio (DRLSTARR) portfolio optimization problem, in which the objective is to maximize the worst-case linearized stable tail adjusted return ratio (LSTARR) performance measure under data-driven Wasserstein ambiguity. We consider two types of imperfectly known uncertainties, named uncertain probabilities and continuum of realizations, associated with the losses of assets. We account for two typical combinatorial trading constraints, called buy-in threshold and diversification constraints, to reflect stock market restrictions. Leveraging conic duality theory to tackle the distributionally robust worst-case expectation, the proposed problems are reformulated into mixed-integer linear programming problems. We carry out a series of empirical tests to illustrate the scalability and effectiveness of the proposed solution framework, and to evaluate the performance of the DRLSTARR-constructed portfolios. The cross-validation results obtained using a rolling-horizon procedure show the superior out-of-sample performance of the DRLSTARR portfolios under an uncertain continuum of realizations.
Multi-portfolio optimization problems and the incorporation of marginal risk contribution constra... more Multi-portfolio optimization problems and the incorporation of marginal risk contribution constraints have recently received a sustained interest from academia and financial practitioners. We propose a class of new stochastic risk budgeting multi-portfolio optimization models that impose portfolio as well as marginal risk constraints. The models permit the simultaneous and integrated optimization of multiple sub-portfolios in which the marginal risk contribution of each individual security is accounted for. A risk budget defined with a downside risk measure is allocated to each security. We consider the two cases in which the asset universes of the sub-portfolios are either disjoint (diversification of style) or overlap (diversification of judgment). The proposed models take the form of stochastic programming problems and include each a probabilistic constraint with multi-row random technology matrix. We expand a combinatorial modeling framework to represent the feasible set of the chance constraints first as a set of mixed-integer linear inequalities. The new reformulation proposed in this paper is much sparser than previously presented reformulations and allows the efficient solution of problem instances that could not be solved otherwise. We evaluate the efficiency and scalability of the proposed method that is general enough to be applied to general chance-constrained optimization problems. We conduct a cross-validation study via a rolling-horizon procedure to assess the performance of the models, and understand the impact of the parameters and diversification types on the portfolios.
We investigate a class of fractional distributionally robust optimization problems with uncertain probabilities. They consist of maximizing ambiguous fractional functions representing reward-risk ratios and have a semi-infinite programming epigraphic formulation. We derive a new fully parameterized closed-form expression to compute a new bound on the size of the Wasserstein ambiguity ball. We design a data-driven reformulation and solution framework. The reformulation phase involves the derivation of the support function of the ambiguity set and the concave conjugate of the ratio function. We design modular bisection algorithms which enjoy the finite convergence property. This class of problems has wide applicability in finance, and we specify new ambiguous portfolio optimization models for the Sharpe and Omega ratios. The computational study shows the applicability and scalability of the framework to quickly solve large, industry-relevant-size problems, which cannot be solved in one day with state-of-the-art mixed-integer nonlinear programming (MINLP) solvers.
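The epigraphic idea behind ratio maximization can be illustrated with a generic bisection on the ratio level: the best achievable reward-risk ratio is the largest level t for which some feasible solution keeps reward(x) - t * risk(x) nonnegative. The sketch below is illustrative only (the `margin` oracle and the toy candidate set are hypothetical stand-ins, not the paper's actual reformulation), assuming strictly positive risk values.

```python
def max_ratio_bisection(margin, lo, hi, tol=1e-9):
    """Bisection on the ratio level t.

    `margin(t)` returns max_x [reward(x) - t * risk(x)] over the feasible
    set; the optimal reward/risk ratio is the largest t with margin(t) >= 0
    (assuming risk(x) > 0 for all feasible x).
    """
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        if margin(t) >= 0:
            lo = t  # ratio level t is attainable; search higher
        else:
            hi = t  # no feasible x achieves ratio t; search lower
    return lo

# Toy finite candidate set: (reward, risk) pairs (2, 1) and (3, 2).
# The best ratio is max(2/1, 3/2) = 2.
candidates = [(2.0, 1.0), (3.0, 2.0)]
margin = lambda t: max(r - t * q for r, q in candidates)
best = max_ratio_bisection(margin, 0.0, 10.0)
```

In the paper's setting the `margin` oracle would itself be an optimization subproblem; here it is a trivial maximum over two fixed points, kept only to show the monotone feasibility test that makes bisection converge finitely up to tolerance.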
We study a multi-objective portfolio optimization model that employs two conflicting objectives—maximizing mean return, and minimizing risk as measured by the Gini Mean Difference (GMD). We assume that an investor’s implicit utility is a function of these two objectives and help the investor identify the optimal (i.e., most preferred) portfolio among the efficient ones. We develop an interactive solution procedure based on the concept of domination cones that can be used with a class of utility functions defined over Mean-Gini criteria. The investor’s preferences are elicited interactively through pairwise comparisons of efficient Mean-Gini portfolios based on which domination cones are derived to guide the search for the most preferred portfolio. The interactive solution method enjoys a finite convergence property. Computational results illustrating the effectiveness of the interactive procedure and the out-of-sample performance of the optimal portfolios for a range of implicit utility functions are presented. The results indicate that the optimal portfolios defined by our models consistently outperform the S&P 500 index. Further, an out-of-sample performance analysis reveals that a strategy emphasizing mean return over Gini performs best under similar market conditions over the training and testing sets, while a risk-averse strategy emphasizing Gini over mean return performs best under market reversal conditions.
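For readers unfamiliar with the risk measure used above, the Gini Mean Difference of a return sample is simply the average absolute difference over all ordered pairs of distinct observations. A minimal sketch (the function name and sample data are illustrative, not from the paper):

```python
import numpy as np

def gini_mean_difference(returns):
    """Gini Mean Difference: average absolute difference over all
    ordered pairs of distinct return observations."""
    r = np.asarray(returns, dtype=float)
    n = r.size
    pairwise = np.abs(r[:, None] - r[None, :])  # n x n matrix, zero diagonal
    return pairwise.sum() / (n * (n - 1))

gmd = gini_mean_difference([0.01, 0.03, 0.05])
```

Because the GMD is a sum of absolute values of pairwise return differences, it admits linear programming reformulations, which is what makes Mean-Gini portfolio models computationally attractive relative to Mean-Variance.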
We define a regularized variant of the Dual Dynamic Programming algorithm called DDP-REG to solve nonlinear dynamic programming equations. We extend the algorithm to solve nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as an extension of a recently introduced regularization of the Stochastic Dual Dynamic Programming (SDDP) algorithm, which was studied for linear problems only and with less general prox-centers. We show the convergence of DDP-REG and SDDP-REG. We assess the performance of DDP-REG and SDDP-REG on portfolio models with direct transaction and market impact costs. In particular, we propose a risk-neutral portfolio selection model which can be cast as a multistage stochastic second-order cone program. The formulation is motivated by the impact of market impact costs on large portfolio rebalancing operations. Numerical simulations show that DDP-REG is much quicker than DDP on all problem instances considered (up to 184 times quicker than DDP) and that SDDP-REG is quicker on the instances of portfolio selection problems with market impact costs tested and much faster on the instance of the risk-neutral multistage stochastic linear program implemented (8.2 times faster).
We propose a new medical evacuation (MEDEVAC) model with endogenous uncertainty in the casualty delivery times. The goal is to provide timely medical treatment to injured soldiers and a prompt evacuation via air ambulances. The model determines where to locate medical treatment facilities (MTF) and air ambulances, how to dispatch air ambulances to the point-of-injury, and to which MTF to channel the casualty. The model captures the effect of a delayed MEDEVAC response on the survivability of soldiers, enforces the Golden Hour evacuation doctrine, and represents the availability of air ambulances as an endogenous source of uncertainty since it is contingent on the locations of MTFs. The MEDEVAC model is an MINLP problem whose continuous relaxation is in general nonconvex and for which we develop a new algorithmic method articulated around two main components: i) new bounding techniques obtained through the solution of restriction and relaxation problems, and ii) a spatial branch-and-bound algorithm solving conic mixed-integer programs at each node of the tree. The computational study based on data from Operation Enduring Freedom reveals that: the bounding problems can be quickly solved regardless of the problem size; the bounds are tight; and the spatial branch-and-bound dominates the Cplex and Baron solvers in terms of both computational time and robustness. As compared to the standard MEDEVAC myopic policy, our approach increases the number of casualties treated in a timely manner and contributes to reducing the number of deaths on the battlefield. The benefits increase as the MEDEVAC resources become tighter and combat intensifies. The model can be used at the strategic level to design an efficient MEDEVAC system and at the tactical level for intelligent tasking and dispatching.
Additionally, this study provides valuable contributions to the civilian emergency care community and to the MINLP discipline.
We propose a new fractional stochastic integer programming model for forestry revenue management. The model takes into account the main sources of uncertainty, wood prices and tree growth, and maximizes a reliability-to-stability revenue ratio that reflects two major goals pursued by forest owners. The model includes a joint chance constraint with a multi-row random technology matrix to account for reliability and a joint integrated chance constraint to account for stability. We propose a reformulation framework to obtain an equivalent mixed-integer linear programming formulation amenable to a numerical solution. We use a Boolean modeling framework to reformulate the chance constraint and a series of linearization techniques to handle the nonlinearities due to the joint integrated chance constraint, the fractional objective function, and the bilinear terms. The computational study attests that the reformulation of the model can handle a large number of scenarios and can be solved efficiently for sizable forest harvesting problems.
Widespread outbreaks of infectious disease, i.e., the so-called pandemics that may travel quickly and silently beyond boundaries, can significantly increase morbidity and mortality over large-scale geographical areas. They commonly result in enormous economic losses, political disruptions, and social unrest, and quickly evolve into a national security concern. Societies have been shaped by pandemics and outbreaks for as long as we have had societies. While differing in nature and in realizations, they all place the normal life of modern societies on hold. Common interruptions include job loss, infrastructure failure, and political ramifications. The electric power system, upon which our modern society relies, drives a myriad of interdependent services, such as water systems, communication networks, transportation systems, health services, etc. With the sudden shifts in electric power generation and demand portfolios and the need to sustain quality electricity supply to end customers (particularly mission-critical services) during pandemics, safeguarding the nation's electric power grid in the face of such rapidly evolving outbreaks is among the top priorities. This paper explores the various mechanisms through which the electric power grids around the globe are influenced by pandemics in general and COVID-19 in particular, shares the lessons learned and best practices adopted in different sectors of the electric industry in responding to the dramatic shifts enforced by such threats, and provides visions for a pandemic-resilient electric grid of the future.
Production and Operations Management, Jul 22, 2021
Inspired by the opportunities provided by the Industry 4.0 technologies for smarter, risk-informed, safer, and resilient operation, control, and management of the lifeline critical networks, this study investigates mobility-as-a-service for resilience delivery during natural disasters. Focusing on effective service restoration in power distribution systems, we introduce mobile power sources (MPSs) as the restoration technology of the future, the mobility of which can be harnessed for spatiotemporal flexibility exchange and effective response and recovery during disasters. We present automated decision-making solutions that coordinate the MPSs' utilization with repair crew (RC) schedules, taking into account constraints in both energy and transportation networks. When integrated, the suggested technology, aided by the proposed optimization models, has the potential to disrupt current practice by boosting the resilience and operational endurance of mission-critical systems and services during disasters, ultimately enriching social welfare and national security.
We consider a lender (bank) who determines the optimal loan price (interest rate) to offer to prospective borrowers under uncertain borrower response and default risk. A borrower may or may not accept the loan at the price offered, and both the principal loaned and the interest income become uncertain due to the risk of default. We present a risk-based loan pricing optimization framework that explicitly takes into account the marginal risk contribution, the portfolio risk, and a borrower's acceptance probability. Marginal risk assesses the incremental risk contribution of a prospective loan to the bank's overall portfolio risk by capturing the dependencies between the prospective loan and the existing portfolio, and is evaluated with respect to the Value-at-Risk and Conditional Value-at-Risk measures. We examine the properties and computational challenges of the formulations. We design a reformulation method based on the concavifiability concept to transform the nonlinear objective functions and to derive equivalent mixed-integer nonlinear reformulations with convex continuous relaxations. We also extend the approach to the multi-loan pricing problems, which feature explicit loan selection decisions in addition to pricing decisions. We derive formulations with multiple loans that take the form of mixed-integer nonlinear problems with nonconvex continuous relaxations and develop a computationally efficient algorithmic method. We provide numerical evidence demonstrating the value of the proposed framework, test the computational tractability, and discuss managerial implications.
This paper proposes a multistage stochastic programming approach for the asset-liability management of Brazilian pension funds. We generate asset price scenarios with stochastic differential equations: a Geometric Brownian Motion model for stocks and a Cox-Ingersoll-Ross model for fixed-income securities. Intertemporal solvency regulatory rules for Brazilian pension funds are considered endogenously in the model and enforced with a combinatorial constraint. A VaR probabilistic constraint is incorporated to obtain a positive funding ratio at each time period with high probability. Our approach uses multiple trees to provide a representative characterization of the uncertainty and is not computationally prohibitive. We evaluate the insolvency probability under different initial funding ratios through extensive simulations. The study reveals that the likely decrease of interest rate premiums in the coming years will force pension fund managers to significantly change their portfolio strategies. They will have to take more risk in order to deliver the cash flows required to cover the liabilities and satisfy the regulatory constraints.
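The stock-scenario generation step above relies on Geometric Brownian Motion, which can be simulated exactly on a time grid via its log-normal transition. A minimal sketch (function name, parameter values, and grid are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def gbm_paths(s0, mu, sigma, horizon, steps, n_paths, seed=0):
    """Simulate Geometric Brownian Motion price paths on a uniform time
    grid using the exact log-normal transition density."""
    rng = np.random.default_rng(seed)
    dt = horizon / steps
    z = rng.standard_normal((n_paths, steps))
    # Exact log-increments: (mu - sigma^2/2) dt + sigma sqrt(dt) Z
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

paths = gbm_paths(s0=100.0, mu=0.08, sigma=0.2,
                  horizon=1.0, steps=12, n_paths=500)
```

In a multistage model, paths like these would be aggregated into scenario trees rather than used directly; the sketch only shows the simulation primitive.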
This paper proposes a distributionally robust chance-constrained (DRCC) optimization model for optimal topology control in power grids overwhelmed with significant renewable uncertainties. A novel moment-based ambiguity set is characterized to capture the renewable uncertainties with no knowledge of the probability distributions of the random parameters. A distributionally robust optimization (DRO) formulation is proposed to guarantee the robustness of the network topology control plans against all uncertainty distributions defined within the moment-based ambiguity set. The proposed model minimizes the system operation cost by co-optimizing the dispatch of lower-cost generating units and the network topology, i.e., dynamically controlling how electricity flows through the system. To render the problem tractable, the DRCC problem is reformulated into a mixed-integer second-order cone programming (MISOCP) problem which can be efficiently solved by off-the-shelf solvers. Numerical results on the IEEE 118-bus test system verify the effectiveness of the proposed network reconfiguration methodology under uncertainties.
Enhanced indexation is a structured investment approach that combines passive and active financial management techniques. We propose an enhanced indexation model whose goal is to maximize the excess return that can be attained with high reliability, while ensuring that the relative market risk does not exceed a specified limit. We measure the relative risk with the coherent semideviation risk functional and model the asset returns as random variables. We consider that the probability distributions of the index fund and excess returns are imperfectly known and belong to a class of distributions characterized by an ellipsoidal distributional set. We provide a game theoretical formulation for the enhanced indexation problem in which we maximize the minimum excess return over all allowable probability distributions. The variance of the excess return is calculated with a computationally efficient method that avoids model specification issues. Finally, we show that the game theoretical model can be recast as a convex programming problem and discuss the results of numerical experiments.
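As a point of reference for the risk functional mentioned above, a sample lower semideviation penalizes only returns that fall below their mean. The sketch below computes the plain root-mean-square shortfall from the sample mean; it is a generic illustration of a semideviation-type measure, not the specific coherent functional or the distributional treatment used in the paper.

```python
import numpy as np

def semideviation(returns):
    """Lower semideviation: root-mean-square of the shortfalls of the
    returns below their sample mean (upside deviations contribute zero)."""
    r = np.asarray(returns, dtype=float)
    shortfall = np.minimum(r - r.mean(), 0.0)  # keep only downside
    return float(np.sqrt(np.mean(shortfall**2)))

sd = semideviation([-0.02, 0.02])
```

Unlike the variance, which penalizes deviations symmetrically, this measure is unaffected by above-mean outcomes, which is why semideviation-type functionals are natural for tracking relative downside risk against an index.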
This study revisits the celebrated p-efficiency concept introduced by Prékopa [23] and defines a p-efficient point (pLEP) as a combinatorial pattern. The new definition uses elements from the combinatorial pattern recognition field and is based on the combinatorial pattern framework for stochastic programming problems proposed in [16]. The approach is based on the binarization of the probability distribution, and the generation of a consistent partially defined Boolean function representing the combination (F, p) of the binarized probability distribution F and the enforced probability level p. A combinatorial pattern provides a compact representation of the defining characteristics of a pLEP and opens the door to new methods for the generation of pLEPs. We show that a combinatorial pattern representing a pLEP constitutes a strong and prime pattern and we derive it through the solution of an integer programming problem. Next, we demonstrate that the (finite) collection of pLEPs can be represented as a disjunctive normal form (DNF) and propose a mixed-integer programming formulation allowing for the construction of the DNF that is shown to be prime and irreducible. We illustrate the proposed method on a problem studied by Prékopa [25].
We develop a new modeling and exact solution method for stochastic programming problems that include a joint probabilistic constraint in which the multi-row random technology matrix is discretely distributed. We binarize the probability distribution of the random variables in such a way that we can extract a threshold partially defined Boolean function (pdBf) representing the probabilistic constraint. We then construct a tight threshold Boolean minorant for the pdBf. Any separating structure of the tight threshold Boolean minorant defines sufficient conditions for the satisfaction of the probabilistic constraint and takes the form of a system of linear constraints. We use the separating structure to derive three new deterministic formulations equivalent to the studied stochastic problem. We derive a set of strengthening valid inequalities for the reformulated problems. A crucial feature of the new integer formulations is that the number of integer variables does not depend on the number of scenarios used to represent uncertainty. The computational study, based on instances of the stochastic capital rationing problem, shows that the MIP reformulations are much easier and orders of magnitude faster to solve than the MINLP formulation. The method integrating the derived valid inequalities in a branch-and-bound algorithm has the best performance.
European Journal of Operational Research, Dec 1, 2010
Probabilistically constrained problems, in which the random variables are finitely distributed, are nonconvex in general and hard to solve. The p-efficiency concept has been widely used to develop efficient methods to solve such problems. Those methods require the generation of p-efficient points (pLEPs) and use an enumeration scheme to identify pLEPs. In this paper, we consider a random vector characterized by a finite set of scenarios and generate pLEPs by solving a mixed-integer programming (MIP) problem. We solve this computationally challenging MIP problem with a new mathematical programming framework. It involves solving a series of increasingly tighter outer approximations and employs, as algorithmic techniques, a bundle preprocessing method, strengthening valid inequalities, and a fixing strategy. The method is exact (resp., heuristic) and ensures the generation of pLEPs (resp., quasi pLEPs) if the fixing strategy is not (resp., is) employed, and it can be used to generate multiple pLEPs. To the best of our knowledge, generating a set of pLEPs using an optimization-based approach and developing effective methods for the application of the p-efficiency concept to the random variables described by a finite set of scenarios are novel. We present extensive numerical results that highlight the computational efficiency and effectiveness of the overall framework and of each of the specific algorithmic techniques.
European Journal of Operational Research, May 1, 2014
Administrators/Decision Makers (DMs) responsible for making locational decisions for public facilities have many other overriding factors to consider that dominate traditional OR/MS objectives that relate to response time. We propose that an appropriate role for the OR/MS analyst is to help the DMs identify a good set of solutions rather than an optimal solution that may not be practical. In this paper, good solutions can be generated/prescribed assuming that the DMs have (i) a dispersion criterion that ensures a minimum distance between every pair of facilities, (ii) a population criterion which stipulates that the distance from a demand point to its closest facility is inversely proportional to its population, and (iii) an equity criterion which stipulates that no demand point is further than a specified distance from its closest facility. We define parameters capturing these three criteria and specify values for them based on the p-median solution. Sensitivity analysis with respect to the parameters is performed and computational results for both real and simulated networks are reported. Our results show close agreement with the p-median solution when decision makers restrict location to demand points, and use parameter values for the population, dispersion, and equity criteria as implied by the p-median solution. The significance of our work is twofold. For practitioners, it is comforting to know that using common-sense measures such as the above criteria results in fairly good solutions. For researchers, it suggests the need for developing techniques for finding the k best solutions of the p-median problem.
We propose a new modeling and solution method for probabilistically constrained optimization problems. The methodology is based on the integration of the stochastic programming and combinatorial pattern recognition fields. It permits the very fast solution of stochastic optimization problems in which the random variables are represented by an extremely large number of scenarios. The method involves the binarization of the probability distribution, and the generation of a consistent partially defined Boolean function (pdBf) representing the combination (F, p) of the binarized probability distribution F and the enforced probability level p. We show that the pdBf representing (F, p) can be compactly extended as a disjunctive normal form (DNF). The DNF is a collection of combinatorial p-patterns, each of which defines sufficient conditions for a probabilistic constraint to hold. We propose two linear programming formulations for the generation of p-patterns which can be subsequently used to derive a linear programming inner approximation of the original stochastic problem. A formulation allowing for the concurrent generation of a p-pattern and the solution of the deterministic equivalent of the stochastic problem is also proposed. Results show that large-scale stochastic problems, in which up to 50,000 scenarios are used to describe the stochastic variables, can be consistently solved to optimality within a few seconds.
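The binarization step underlying these pattern-based methods can be pictured with a simple indicator construction: each scenario value is mapped to a row of 0/1 attributes indicating whether it meets each cut point. This is a generic illustration of the binarization idea only (function name, values, and cut points are hypothetical), not the paper's consistency-preserving cut-point selection.

```python
import numpy as np

def binarize(values, cut_points):
    """Binarize a vector of scenario values: entry (k, j) is 1 when
    scenario k's value meets or exceeds cut point j."""
    v = np.asarray(values, dtype=float)
    c = np.asarray(cut_points, dtype=float)
    return (v[:, None] >= c[None, :]).astype(int)

# Three scenario values against two cut points.
beta = binarize([5.0, 12.0, 20.0], cut_points=[10.0, 15.0])
```

Each binary row then serves as an observation of the partially defined Boolean function from which the p-patterns are extracted; choosing the cut points so that this function remains consistent is the part the paper treats formally.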
Papers by Miguel Lejeune