A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities: (i) the optimal terminal wealth X*(T) := X^{ϕ*}(T) of the problem to maximize the expected U-utility of the terminal wealth X^ϕ(T) generated by admissible portfolios ϕ(t), 0 ≤ t ≤ T, in a market with the risky asset price process modeled as a semimartingale; (ii) the optimal scenario dQ*/dP of the dual problem to minimize the expected V-value of dQ/dP over a family of equivalent local martingale measures Q, where V is the convex conjugate function of the concave function U.
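For readers who want the relation written out, here is a brief reminder of the standard duality objects involved. This is textbook background, not a formula quoted from the paper, and y* is our notation for the Lagrange multiplier matching the initial budget constraint.

```latex
% Convex conjugate of the concave utility U (standard definition):
V(y) \;=\; \sup_{x>0}\bigl\{\,U(x) - xy\,\bigr\}, \qquad y>0,
\qquad\text{so that}\qquad V'(y) \;=\; -(U')^{-1}(y).

% Classical first-order relation between the primal and dual optimizers,
% valid under suitable regularity and no-arbitrage conditions
% (y^* > 0 is the multiplier associated with the initial wealth constraint):
X^{*}(T) \;=\; (U')^{-1}\!\Bigl(y^{*}\,\tfrac{dQ^{*}}{dP}\Bigr)
         \;=\; -\,V'\!\Bigl(y^{*}\,\tfrac{dQ^{*}}{dP}\Bigr).
```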
The purpose of this paper is to study optimal control of conditional McKean-Vlasov (mean-field) stochastic differential equations with jumps (conditional McKean-Vlasov jump diffusions, for short). To this end, we first prove a stochastic Fokker-Planck equation for the conditional law of the solution of such equations. Combining this equation with the original state equation, we obtain a Markovian system for the state and its conditional law. Furthermore, we apply this to formulate a Hamilton-Jacobi-Bellman (HJB) equation for the optimal control of conditional McKean-Vlasov jump diffusions. Then we study the situation when the law is absolutely continuous with respect to Lebesgue measure. In that case the Fokker-Planck equation reduces to a stochastic partial differential equation (SPDE) for the Radon-Nikodym derivative of the conditional law. Finally we apply these results to solve explicitly the following problems: • Linear-quadratic optimal control of conditional stochastic McKean-Vlasov jump diffusions. • Optimal consumption from a cash flow modelled as a conditional stochastic McKean-Vlasov differential equation with jumps.
Journal of Mathematical Analysis and Applications, Aug 1, 2019
We use a white noise approach to study the problem of optimal inside control of a stochastic delay equation driven by a Brownian motion B and a Poisson random measure N. In particular, we use Hida-Malliavin calculus and the Donsker delta functional to study the problem.
Stochastics: An International Journal of Probability and Stochastic Processes, Mar 24, 2017
We introduce the concept of singular recursive utility. This leads to a kind of singular BSDE which, to the best of our knowledge, has not been studied before. We give conditions for existence and uniqueness of a solution of this kind of singular BSDE. Furthermore, we analyze the problem of maximizing the singular recursive utility. We derive sufficient and necessary maximum principles for this problem, and connect it to the Skorohod reflection problem. Finally, we apply our results to a specific cash flow. In this case, we find that the optimal consumption rate is given by the solution to the corresponding Skorohod reflection problem.
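Since the optimal consumption rate above is characterized via a Skorohod reflection problem, the following is a minimal numerical sketch of the standard Skorokhod map at zero applied to a discretized path. It is our own illustration (the function name and the test path are ours), not code from the paper.

```python
import numpy as np

def skorokhod_reflection(x):
    """Standard Skorokhod map at 0 for a discretized path x with x[0] >= 0.

    Returns (y, l) where y = x + l >= 0, l is nondecreasing, l[0] = 0,
    and l increases only when y touches 0.
    """
    x = np.asarray(x, dtype=float)
    # Minimal nondecreasing pushing process: l(t) = sup_{s <= t} max(-x(s), 0)
    l = np.maximum.accumulate(np.maximum(-x, 0.0))
    y = x + l
    return y, l

# Example: reflect a Brownian-like random walk at zero
rng = np.random.default_rng(0)
path = np.cumsum(rng.normal(scale=0.1, size=1000))
reflected, pushing = skorokhod_reflection(path)
assert reflected.min() >= -1e-12  # stays nonnegative
```

The pushing process l is the smallest nondecreasing process keeping y = x + l nonnegative, which is the type of constraint that reflection-type consumption problems impose.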
HAL (Le Centre pour la Communication Scientifique Directe), Nov 9, 2011
In the first part, we consider general singular control problems for random fields given by a stochastic partial differential equation (SPDE). We show that under some conditions the optimal singular control can be identified with the solution of a coupled system consisting of the SPDE and a kind of reflected backward SPDE (RBSPDE). In the second part, existence and uniqueness of solutions of RBSPDEs are established, which is of independent interest.
We consider general singular control problems for random fields given by a stochastic partial differential equation (SPDE). We show that under some conditions the optimal singular control can be identified with the solution of a coupled system consisting of the SPDE and a reflected backward SPDE (RBSPDE). As an illustration we apply the result to a singular optimal harvesting problem for a population whose density is modeled as a stochastic reaction-diffusion equation. Existence and uniqueness of solutions of RBSPDEs are established, as well as comparison theorems. We then establish a relation between RBSPDEs and optimal stopping of SPDEs, and apply the result to a risk-minimizing stopping problem.
We study optimal insider control problems, i.e. optimal control problems of stochastic systems where the controller at any time t, in addition to knowledge about the history of the system up to this time, also has additional information related to a future value of the system. Since this puts the associated controlled systems outside the context of semimartingales, we apply anticipative white noise analysis, including forward integration and Hida-Malliavin calculus, to study the problem. Combining this with Donsker delta functionals we transform the insider control problem into a classical (but parametrised) adapted control system, albeit with a non-classical performance functional. We establish a sufficient and a necessary maximum principle for such systems. Then we apply the results to obtain explicit solutions for some optimal insider portfolio problems in financial markets described by Itô-Lévy processes. Finally, in the Appendix we give a brief survey of the concepts and results we need from the theory of white noise, forward integrals and Hida-Malliavin calculus.
We combine stochastic control methods, white noise analysis and Hida-Malliavin calculus applied to the Donsker delta functional to obtain explicit representations of semimartingale decompositions under enlargement of filtrations. Some of the expressions are more explicit than previously known. The results are illustrated by examples.
Stochastic Analysis and Applications, Jan 11, 2002
We study how the value function (minimal cost function) V_c of certain impulse control problems depends on the intervention cost c. We consider the case when the cost of an intervention of size ζ ∈ R is given by c + λ|ζ|, with c ≥ 0 and λ > 0 constants, and we show (under some assumptions) that V_c is very sensitive (non-robust) to an increase in c near c = 0, in the sense that (dV_c/dc)|_{c=0} = +∞.
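Written out in display form (our formatting of the statement above, under the same assumptions):

```latex
% Intervention cost (c >= 0, lambda > 0 constants):
%   cost of an intervention of size  \zeta \in \mathbb{R}  is  c + \lambda |\zeta|.
% Non-robustness of the value function at c = 0:
\frac{dV_c}{dc}\bigg|_{c=0^{+}} \;=\; +\infty .
```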
In this paper we study mean-field backward stochastic differential equations (mean-field BSDEs) of the form dY(t) = −f(t, Y(t), Z(t), K(t, ·), E[ϕ(Y(t), Z(t), K(t, ·))]) dt + Z(t) dB(t) + ∫_{R_0} K(t, ζ) Ñ(dt, dζ), where B is a Brownian motion and Ñ is the compensated Poisson random measure. Under some mild conditions, we prove the existence and uniqueness of the solution triplet (Y, Z, K). It is commonly believed that there is no comparison theorem for general mean-field BSDEs. However, we prove a comparison theorem for a subclass of these equations. When the mean-field BSDE is linear, we give an explicit formula for the first component Y(t) of the solution triplet. Our results are applied to solve a mean-field recursive utility optimization problem in finance.
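For orientation only: in the classical Brownian case without mean-field terms or jumps (so not the more general setting treated above, whose formula extends this one), the explicit representation of the first component of a linear BSDE is standard and reads as follows.

```latex
% Classical linear BSDE (Brownian filtration, no jumps, no mean-field term):
%   -dY(t) = ( \varphi(t) + \beta(t) Y(t) + \mu(t) Z(t) ) dt - Z(t) dB(t),   Y(T) = \xi.
% Explicit formula for the first component:
Y(t) \;=\; E\Bigl[\, \xi\,\Gamma_{t,T} + \int_t^T \Gamma_{t,s}\,\varphi(s)\,ds \;\Bigm|\; \mathcal{F}_t \Bigr],
\qquad
\Gamma_{t,s} \;=\; \exp\!\Bigl( \int_t^s \bigl(\beta(r) - \tfrac{1}{2}\mu(r)^2\bigr)\,dr
                               + \int_t^s \mu(r)\,dB(r) \Bigr).
```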
We study the problem of optimal control for mean-field stochastic partial differential equations (stochastic evolution equations) driven by a Brownian motion and an independent Poisson random measure, in the case of partial information control. One important novelty of our problem is represented by the introduction of general mean-field operators, acting on both the controlled state process and the control process. We first formulate a sufficient and a necessary maximum principle for this type of control. We then prove existence and uniqueness of the solution of such general forward and backward mean-field stochastic partial differential equations. We finally apply our results to find the explicit optimal control for an optimal harvesting problem.
Solutions of stochastic Volterra (integral) equations are not Markov processes, and therefore classical methods, like dynamic programming, cannot be used to study optimal control problems for such equations. However, we show that, by using Malliavin calculus, it is possible to formulate a modified functional type of maximum principle suitable for such systems. This principle also applies to situations where the controller has only partial information available to base her decisions upon. We present both a sufficient and a necessary maximum principle of this type, and then we use the results to study some specific examples. In particular, we solve an optimal portfolio problem in a financial market model with memory.
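To make the non-Markovian structure concrete, here is a small simulation sketch (our own, with illustrative kernel functions, not the model of the paper) of an Euler scheme for a stochastic Volterra equation X(t) = x0 + ∫_0^t b(t,s,X(s)) ds + ∫_0^t σ(t,s,X(s)) dB(s). Note how each new value requires the whole past of the path, which is exactly why dynamic programming fails.

```python
import numpy as np

def simulate_volterra(x0, b, sigma, T=1.0, n=500, seed=1):
    """Euler scheme for X(t) = x0 + int_0^t b(t,s,X(s)) ds + int_0^t sigma(t,s,X(s)) dB(s).

    b and sigma are functions (t, s, x) -> value; illustrative only.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dB = rng.normal(scale=np.sqrt(dt), size=n)
    X = np.empty(n + 1)
    X[0] = x0
    for i in range(1, n + 1):
        s = t[:i]        # past time points
        Xs = X[:i]       # the whole past of the path is needed at every step
        drift = np.sum(b(t[i], s, Xs) * dt)
        noise = np.sum(sigma(t[i], s, Xs) * dB[:i])
        X[i] = x0 + drift + noise
    return t, X

# Illustrative fading-memory (exponential) kernels; placeholders, not from the paper.
t, X = simulate_volterra(
    x0=1.0,
    b=lambda t, s, x: -0.5 * np.exp(-(t - s)) * x,
    sigma=lambda t, s, x: 0.2 * np.exp(-(t - s)),
)
```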
The purpose of this paper is to study optimal control of conditional McKean-Vlasov (mean-field) stochastic differential equations with jumps (conditional McKean-Vlasov jump diffusions, for short). To this end, we first prove a stochastic Fokker-Planck equation for the conditional law of the solution of such equations. Combining this equation with the original state equation, we obtain a Markovian system for the state and its conditional law. Furthermore, we apply this to formulate a Hamilton-Jacobi-Bellman (HJB) equation for the optimal control of conditional McKean-Vlasov jump diffusions. Then we study the situation when the law is absolutely continuous with respect to Lebesgue measure. In that case the Fokker-Planck equation reduces to a stochastic partial differential equation (SPDE) for the Radon-Nikodym derivative of the conditional law. Finally we apply these results to solve explicitly the following problems: • Linear-quadratic optimal control of conditional stochastic McKean-Vlasov jump diffusions. • Optimal consumption from a cash flow modelled as a conditional stochastic McKean-Vlasov differential equation with jumps.
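For intuition, conditional (common-noise) McKean-Vlasov dynamics are often approximated by an interacting particle system in which all particles share one noise source; the empirical distribution of the particles then approximates the conditional law given the common noise. The sketch below is our own illustration with made-up coefficients and with jumps omitted, not the model of the paper.

```python
import numpy as np

def conditional_mckean_vlasov_particles(n_particles=2000, T=1.0, n_steps=200, seed=2):
    """Particle approximation of dX = a*(E[X | common noise] - X) dt + sigma dB_i + sigma0 dW.

    B_i are idiosyncratic Brownian motions, W is a common Brownian motion shared by
    all particles; the empirical mean proxies the conditional mean given W.
    The coefficients a, sigma, sigma0 are illustrative placeholders.
    """
    a, sigma, sigma0 = 1.0, 0.3, 0.2
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.ones(n_particles)
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt))                    # common noise increment
        dB = rng.normal(scale=np.sqrt(dt), size=n_particles)  # idiosyncratic increments
        cond_mean = X.mean()  # empirical proxy for E[X(t) | sigma-algebra of W]
        X = X + a * (cond_mean - X) * dt + sigma * dB + sigma0 * dW
    return X  # approximate samples from the conditional law at time T

samples = conditional_mckean_vlasov_particles()
print(samples.mean(), samples.std())
```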
We study the problem of optimal inside control of a stochastic delay equation driven by a Brownian motion and a Poisson random measure. We prove a sufficient and a necessary maximum principle for the optimal control when the trader from the beginning has inside information about the future value of some random variable related to the system. The results are applied to the problem of finding the optimal insider portfolio in a financial market where the risky asset price is given by a stochastic delay equation.
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Journal of Optimization Theory and Applications, Feb 20, 2018
We study the problem of optimal control for mean-field stochastic partial differential equations (stochastic evolution equations) driven by a Brownian motion and an independent Poisson random measure, in the case of partial information control. One important novelty of our problem is represented by the introduction of general mean-field operators, acting on both the controlled state process and the control process. We first formulate a sufficient and a necessary maximum principle for this type of control. We then prove existence and uniqueness of the solution of such general forward and backward mean-field stochastic partial differential equations. We finally apply our results to find the explicit optimal control for an optimal harvesting problem.
We consider a problem of optimal control of an infinite horizon system governed by forward-backward stochastic differential equations with delay. Sufficient and necessary maximum principles for optimal control under partial information in infinite horizon are derived. We illustrate our results by an application to a problem of optimal consumption with respect to recursive utility from a cash flow with delay.
In this article we consider a stochastic optimal control problem where the dynamics of the state process, X(t), is a controlled stochastic differential equation with jumps, delay and noisy memory. The term noisy memory is, to the best of our knowledge, new. By this we mean that the dynamics of X(t) depend on ∫_{t−δ}^{t} X(s) dB(s) (where B(t) is a Brownian motion). Hence, the dependence is noisy because of the Brownian motion, and it involves memory due to the influence from the previous values of the state process. We derive necessary and sufficient maximum principles for this stochastic control problem in two different ways, resulting in two sets of maximum principles. The first set of maximum principles is derived using Malliavin calculus techniques, while the second set comes from reduction to a discrete delay optimal control problem and application of previously known results by Øksendal, Sulem and Zhang. The maximum principles also apply to the case where the controller has only partial information, in the sense that the admissible controls are adapted to a sub-σ-algebra of the natural filtration.
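To see how such a noisy-memory term behaves numerically, the term Z(t) = ∫_{t−δ}^{t} X(s) dB(s) can be updated incrementally along an Euler discretization, as in the sketch below. This is our own illustration: the drift and volatility used for X are placeholders chosen only to show how Z enters, jumps are omitted, and nothing here is the controlled system of the paper.

```python
import numpy as np

def simulate_noisy_memory(T=1.0, delta=0.1, n=1000, seed=3):
    """Euler sketch of a state with noisy memory Z(t) = int_{t-delta}^{t} X(s) dB(s).

    Placeholder dynamics: dX = -Z dt + 0.2 dB (illustrative only).
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    lag = int(round(delta / dt))        # number of steps in the memory window
    dB = rng.normal(scale=np.sqrt(dt), size=n)
    X = np.zeros(n + 1)
    Z = np.zeros(n + 1)
    X[0] = 1.0
    for i in range(n):
        # Incremental update of the moving stochastic integral:
        # add the newest increment X(t) dB(t) and, once the window is full,
        # drop the increment that has left the window.
        Z[i + 1] = Z[i] + X[i] * dB[i]
        if i - lag >= 0:
            Z[i + 1] -= X[i - lag] * dB[i - lag]
        X[i + 1] = X[i] - Z[i] * dt + 0.2 * dB[i]
    return X, Z
```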
We study the problem of optimal stopping of conditional McKean-Vlasov (mean-field) stochastic differential equations with jumps (conditional McKean-Vlasov jump diffusions, for short). We obtain sufficient variational inequalities for a function to be the value function of such a problem and for a stopping time to be optimal. To achieve this, we combine the state equation for the conditional McKean-Vlasov equation with the associated stochastic Fokker-Planck equation for the conditional law of the solution of the state equation. This gives us a Markovian system which can be handled by using a version of the Dynkin formula. We illustrate our result by solving explicitly two optimal stopping problems for conditional McKean-Vlasov jump diffusions. More specifically, we first find the optimal time to sell in a market with common noise and jumps, and, next, we find the optimal time to quit a project whose state is modelled by a jump diffusion, when the performance functional involves the conditional mean of the state.
We study the problem of optimal inside control of an SPDE (a stochastic evolution equation) driven by a Brownian motion and a Poisson random measure. Our optimal control problem is new in two ways: • (i) The controller has access to inside information, i.e. access to information about a future state of the system, • (ii) The integro-differential operator of the SPDE might depend on the control. In the first part of the paper, we formulate a sufficient and a necessary maximum principle for this type of control problem, in two cases: