Abstract—In this paper, we consider the distributed computation of equilibria arising in monotone stochastic Nash games over continuous strategy sets. Such games arise in settings where the gradient map of the player objectives is a monotone mapping over the Cartesian product of strategy sets, leading to a monotone stochastic variational inequality. We consider the application of projection-based stochastic approximation schemes. However, such techniques are characterized by a key shortcoming: they can accommodate strongly monotone mappings only. In fact, standard extensions of stochastic approximation schemes for merely monotone mappings require the solution of a sequence of related strongly monotone problems, a natively two-timescale scheme. Accordingly, we consider the development of single-timescale techniques for computing equilibria when the associated gradient map does not admit strong monotonicity. We first show that, under suitable assumptions, standard projection schemes...
We consider a Cartesian stochastic variational inequality problem with a monotone map. For this problem, we develop and analyze distributed iterative stochastic approximation algorithms. Such a problem arises, for example, as an equilibrium problem in monotone stochastic Nash games over continuous strategy sets. We introduce two classes of stochastic approximation methods, each of which requires exactly one projection step at every iteration, and we provide convergence analysis for them. Of these, the first is the stochastic iterative Tikhonov regularization method, which necessitates the update of the regularization parameter after every iteration. The second method is a stochastic iterative proximal-point method, where the centering term is updated after every iteration. Notably, we present a generalization of this method where the weighting in the proximal-point method can also be updated after every iteration. Conditions are provided for recovering global convergence in limited coordination ...
This thesis pertains to the development of distributed algorithms in the context of networked multi-agent systems. Such engineered systems may be tasked with a variety of goals, ranging from the solution of optimization problems to the solution of variational inequality problems. Two key complicating characteristics of multi-agent systems are the following: (i) the lack of availability of system-wide information at any given location; and (ii) the absence of any central coordinator. These intricacies make it infeasible to collect all the information at a single location and preclude the use of centralized algorithms. Consequently, a fundamental question in the design of such systems is how to develop algorithms that can support their functioning. Accordingly, our goal lies in developing distributed algorithms that can be implemented at a local level while guaranteeing a global system-level requirement. In such techniques, each agent uses locally available information, ...
We consider a Cartesian stochastic variational inequality problem with a monotone map. For this problem, we develop and analyze distributed iterative stochastic approximation algorithms. Monotone stochastic variational inequalities arise naturally, for instance, from the equilibrium conditions of monotone stochastic Nash games over continuous strategy sets. We introduce two classes of stochastic approximation methods, each of which requires exactly one projection step at every iteration, and provide convergence analysis for them. Of these, the first is the stochastic iterative Tikhonov regularization method, which necessitates the update of the regularization parameter after every iteration. The second method is a stochastic iterative proximal-point method, where the centering term is updated after every iteration. Conditions are provided for recovering global convergence in limited coordination extensions of such schemes, where agents are allowed to choose their steplength sequences, regularization, and centering parameters independently while meeting a suitable coordination requirement. We apply the proposed class of techniques and their limited coordination versions to a stochastic networked rate allocation problem.
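To make the single-projection structure of these two schemes concrete, the following Python sketch runs both updates on a small synthetic monotone problem. The projection, the noisy map, and the steplength, regularization, and weighting sequences are our own illustrative choices; in particular, the exact form of the proximal-point centering update analyzed in the paper may differ from the plausible form shown here.

```python
# Illustrative sketch (not the paper's exact algorithm): single-projection
# stochastic approximation updates for a monotone stochastic VI on a box set.
import numpy as np

rng = np.random.default_rng(0)

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n (stands in for the set X)."""
    return np.clip(x, lo, hi)

def sample_map(x, rng):
    """Noisy sample F(x; xi) of a monotone (here affine) map; purely synthetic."""
    A = np.array([[2.0, 1.0], [-1.0, 2.0]])   # A + A^T is PSD, so x -> Ax + b is monotone
    b = np.array([-1.0, 0.5])
    return A @ x + b + 0.1 * rng.standard_normal(x.size)

x_tik = np.array([0.9, 0.9])     # iterate for the iterative Tikhonov scheme
x_prox = np.array([0.9, 0.9])    # iterate for the iterative proximal-point variant
z = x_prox.copy()                # centering term, refreshed every iteration

for k in range(1, 5001):
    gamma = 1.0 / k**0.7          # diminishing steplength (illustrative choice)
    eps = 1.0 / k**0.25           # regularization parameter, driven to zero more slowly
    theta = 1.0                   # proximal weight; the abstract also allows it to vary

    # Iterative Tikhonov: one projection per iteration, eps updated at every step.
    x_tik = project_box(x_tik - gamma * (sample_map(x_tik, rng) + eps * x_tik))

    # Iterative proximal-point (one plausible form): the center z is refreshed
    # every iteration rather than after an inner loop is solved to completion.
    x_prox_new = project_box(
        x_prox - gamma * (sample_map(x_prox, rng) + theta * (x_prox - z)))
    z = x_prox                    # update the centering term each iteration
    x_prox = x_prox_new

print("Tikhonov iterate:", x_tik, " proximal-point iterate:", x_prox)
```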
In this paper, we consider the distributed computation of equilibria arising in monotone stochastic Nash games over continuous strategy sets. Such games arise in settings where the gradient map of the player objectives is a monotone mapping over the Cartesian product of strategy sets, leading to a monotone stochastic variational inequality. We consider the application of projection-based stochastic approximation schemes. However, such techniques are characterized by a key shortcoming: they can accommodate strongly monotone mappings only. In fact, standard extensions of stochastic approximation schemes for merely monotone mappings require the solution of a sequence of related strongly monotone problems, a natively two-timescale scheme. Accordingly, we consider the development of single-timescale techniques for computing equilibria when the associated gradient map does not admit strong monotonicity. We first show that, under suitable assumptions, standard projection schemes can indeed be extended to allow for strict, rather than strong, monotonicity. Furthermore, we introduce a class of regularized stochastic approximation schemes in which the regularization parameter is updated at every step, leading to a single-timescale method. The scheme is a stochastic extension of an iterative Tikhonov regularization method, and its global convergence is established. To aid in networked implementations, we consider an extension of this result where players are allowed to choose their steplengths independently, and show that if the deviation across their choices is suitably constrained, then the convergence of the scheme may be claimed.
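The networked aspect, in which each player applies the regularized update to its own block with an independently chosen steplength that stays within a common band, can be sketched roughly as follows. The game, the stepsize band, and the parameter sequences are illustrative assumptions, not the conditions established in the paper.

```python
# Illustrative sketch (not the paper's exact scheme): each player updates only
# its own coordinate with a player-specific steplength, while the deviation
# across the steplengths remains bounded and the regularization decays to zero.
import numpy as np

rng = np.random.default_rng(1)
n_players = 3
targets = np.linspace(0.3, 0.7, n_players)   # parameters of the synthetic game

def sample_grad(i, x, rng):
    """Noisy partial-gradient sample for player i in a synthetic monotone game."""
    return 2.0 * (x[i] - targets[i]) + 0.5 * np.mean(x) + 0.05 * rng.standard_normal()

x = rng.uniform(0.2, 0.8, size=n_players)    # players' strategies
offsets = np.array([0.9, 1.0, 1.1])          # players deviate slightly in steplength

for k in range(1, 3001):
    eps = 1.0 / k**0.25                      # regularization parameter, shared here
    for i in range(n_players):
        gamma_i = offsets[i] / k**0.7        # player-specific steplength; deviations
                                             # across players stay within a fixed band
        g = sample_grad(i, x, rng) + eps * x[i]
        x[i] = np.clip(x[i] - gamma_i * g, 0.0, 1.0)   # projection onto [0, 1]

print("approximate equilibrium:", x)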
In this paper, we consider the distributed computation of equilibria arising in monotone stochastic Nash games over continuous strategy sets. Such games arise in settings where the gradient map of the player objectives is a monotone mapping over the Cartesian product of strategy sets, in which case the associated equilibrium conditions can be compactly stated as a monotone stochastic variational inequality problem.
We consider a class of multiuser optimization problems in which user interactions are seen through congestion cost functions or coupling constraints. Our primary emphasis lies on the convergence and error analysis of distributed algorithms in which users communicate through aggregate user information. Traditional implementations rely on strong convexity assumptions, require coordination across users in terms of consistent stepsizes, and often rule out early termination by a group of users. We consider how some of these assumptions can be weakened in the context of projection methods motivated by fixed-point formulations of the problem. Specifically, we focus on (approximate) primal and primal-dual projection algorithms. We analyze the convergence behavior of the methods and provide error bounds in settings with limited coordination across users and in regimes where a group of users may prematurely terminate, affecting the convergence point.
Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 2009 28th Chinese Control Conference, 2009
Traditionally, a multiuser problem is a constrained optimization problem characterized by a set of users, an objective given by a sum of user-specific utility functions, and a collection of linear constraints that couple the user decisions. The users do not share information about their utilities, but do communicate the values of their decision variables. The multiuser problem is to maximize the sum of the user-specific utility functions subject to the coupling constraints, while abiding by the informational requirements of each user. In this paper, we focus on generalizations of convex multiuser optimization problems where the objective and constraints are not separable by user, and instead consider instances where user decisions are coupled, both in the objective and through nonlinear coupling constraints. To solve this problem, we consider the application of gradient-based distributed algorithms to an approximation of the multiuser problem. Such an approximation is obtained through a Tikhonov regularization and is equipped with estimates of the difference between the optimal function values of the original problem and its regularized counterpart. In the algorithmic development, we consider constant-steplength primal, primal-dual, and dual schemes in which the iterate computations are distributed naturally across the users, i.e., each user updates its own decision only. The primal scheme, of relevance when user decisions are uncoupled, is presented along with per-iteration error bounds for regimes where communication failures across users may occur. When user decisions are coupled, we consider primal-dual and dual schemes. Convergence theory in the primal-dual space is provided in limited coordination settings and allows for differing steplengths across users as well as across the primal and dual spaces. An alternative to primal-dual schemes can be found in dual schemes, which are analyzed in regimes where primal solutions are obtained through a fixed number of gradient steps. Our results are supported by a case study in which the proposed algorithms are applied to a multiuser problem arising in a congested traffic network.
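As a rough illustration of how a constant-steplength primal-dual projection scheme operates on a Tikhonov-regularized problem, consider the sketch below. The utilities, the coupling constraint, the steplengths, and the regularization weight are our own assumptions rather than the paper's data, and with constant steplengths the iterates converge only to within an error of the regularized solution.

```python
# Rough sketch (illustrative instance, not the paper's algorithm): constant-
# steplength primal-dual projection on a Tikhonov-regularized two-user problem
# with a single linear coupling constraint.
import numpy as np

nu = 1e-2                      # Tikhonov regularization weight
alpha, beta = 0.05, 0.05       # constant primal / dual steplengths
x = np.zeros(2)                # user decisions, one coordinate per user
lam = 0.0                      # multiplier of the coupling constraint

def grad_utility(x):
    """Gradients of the synthetic concave utilities u_i(x_i) = log(1 + x_i)."""
    return 1.0 / (1.0 + x)

def coupling(x):
    """Single linear coupling constraint x_1 + x_2 <= 1 (synthetic)."""
    return x.sum() - 1.0

for _ in range(4000):
    # Each user updates only its own decision, using the regularized Lagrangian.
    grad_L = grad_utility(x) - lam - nu * x
    x = np.clip(x + alpha * grad_L, 0.0, None)      # projection onto x_i >= 0
    # Dual update: projected ascent on the constraint violation.
    lam = max(0.0, lam + beta * coupling(x))

print("regularized primal-dual point:", x, "multiplier:", lam)
```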
2011 17th International Conference on Digital Signal Processing (DSP), 2011
In the face of increasing demand for wireless services, the design of spectrum assignment policies has gained enormous relevance. We consider one such instance in cognitive radio systems, where recent efforts have focused on the application of game-theoretic approaches. Much of this work has been restricted to deterministic regimes, and this paper considers distributed schemes in the stochastic regime. The corresponding problems are seen to be stochastic Nash games over continuous strategy sets. Notably, the gradient map of the player utilities is seen to be a monotone mapping over the Cartesian product of strategy sets, leading to a monotone stochastic variational inequality. We consider the application of projection-based stochastic approximation schemes. However, such techniques are characterized by a key shortcoming: they can accommodate strongly monotone mappings only, while standard extensions of stochastic approximation schemes for merely monotone mappings are natively two-timescale schemes. Accordingly, we consider the development of single-timescale techniques for computing equilibria when the associated gradient map does not admit strong monotonicity. We develop convergence theory for distributed single-timescale stochastic approximation schemes, namely a stochastic iterative proximal-point method that requires exactly one projection step at every iteration. Finally, we apply this framework to the design of cognitive radio systems in uncertain regimes under interference-temperature constraints.
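In power-allocation settings of this kind, the per-iteration projection is typically onto a set of the form {p : 0 <= p <= p_max, g'p <= c}, i.e., box constraints intersected with a single linear interference-style cap. One simple way to compute such a projection is bisection on the Lagrange multiplier of the cap, as sketched below; the constraint data and the assumption of a single cap are ours and may not match the paper's formulation.

```python
# Sketch of the Euclidean projection onto {p : 0 <= p <= p_max, g @ p <= c},
# an illustrative feasible set for a power-allocation problem. Assumes g >= 0.
import numpy as np

def project_capped_box(v, p_max, g, c, iters=60):
    """Project v onto the box [0, p_max]^n intersected with g @ p <= c.

    Bisects on the multiplier mu >= 0 of the cap: the projection has the form
    clip(v - mu * g, 0, p_max) for the smallest mu that makes the cap feasible.
    """
    def clipped(mu):
        return np.clip(v - mu * g, 0.0, p_max)

    if g @ clipped(0.0) <= c:            # cap inactive: plain box projection
        return clipped(0.0)
    lo, hi = 0.0, 1.0
    while g @ clipped(hi) > c:           # find an upper bracket for mu
        hi *= 2.0
    for _ in range(iters):               # bisection on the scalar multiplier
        mid = 0.5 * (lo + hi)
        if g @ clipped(mid) > c:
            lo = mid
        else:
            hi = mid
    return clipped(hi)

# Example: project a tentative power vector onto the capped box.
g = np.array([0.6, 1.0, 0.8])            # nonnegative gains toward the primary user
p = project_capped_box(np.array([1.5, 0.7, 1.2]), p_max=1.0, g=g, c=1.2)
print(p, "cap value:", g @ p)
```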
51st IEEE Conference on Decision and Control (CDC), 2012
We consider a class of games, termed aggregative games, played over a distributed multiagent networked system. In an aggregative game, an agent's objective function is coupled to the decisions of the other agents through a function of the aggregate of all agents' decisions. Every agent maintains an estimate of the aggregate, and agents exchange this information over a connected network. We study a gossip-based distributed algorithm for information exchange and for the computation of equilibrium decisions of the agents over the network. Our primary emphasis is on proving the convergence of the algorithm under a diminishing (agent-specific) stepsize sequence. Under standard conditions, we establish the almost-sure convergence of the algorithm to an equilibrium point. Finally, we present numerical results to assess the performance of the gossip algorithm for aggregative games.
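A minimal sketch of this kind of gossip scheme, under our own assumptions about the game (quadratic costs coupled through the average decision) and the communication pattern, is given below. It is meant only to convey the interleaving of pairwise gossip averaging of the aggregate estimates with agent-specific projected gradient steps, and is not the paper's exact update.

```python
# Minimal sketch of a gossip-style algorithm for an aggregative game (our own
# illustrative setup). Each agent i holds a decision x[i] and an estimate y[i]
# of the average decision; at every round a random pair of neighbors averages
# its estimates, then the awakened agents take projected gradient steps with
# agent-specific diminishing stepsizes.
import numpy as np

rng = np.random.default_rng(2)
n = 5                                             # number of agents on a ring
neighbors = [(i, (i + 1) % n) for i in range(n)]  # edges of a ring network

x = rng.uniform(0.0, 1.0, size=n)   # agents' decisions
y = x.copy()                        # each agent's estimate of the average decision
targets = np.linspace(0.2, 1.0, n)  # parameters of the assumed quadratic costs

def grad_cost(i, xi, yi):
    """Partial derivative of c_i(x_i, z) = (x_i - t_i)^2 + x_i * z with respect
    to x_i, with the aggregate z held fixed and replaced by the local estimate yi."""
    return 2.0 * (xi - targets[i]) + yi

for k in range(1, 20001):
    i, j = neighbors[rng.integers(len(neighbors))]   # one gossip exchange
    y[i] = y[j] = 0.5 * (y[i] + y[j])                # average the aggregate estimates
    for a in (i, j):                                 # only the awakened agents update
        step = (1.0 + 0.1 * a) / k**0.6              # agent-specific diminishing stepsize
        x_new = np.clip(x[a] - step * grad_cost(a, x[a], y[a]), 0.0, 1.0)
        y[a] += x_new - x[a]                         # fold the change into the estimate
        x[a] = x_new

print("decisions:", np.round(x, 3), "estimates of the average:", np.round(y, 3))
```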
49th IEEE Conference on Decision and Control (CDC), 2010
In this paper, we consider the distributed computation of equilibria arising in monotone stochastic Nash games over continuous strategy sets. Such games arise in settings where the gradient map of the player objectives is a monotone mapping over the Cartesian product of strategy sets, in which case the associated equilibrium conditions can be compactly stated as a monotone stochastic variational inequality problem.
Abstract We consider a Cartesian stochastic variational inequality problem with a monotone map. For this problem, we develop and analyze distributed iterative stochastic approximation algorithms. Monotone stochastic variational inequalities arise naturally, for ...
Traditionally, a multiuser problem is a constrained optimization problem characterized by a set of users, an objective given by a sum of user-specific utility functions, and a collection of linear constraints that couple the user decisions. The users do not share information about their utilities, but do communicate the values of their decision variables. The multiuser problem is to maximize the sum of the user-specific utility functions subject to the coupling constraints, while abiding by the informational requirements of each user. In this paper, we focus on generalizations of convex multiuser optimization problems where the objective and constraints are not separable by user, and instead consider instances where user decisions are coupled, both in the objective and through nonlinear coupling constraints. To solve this problem, we consider the application of gradient-based distributed algorithms to an approximation of the multiuser problem. Such an approximation is obtained through a Tikhonov regularization and is equipped with estimates of the difference between the optimal function values of the original problem and its regularized counterpart. In the algorithmic development, we consider constant-steplength primal-dual and dual schemes in which the iterate computations are distributed naturally across the users; i.e., each user updates its own decision only. Convergence in the primal-dual space is provided in limited coordination settings, which allow for differing steplengths across users as well as across the primal and dual spaces. We observe that a generalization of this result is also available when users choose their regularization parameters independently from a prescribed range. An alternative to primal-dual schemes can be found in dual schemes, which are analyzed in regimes where approximate primal solutions are obtained through a fixed number of gradient steps. Per-iteration error bounds are provided in such regimes, and extensions are provided to regimes where users independently choose their regularization parameters. Our results are supported by a case study in which the proposed algorithms are applied to a multiuser problem arising in a congested traffic network.
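To illustrate a dual scheme with inexact primal solutions, the sketch below performs a fixed, small number of projected gradient steps on the regularized Lagrangian before each dual update. The problem data, the number of inner steps, and the steplengths are our illustrative assumptions, not the paper's, so the result should be read only as a demonstration of the truncated-inner-loop structure.

```python
# Sketch (illustrative assumptions) of a dual scheme in which each dual update
# uses an approximate primal solution obtained from a fixed number of projected
# gradient steps on the Tikhonov-regularized Lagrangian.
import numpy as np

nu = 1e-2                     # Tikhonov regularization weight
tau = 0.05                    # constant dual steplength
inner_steps = 5               # fixed number of primal gradient steps per dual update
alpha = 0.1                   # primal steplength for the inner loop

def grad_utility(x):
    """Gradients of the synthetic concave utilities u_i(x_i) = log(1 + x_i)."""
    return 1.0 / (1.0 + x)

def constraint(x):
    """Nonlinear coupling constraint x_1^2 + x_2^2 <= 1 (synthetic)."""
    return x @ x - 1.0

x = np.zeros(2)
lam = 0.0
for _ in range(2000):
    # Approximate primal maximization of the regularized Lagrangian
    # sum_i u_i(x_i) - (nu/2)*||x||^2 - lam*(||x||^2 - 1), truncated early.
    for _ in range(inner_steps):
        grad_L = grad_utility(x) - nu * x - lam * 2.0 * x
        x = np.clip(x + alpha * grad_L, 0.0, None)   # projection onto x >= 0
    # Dual projected-ascent step using the inexact primal iterate.
    lam = max(0.0, lam + tau * constraint(x))

print("approximate primal:", x, "multiplier:", lam)
```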