Dynamic pricing is a promising strategy to address the challenges of smart charging, as traditional time-of-use (ToU) rates and stationary pricing (SP) do not dynamically react to changes in operating conditions, reducing revenue for charging station (CS) vendors and affecting grid stability. Previous studies evaluated single objectives or linear combinations of objectives for EV CS pricing solutions, simplifying trade-offs and preferences among objectives. We develop a novel formulation for the dynamic pricing problem by addressing multiple conflicting objectives efficiently instead of solely focusing on one objective or metric, as in earlier works. We find optimal trade-offs, or Pareto solutions, efficiently using the Non-dominated Sorting Genetic Algorithms NSGA-II and NSGA-III. A dynamic pricing model quantifies the relationship between demand and price while simultaneously solving multiple conflicting objectives, such as revenue, quality of service (QoS), and peak-to-average ratio (PAR). No single method comprehensively addresses all of the above aspects of dynamic pricing. We present a three-part dynamic pricing approach using a Bayesian model, multi-objective optimization, and multi-criteria decision-making (MCDM) with pseudo-weight vectors. To address the research gap in CS pricing, our method selects solutions using revenue, QoS, and PAR metrics simultaneously. Real-world data from two California charging sites validates our approach.
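The pseudo-weight selection step can be illustrated in a few lines of Python. This is a minimal sketch assuming a Pareto front already produced by NSGA-II/III; the front values, objective scaling, and operator preference vector below are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical Pareto front from NSGA-II/III: one row per pricing policy,
# columns = (negated revenue, QoS degradation, PAR) -- all to be minimized.
F = np.array([[-120.0, 0.30, 1.8],
              [-100.0, 0.20, 1.5],
              [ -80.0, 0.10, 1.3],
              [ -60.0, 0.05, 1.2]])

def pseudo_weights(F):
    """Deb's pseudo-weight vector: objective-wise normalized distance
    from the worst point, renormalized to sum to 1 per solution."""
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    norm = (f_max - F) / (f_max - f_min)      # 1 = best on that objective
    return norm / norm.sum(axis=1, keepdims=True)

W = pseudo_weights(F)
pref = np.array([0.5, 0.3, 0.2])              # hypothetical preference: revenue > QoS > PAR
best = int(np.argmin(np.abs(W - pref).sum(axis=1)))
print("selected policy index:", best)
```

The pseudo-weight of a solution says what fraction of its "goodness" comes from each objective, so the operator's preference vector can be matched against the front without re-running the optimizer.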
47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2024
Extracting relevant information from legal documents is a challenging task due to the technical complexity and volume of their content. These factors also increase the costs of annotating large datasets, which are required to train state-of-the-art summarization systems. To address these challenges, we introduce CivilSum, a collection of 23,350 legal case decisions from the Supreme Court of India and other Indian High Courts, paired with human-written summaries. Compared to previous datasets such as IN-Abs, CivilSum not only has more legal decisions but also provides shorter and more abstractive summaries, thus offering a challenging benchmark for legal summarization. Our analysis shows that, unlike in other domains such as news articles, the most important content tends to appear at the end of the documents. We measure the effect of this tail bias on summarization performance using strong architectures for long-document abstractive summarization, and the results highlight the importance of long-sequence modeling for the proposed task. CivilSum and related code are publicly available to the research community to advance text summarization in the legal domain.
16th Annual IEEE Green Technologies Conference (GreenTech 2024), 2024
In the context of rising greenhouse gas emissions and climate change, we propose a pollution control system based on the Vickrey-Clarke-Groves (VCG) auction mechanism. Agents bid on pollution permits, each of which grants the right to emit a single unit of a pollutant. This auction algorithm efficiently allocates the pollution permits for multiple different pollutants based on the cost of pollution reduction. Previous work addressed only the single-pollutant setting, while this work addresses the multi-pollutant setting. Our analysis also shows conditions under which we can achieve the highly desirable property of budget balancing.
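A simplified instance of the mechanism can be sketched as follows. This assumes unit demand per agent per pollutant, in which case the VCG payment of each of the k winners reduces to the (k+1)-th highest bid; the bids and permit counts are hypothetical, and the paper's multi-pollutant mechanism is more general.

```python
def vcg_unit_demand(bids, k):
    """Allocate k identical permits to the k highest bidders.
    Each winner's VCG payment is the externality it imposes on others:
    the (k+1)-th highest bid (0 if no bidder is excluded)."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winners = order[:k]
    price = bids[order[k]] if len(bids) > k else 0.0
    return winners, price

# Hypothetical per-pollutant auctions: a bid reflects an agent's marginal
# cost of abating one unit of that pollutant.
auctions = {"SO2": ([9.0, 4.0, 7.5, 6.0], 2), "NOx": ([3.0, 8.0, 5.0], 1)}
for pollutant, (bids, k) in auctions.items():
    winners, price = vcg_unit_demand(bids, k)
    print(pollutant, "winners:", winners, "each pays:", price)
```

Truthful bidding is a dominant strategy here because a winner's payment depends only on the other agents' bids, which is the core VCG property the paper builds on.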
Accurate prediction of the phages that target a bacterial host plays an important role in combating antimicrobial resistance. Our work explores the power of deep neural networks, convolutional neural networks, and pre-trained large DNA/protein language models to predict the host for a given phage. This work mainly uses the data provided by Gonzales et al., which contains receptor-binding protein sequences of phages and the target host genus. We used pre-trained language models to obtain dense representations of protein/nucleotide sequences to train a deep neural network that predicts the target host genus. Additionally, convolutional neural networks were trained on one-hot encodings of nucleotide sequences for the same task. Using the protein language model ESM-1b, we achieved a weighted F1-score of 73.76%, outperforming state-of-the-art models by around 11%. The data and the source code are available at https://github.com/sumanth2002629/Bacteriophage-Research.
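The embedding-plus-classifier pipeline can be sketched roughly as below. This assumes the fair-esm package (pip install fair-esm) and PyTorch; the sequence, pooling, layer choice, class count, and classifier head are illustrative stand-ins, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import esm  # fair-esm package; downloads ESM-1b weights on first use

model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
batch_converter = alphabet.get_batch_converter()
model.eval()

# Hypothetical receptor-binding protein sequence (truncated for brevity).
data = [("phage_rbp_1", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
_, _, tokens = batch_converter(data)
with torch.no_grad():
    out = model(tokens, repr_layers=[33])
# Mean-pool per-residue embeddings (skipping BOS/EOS) into one 1280-d vector.
emb = out["representations"][33][0, 1:-1].mean(dim=0)

n_genera = 10  # hypothetical number of host genera
head = nn.Sequential(nn.Linear(1280, 256), nn.ReLU(), nn.Linear(256, n_genera))
logits = head(emb)  # train the head with cross-entropy on labeled phage-host pairs
```

The frozen language model supplies the dense representation; only the small head needs training, which suits the modest dataset sizes typical of phage-host data.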
IEEE Transactions on Control Systems Technology, 2024
Management of power is a crucial problem in computing systems where power is finite, processor performance and energy needs are high, and thermal constraints have to be respected. The trade-off between performance and energy expenditure is well recognized. To satisfy these conflicting requirements, in this paper a dynamic system framework is adopted, and results from optimal control theory, notably Pontryagin's Minimum Principle (PMP), are applied to derive an energy-optimal time-varying processor speed law, or frequency governor, to execute assigned tasks. PMP is chosen as it allows system input constraints as well as thermal and power budget constraints to be considered; the PMP-based governor is also compared with a Model Predictive Controller (MPC) implemented by following the Explicit-MPC framework using a linear model. The main contributions of this paper are (i) determining an empirical time-invariant non-linear dynamic model of an Intel CPU with task execution rate, power consumption, and temperature as the outputs, and clock frequency as the input; (ii) the Linux implementation of a PMP-based clock frequency governor on the CPU based on a linear model as well as the non-linear model; and (iii) the hardware implementation of an Explicit-MPC on the same platform using the frequency schedule derived from linear-model simulations. Limits on task completion times and the energy savings achieved in executing three benchmark tasks (MiBench, LINPACK, and sorting positive integers) are presented. Experimental results show that it is possible to reduce energy consumption at the cost of an increase in task execution time while executing these benchmark tasks; it is also shown that the PMP and MPC parameters can be tuned to obtain similar performances. The approach presented in this paper can be applied to design optimal controllers for other types of standalone or heterogeneous computing systems.
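The underlying energy-performance trade-off can be illustrated with a toy speed-scaling problem solved numerically rather than via PMP. This is a sketch under assumed dynamics: a cubic power model p(f) = a·f³ + b and a fixed work requirement, not the paper's empirical CPU model or its thermal constraints.

```python
import numpy as np
from scipy.optimize import minimize

N, dt, W = 50, 0.1, 8.0        # time steps, step length (s), required work (Gcycles)
a, b = 0.8, 0.4                # hypothetical power model: p(f) = a*f^3 + b (watts)
f_min, f_max = 0.4, 3.0        # clock frequency bounds (GHz)

energy = lambda f: float(np.sum((a * f**3 + b) * dt))
work_done = {"type": "ineq", "fun": lambda f: np.sum(f) * dt - W}

res = minimize(energy, x0=np.full(N, f_max), bounds=[(f_min, f_max)] * N,
               constraints=[work_done], method="SLSQP")
# Convexity of p(f) makes the optimum a constant frequency W/(N*dt) = 1.6 GHz,
# i.e., run as slowly as the deadline allows -- the intuition behind
# energy-optimal governors.
print(res.x.round(3), "energy:", round(res.fun, 2), "J")
```

PMP delivers the same "stretch the work over the available time" structure analytically, while also handling the input, power, and thermal constraints that this toy omits.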
Modelling the engaging behaviour of humans using multimodal data collected during human-robot interactions has attracted much research interest. Most previously proposed methods predict engaging behaviour directly from multimodal features, and do not incorporate personality inferences or any theories of interpersonal behaviour in human-human interactions. This work investigates whether personality inferences and attributes from interpersonal theories of behaviour (like attitude and emotion) further augment the modelling of engaging behaviour. We present a novel pipeline to model engaging behaviour that incorporates the Big Five personality traits, the Interpersonal Circumplex (IPC), and the Triandis Theory of Interpersonal Behaviour (TIB). We extract first-person vision and physiological features from the MHHRI dataset and predict the Big Five personality traits using a Support Vector Machine. Subsequently, we empirically validate the advantage of incorporating personality in modelling engaging behaviour and present a novel method that effectively uses the IPC to obtain scores for a human's attitude and emotion from their Big Five traits. Finally, our results demonstrate that attitude and emotion are correlates of behaviour even in human-robot interactions, as suggested by the TIB for human-human interactions. Furthermore, incorporating the IPC and the Big Five traits helps generate behavioural inferences that supplement the engaging behaviour prediction, thus enriching the pipeline. Engagement modelling has a wide range of applications in domains like online learning platforms, assistive robotics, and intelligent conversational agents. Practitioners can also use this work in cognitive modelling and psychology to find more complex and subtle relations between humans' behaviour and personality traits, and to discover new dynamics of the human psyche. The code will be made available at: https://github.com/soham-joshi/engagement-prediction-mhhri.
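The first stage of the pipeline can be sketched with scikit-learn. The feature matrix, trait targets, and the linear IPC projection below are illustrative assumptions; the paper's feature extraction and IPC mapping are more involved.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 128))   # stand-in first-person vision + physiological features
y = rng.uniform(1, 5, (60, 5))   # Big Five scores (O, C, E, A, N) on a 1-5 scale

big5_model = make_pipeline(StandardScaler(),
                           MultiOutputRegressor(SVR(kernel="rbf", C=1.0)))
big5_model.fit(X, y)

def ipc_scores(big5):
    """Hypothetical linear IPC mapping: agency loading mainly on
    Extraversion, communion mainly on Agreeableness."""
    O, C, E, A, N = big5
    agency = 0.8 * E - 0.2 * N
    communion = 0.9 * A + 0.1 * O
    return agency, communion

print(ipc_scores(big5_model.predict(X[:1])[0]))
```

The IPC scores then feed the attitude/emotion attributes that the TIB-based engagement predictor consumes downstream.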
Federated optimization, wherein several agents in a network collaborate with a central server to achieve optimal social cost over the network with no requirement for exchanging information among agents, has attracted significant interest from the research community. In this context, agents demand resources based on their local computation. Due to the exchange of optimization parameters such as states, constraints, or objective functions with a central server, an adversary may infer sensitive information about agents. We develop a differentially-private additive-increase and multiplicative-decrease algorithm to allocate multiple divisible shared heterogeneous resources to agents in a network. The developed algorithm provides a differential privacy guarantee to each agent in the network. The algorithm does not require inter-agent communication, and the agents do not need to share their cost functions or their derivatives with other agents or a central server; however, they share their allocation states with a central server that keeps track of the aggregate consumption of resources. The algorithm incurs very little communication overhead: for m heterogeneous resources in the system, the asymptotic upper bound on the communication complexity is O(m) bits per time step. Furthermore, if the algorithm converges in K time steps, then the upper bound on the communication complexity is O(mK) bits. The algorithm can find applications in several areas, including smart cities, smart energy systems, and resource management in sixth-generation (6G) wireless networks with privacy guarantees. We present experimental results to validate the efficacy of the algorithm. Furthermore, we present empirical analyses of the trade-off between privacy and algorithm efficiency.
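The core loop of such an algorithm can be sketched as follows: agents additively increase their claims until the server, which sees only noisy allocation states, broadcasts a one-bit capacity signal per resource that triggers a multiplicative decrease. The gains, noise scale, sensitivity, and capacities below are illustrative assumptions, not the paper's calibrated parameters or its exact privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 3                       # agents, heterogeneous resources
C = np.array([60.0, 40.0, 80.0])   # resource capacities
alpha, beta = 1.0, 0.7             # additive-increase / multiplicative-decrease gains
eps, sens = 0.5, 1.0               # privacy budget and assumed sensitivity per report

x = np.zeros((n, m))               # private allocation states
for t in range(300):
    # Each agent perturbs its reported state with Laplace noise (DP mechanism).
    reported = x + rng.laplace(scale=sens / eps, size=(n, m))
    # Server compares noisy aggregate consumption with capacity: O(m) bits back.
    over = reported.sum(axis=0) > C
    # AIMD update: back off on congested resources, probe otherwise.
    x = np.where(over, beta * x, x + alpha)

print("aggregate use vs capacity:", x.sum(axis=0).round(1), C)
```

Only the m capacity bits flow downstream each step, which is where the O(m)-bits-per-step communication bound in the abstract comes from.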
Researchers worldwide have become increasingly interested in recent years in developing computational approaches to handle challenges facing electric vehicles (EVs). This paper examines the challenges and future potential of computational approaches for problems such as EV routing, EV charging scheduling, EV charging station (CS) placement, CS sizing, and energy or load management. In addition, a summary of the fundamental mathematical models employed to solve EV computational problems is presented. We cover recent work on computational solutions for various EV problems utilizing single and coupled mathematical models. Finally, we also examine potential research avenues that researchers could pursue to realize the objective of environmentally friendly transportation and smart grids (SGs).
This paper proposes a Robust Gradient Classification Framework (RGCF) for Byzantine fault tolerance in distributed stochastic gradient descent. The framework consists of a pattern-recognition filter that we train to classify individual gradients as Byzantine using their direction alone. This filter is robust to an arbitrary number of Byzantine workers in convex as well as non-convex optimization settings, a significant improvement on prior work that is robust to Byzantine faults only when up to 50% of the workers are Byzantine. This solution does not require an estimate of the number of Byzantine workers; its running time does not depend on the number of workers, so it can scale up to training instances with a large number of workers without a loss in performance. We validate our solution by training convolutional neural networks on the MNIST dataset in the presence of Byzantine workers.
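A rough sketch of direction-based filtering is below, assuming PyTorch. The filter architecture, threshold, and the training data it presumes (gradients labeled honest/Byzantine, e.g., from simulated attacks) are illustrative, not the paper's exact RGCF design.

```python
import torch
import torch.nn as nn

d = 1024  # flattened gradient dimension (hypothetical)

# Small classifier scoring each worker's *normalized* gradient as Byzantine.
filter_net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
# ... train filter_net with BCEWithLogitsLoss on gradients labeled
# honest (0) vs Byzantine (1) before using it for aggregation ...

def robust_aggregate(worker_grads):
    """Average only the gradients the filter deems honest; normalizing
    first means the test depends on direction, not magnitude."""
    g = torch.stack(worker_grads)                          # (workers, d)
    directions = g / g.norm(dim=1, keepdim=True).clamp_min(1e-12)
    byz_prob = torch.sigmoid(filter_net(directions)).squeeze(1)
    keep = byz_prob < 0.5
    return g[keep].mean(dim=0) if keep.any() else torch.zeros(d)

grads = [torch.randn(d) for _ in range(8)]                 # stand-in worker gradients
update = robust_aggregate(grads)
```

Because each gradient is judged individually, the aggregation never needs a bound on how many workers are Byzantine, which is what lifts the usual sub-50% requirement.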
In toy environments like video games, a reinforcement learning agent is deployed and operates within the same state space in which it was trained. However, in robotics applications such as industrial systems or autonomous vehicles, this cannot be guaranteed. A robot can be pushed out of its training space by some unforeseen perturbation, which may leave it in an unknown state from which it has not been trained to move towards its goal. While most prior work in the area of RL safety focuses on ensuring safety in the training phase, this paper focuses on ensuring the safe deployment of a robot that has already been trained to operate within a safe space. This work defines a condition on the state and action spaces that, if satisfied, guarantees the robot's autonomous recovery to safety. We also propose a strategy and design that facilitate this recovery within a finite number of steps after perturbation. This is implemented and tested against a standard RL model, and the results indicate a significant improvement in performance.
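The recovery idea can be illustrated on a toy gridworld: outside the trained safe region, the robot follows a fallback rule that strictly decreases its distance to the safe set at every step, which bounds the number of recovery steps. The safe set, dynamics, and fallback rule here are assumptions of the sketch, not the paper's formal condition.

```python
import numpy as np

SAFE_LO, SAFE_HI = np.array([0, 0]), np.array([9, 9])   # trained safe box
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def dist_to_safe(s):
    # L1 distance from state s to the safe box (0 if inside).
    return float(np.abs(s - np.clip(s, SAFE_LO, SAFE_HI)).sum())

def recovery_action(s):
    """Fallback policy: pick the action that most decreases the distance.
    Each action moves one cell, so recovery takes at most dist_to_safe(s)
    steps -- a finite-step guarantee under these assumed dynamics."""
    return min(ACTIONS, key=lambda a: dist_to_safe(s + np.array(ACTIONS[a])))

s = np.array([14.0, -3.0])        # perturbed outside the safe set
steps = 0
while dist_to_safe(s) > 0:
    s = s + np.array(ACTIONS[recovery_action(s)])
    steps += 1
print("recovered in", steps, "steps at state", s)  # hand back to the trained policy
```

The distance function acts as a Lyapunov-like certificate: as long as some action decreases it from every reachable state, recovery in finitely many steps follows.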
A finite-time resilient consensus protocol (RCP) is developed for a connected network of agents in which communication between agents occurs locally, a few of the agents are malicious (MA), and the non-malicious or cooperating (CO) agents do not know the locations of the MA ones. Networks with a single leader and several followers, as well as leaderless networks, are considered. Agents are modelled with first-order dynamics, and the inputs to each CO agent that enable consensus are designed using the principles of sliding mode control (SMC). An SMC-based consensus protocol (CP), derived using the Laplacian matrix of the graph describing the network, that permits consensus amongst CO agents is first illustrated; that MA agents can prevent consensus under this CP is also discussed. The SMC-based RCP proposed in this paper requires that the CO agents know the bounds, defined by a statistical distribution, on the states of other CO agents, and that the subgraph of the CO agents be connected; knowledge of the MA agents is not needed. With these assumptions, in networks with a single MA agent, CO agents can reach consensus by disregarding information transmitted by this agent whenever its state violates the assumed statistical properties. Multiple MA agents can be disregarded without the need for these assumptions, owing to relations that hold with the occurrence of sliding mode. The RCP consists of a message-passing algorithm whose basis can be found in the distributed computing literature, and it can also be applied to a leader-follower network.
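A toy discrete-time version of the outlier-rejection idea can be simulated as below. The sign-based input mimics the sliding-mode term, and the 2-sigma screening rule (a small-sample heuristic), gains, and complete graph are assumptions for illustration, not the paper's protocol or its proofs.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mal, dt, k = 8, 7, 0.01, 2.0        # agents, malicious index, step, gain
A = np.ones((n, n)) - np.eye(n)        # complete graph for simplicity
x = rng.uniform(0, 10, n)

for t in range(4000):
    x[mal] = 50.0 + 10 * np.sin(0.01 * t)   # malicious agent broadcasts junk
    u = np.zeros(n)
    for i in range(n):
        if i == mal:
            continue
        nbr = np.where(A[i] > 0)[0]
        vals = x[nbr]
        # Screen neighbors whose states violate the sample statistics.
        mu, sd = vals.mean(), vals.std() + 1e-9
        trusted = nbr[np.abs(x[nbr] - mu) <= 2 * sd]
        u[i] = -k * np.sign(np.sum(x[i] - x[trusted]))  # sliding-mode-like input
    x += dt * u

print("cooperating states:", np.delete(x, mal).round(2))
```

The cooperating agents chatter into a small band around a common value while the malicious broadcast is screened out, echoing (in a much weaker form) the resilience argument in the paper.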
As the usage of artificial intelligence (AI) technologies across industries increases, there is a growing need for large marketplaces to host and transact good-quality data sets to train AI algorithms. Our study analyzes the characteristics of such an oligopsony crowdsourced AI Marketplace (AIM), which has a large number of producers and few consumers who transact data sets as per their expectations of price and quality. Using agent-based modeling (ABM), we incorporate heterogeneity in agent attributes and self-learning by the agents, reflective of real-world marketplaces. Our research augments existing studies on the effect of ratings and reputation systems in such marketplaces. Extensive simulations using ABM indicate that ratings of the data sets, as a feedback mechanism, play an important role in improving the quality of said data sets, and hence the reputations of producers. As such marketplaces evolve, regulators have started enacting varying rules to oversee their appropriate functioning and minimize market distortions. In one of the first such studies, we integrate regulatory interventions in a marketplace model to analyze the impacts of various types of regulation on the functioning of an AIM. Our results indicate that very stringent regulatory measures negatively affect the production of quality data sets in the marketplace. On the other hand, regulatory oversight combined with a ratings-based feedback mechanism improves the functioning of an AIM, and is hence recommended for governments and policy makers to adopt.
We present a multi-agent system where agents can cooperate to solve a system of dependent tasks, with agents having the capability to explore a solution space, make inferences, and query for information under a limited budget. An agent re-explores the solution space when an older solution expires, and the system is thus able to adapt to dynamic changes in the environment. We investigate the effects of task dependencies, using the highly-dependent graph $G_{40}$ (a well-known program graph containing $40$ highly interlinked nodes, each representing a task) and the less-dependent graph $G_{18}$ (a program graph containing $18$ tasks with fewer links), of increasing the speed of the agents, of the complexity of the problem space, and of the query budgets available to agents. Specifically, we evaluate trade-offs between an agent's speed and its query budget. In our experiments, we observed that increasing the speed of a single agent improves system performance only up to a point, and that increasing the number of faster agents may not improve system performance due to task dependencies. Favoring faster agents during budget allocation enhances system performance, in line with the "Matthew effect." We also observe that allocating more budget to a faster agent gives better performance for a less-dependent system, whereas increasing the number of faster agents gives better performance for a highly-dependent system.
Existing studies on prejudice, which is important in multi-group dynamics in societies, focus on the social-psychological knowledge behind the processes involving prejudice and its propagation. We instead create a multi-agent framework that simulates the propagation of prejudice and measures its tangible impact on the prosperity of individuals as well as of larger social structures, including groups and factions within them. Groups in society help us define prejudice, and factions represent smaller tight-knit circles of individuals with similar opinions. We model social interactions using the Continuous Prisoner's Dilemma (CPD) and a type of agent called a prejudiced agent, whose cooperation is affected by a prejudice attribute that is updated over time based both on the agent's own experiences and on those of others in its faction. Our simulations show that modeling prejudice as an exclusively out-group phenomenon generates implicit in-group promotion, which eventually leads to higher relative prosperity of the prejudiced population. This skew in prosperity is shown to be correlated with factors such as the size difference between groups and the number of prejudiced agents in a group. Although prejudiced agents achieve higher prosperity within prejudiced societies, their presence degrades the overall prosperity levels of their societies. Our proposed system model can serve as a basis for promoting a deeper understanding of the origins, propagation, and ramifications of prejudice through rigorous simulative studies grounded in apt theoretical backgrounds. This can help conduct impactful research on prominent social issues such as racism, religious discrimination, and unfair treatment of immigrants. This model can also serve as a foundation to study other socio-psychological phenomena in tandem with prejudice, such as the distribution of wealth, social status, and ethnocentrism in a society.
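The interaction core can be sketched as below. The payoff form of the Continuous Prisoner's Dilemma, the prejudice-scaled cooperation, and the update mixing own and faction experience are simplified assumptions for illustration, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(2)

def cpd_payoffs(ci, cj, benefit=2.0):
    """Continuous Prisoner's Dilemma: each invests a cooperation level in
    [0, 1]; the partner's investment pays off at rate `benefit` (> 1)."""
    return benefit * cj - ci, benefit * ci - cj

class PrejudicedAgent:
    def __init__(self, group, faction):
        self.group, self.faction = group, faction
        self.coop = rng.uniform(0.4, 0.8)        # baseline cooperation level
        self.prejudice = rng.uniform(0.0, 0.3)
        self.wealth = 0.0

    def offered_coop(self, other):
        # Prejudice modeled as exclusively out-group: cooperation is
        # scaled down only against members of the other group.
        if other.group != self.group:
            return self.coop * (1.0 - self.prejudice)
        return self.coop

    def update_prejudice(self, own_payoff, faction_avg_payoff, lr=0.05):
        # Bad out-group experiences (own or the faction's) raise prejudice.
        signal = -0.5 * (own_payoff + faction_avg_payoff)
        self.prejudice = float(np.clip(self.prejudice + lr * np.tanh(signal), 0, 1))

a = PrejudicedAgent(group=0, faction=0)
b = PrejudicedAgent(group=1, faction=1)
pa, pb = cpd_payoffs(a.offered_coop(b), b.offered_coop(a))
a.wealth += pa; b.wealth += pb
a.update_prejudice(pa, faction_avg_payoff=pa)    # single-member faction here
```

Running many such pairwise interactions and tracking wealth by group is what surfaces the in-group promotion effect the abstract describes.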
Charging station (CS) planning for electric vehicles (EVs) in a region has become an important concern for urban planners and the public alike to improve the adoption of EVs. Two major problems in this research area are: (i) the EV charging station placement (EVCSP) problem, and (ii) the CS need estimation problem for a region. In this work, different explainable solutions based on machine learning (ML) and simulation were investigated, incorporating quantitative and qualitative metrics. The solutions were compared with traditional approaches using a real CS area of Austin and a greenfield area of Bengaluru. For EVCSP, different classes of clustering solutions, i.e., mean-based, density-based, spectral (eigenvalue-based), and Gaussian-distribution-based, were evaluated. Different perspectives were considered, such as the urban planner's perspective (clustering efficiency) and the EV owner's perspective (an acceptable distance to the nearest CS). For CS need estimation, ML solutions based on quadratic regression and simulations were evaluated. Using our CS planning methods, urban planners can make better CS placement decisions and can estimate CS needs for the present and the future.
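The clustering comparison can be reproduced in outline with scikit-learn. The demand points, cluster count, and the mean-distance metric standing in for the EV-owner perspective are all assumptions of this sketch.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
demand = rng.uniform(0, 10, size=(500, 2))   # hypothetical demand points (km grid)
k = 8                                        # candidate number of stations

candidates = {
    "mean-based (k-means)": KMeans(n_clusters=k, n_init=10,
                                   random_state=0).fit(demand).cluster_centers_,
    "Gaussian mixture":     GaussianMixture(n_components=k,
                                            random_state=0).fit(demand).means_,
}
# Density-based (DBSCAN) and spectral variants yield labels rather than
# centers; per-cluster means can serve as the proposed CS sites there.
for name, sites in candidates.items():
    nearest = cdist(demand, sites).min(axis=1)
    print(f"{name}: mean distance to nearest CS = {nearest.mean():.2f} km")
```

Comparing the same distance metric across clustering families is what lets the planner's efficiency view and the EV owner's accessibility view be scored on a common footing.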
IEEE Transactions on Computational Social Systems, 2023
Social media are extensively used in today's world and facilitate quick and easy sharing of information, which makes them a good way to advertise products. The influencers of a social media network, owing to their massive popularity, provide a huge potential customer base. However, it is not straightforward to decide which influencers should be selected for an advertising campaign that can generate high returns with low investment. In this work, we present an agent-based model (ABM) that can simulate the dynamics of influencer advertising campaigns in a variety of scenarios and can help discover the best influencer marketing strategy. Our system is a probabilistic graph-based model that provides the additional advantage of incorporating real-world factors such as customers' interest in a product, customer behavior, willingness to pay, a brand's investment cap, influencers' engagement with influence diffusion, and the nature of the product being advertised, viz., luxury and non-luxury. Using customer acquisition cost and conversion ratio as unit economics, we evaluate the performance of different kinds of influencers under a variety of circumstances that are simulated by varying the nature of the product and the customers' interest. Our results exemplify the circumstance-dependent nature of influencer marketing and provide insight into which kinds of influencers would be a better strategy under the respective circumstances. For instance, we show that as the nature of the product varies from luxury to non-luxury, the performance of celebrities declines whereas that of nano-influencers improves. In terms of customers' interest, we find that the performance of nano-influencers declines with decreasing customer interest whereas that of celebrities improves.
IEEE Transactions on Intelligent Transportation Systems, 2022
With the advent of Electric Vehicles (EVs), issues connected to the electric vehicle charging scheduling (EVCS) problem, which is NP-hard, have become important. In previous studies, EVCS has mainly been formulated as a constrained shortest path problem; however, such formulations have not involved variables such as charging rates, traffic congestion, scalability, and waiting time at a charging station (CS), which need to be considered in practical settings. Earlier research has also tended to focus on the strengths of particular evolutionary optimization algorithms, like differential evolution (DE) or particle swarm optimization (PSO), over others or over a traditional mathematical programming method, with only limited study of hybrid approaches. In this paper, fast and slow charging options at a station are considered in the EVCS problem for practical use. In previous studies, EVs have been assumed to travel at fixed speeds; in this work, dynamic speed control of EVs is considered in order to mitigate CS congestion and thus waiting times at CSs. This work also investigates the scalability of EVCS solutions. A hybrid approach using PSO and the Firefly algorithm (FFA) with a Lévy-flights search strategy has been designed and implemented to solve the EVCS problem. Different hybrid variants of PSO and FFA have also been evaluated to find the best-performing one. Experimental results validate the effectiveness of our approach on both synthetic and real-world transportation networks.
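The flavor of the hybrid search can be conveyed with a compact sketch: a PSO velocity update augmented with Lévy-flight jumps generated by Mantegna's algorithm. The objective, parameters, and the way the firefly-style move is folded in are simplified assumptions, not the paper's EVCS formulation.

```python
import math
import numpy as np

rng = np.random.default_rng(4)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-stable step lengths."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cost(x):                        # stand-in objective (e.g., travel + waiting cost)
    return float(np.sum(x ** 2))

n, dim, w, c1, c2 = 20, 5, 0.7, 1.5, 1.5
X = rng.uniform(-5, 5, (n, dim)); V = np.zeros((n, dim))
P, Pf = X.copy(), np.array([cost(x) for x in X])
g = P[Pf.argmin()].copy()

for t in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)    # PSO update
    X = X + V + 0.01 * levy_step(dim) * (X - g)           # firefly-style Levy jump
    f = np.array([cost(x) for x in X])
    better = f < Pf
    P[better], Pf[better] = X[better], f[better]
    g = P[Pf.argmin()].copy()

print("best cost:", round(Pf.min(), 6))
```

The heavy-tailed Lévy steps occasionally throw particles far from the swarm's best, which is the mechanism hybrids of this kind use to escape local optima that plain PSO gets stuck in.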
Sustainable Computing: Informatics and Systems, 2021
The rise in the penetration of the internet across the world has led to a rapid increase in energy consumption at the data centers (DCs) established by leading cloud data service providers. High power consumption by these DCs leads to high operational costs and high carbon emissions into the environment. From a sustainability point of view, the ultimate goal is to maximize the productivity and efficiency of these data centers while keeping greenhouse gas emissions to a minimum. This goal can be achieved through better resource utilization and by replacing carbon-intensive approaches to energy production with green sources of energy. Due to the limited, intermittent availability of renewable sources of energy, the ideal 'green' design for DCs should incorporate interoperability with both renewable and non-renewable sources of energy. In this paper, we propose a Ren-aware scheduler that schedules computational workloads by prioritizing their execution within the duration of green energy availability, on the basis of predicted hourly green energy and workload data of the DCs. Our results demonstrate that our Ren-aware scheduler can increase green energy consumption by 51% compared to a conventional Randomized scheduler that distributes load without considering green energy and load. It can also reduce total energy consumption by 25% by putting the DCs to sleep during their idle time, as it saves 4.5 times more idle energy than the Randomized scheduler. Additionally, the results demonstrate that the DCs' time zones and the duration of green energy availability in them are pivotal to the Ren-aware scheduler's performance.
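The scheduling idea can be sketched greedily: deferrable jobs are packed into the hours with the highest predicted green supply first. The forecasts, job sizes, and single-DC setting here are assumptions of the sketch; the paper's scheduler also accounts for multiple DCs, time zones, and sleep states.

```python
import numpy as np

rng = np.random.default_rng(5)
hours = 24
green = rng.uniform(0, 100, hours)      # predicted green energy per hour (kWh)
jobs = [30.0, 25.0, 20.0, 15.0, 10.0]   # deferrable jobs' energy needs (kWh)

schedule = {h: [] for h in range(hours)}
remaining = green.copy()
# Greedy: place each job in the hour with the most unused green energy left.
for j, need in enumerate(sorted(jobs, reverse=True)):
    h = int(np.argmax(remaining))
    schedule[h].append((j, need))
    remaining[h] -= need                 # may go negative: brown energy is drawn

brown = float(np.clip(-remaining, 0, None).sum())
print("hours used:", {h: js for h, js in schedule.items() if js})
print("brown energy drawn:", round(brown, 1), "kWh")
```

Sorting jobs largest-first before packing reduces the chance that a big job overruns a green window, the same intuition that drives the scheduler's prioritization.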
Physica A: Statistical Mechanics and its Applications, 2021
The effects of the Peter Principle (PP) on a hierarchical firm have been extensively studied, but existing firm models fail to capture real-world firm dynamics such as employee motivation and CEO characteristics. We thus extend an existing firm model to introduce the notion of employee motivation and a CEO agent with parameters for leadership and managerial qualities, and we incorporate the vitality curve. Through our experiments, we show that a firm's performance depends on the characteristics of the CEO agent, as observed in reality. We run simulations for a firm model under two hypotheses: when a firm is subject to the PP and when it is not. We find that a non-standard vitality-curve setting leads to an efficiency gain over the standard one. We also study the effects of the PP on firms competing in Stackelberg and Cournot games. We find that a follower firm may overtake a leader firm in a Stackelberg game when the leader is subject to the PP. In a Cournot game, we find that a firm that is not subject to the PP produces a greater quantity and makes more profit than a firm that is.
Developing a framework for the locomotion of a six-legged robot, or hexapod, is a complex task with extensive hardware and computational requirements. In this paper, we present a bio-inspired framework for the locomotion of a hexapod. Our locomotion model draws inspiration from the structure of a cockroach, whose fairly simple central nervous system makes our model computationally inexpensive, with simpler control mechanisms. We consider the limb morphology of a hexapod, the corresponding central pattern generators for its limbs, and the inter-limb coordination required to generate appropriate patterns in its limbs. We also designed two experiments to validate our locomotion model. Our first experiment models the predator-prey dynamics between a cockroach and its predator. Our second experiment makes use of a reinforcement learning-based algorithm, putting forward a realization of our locomotion model. These experiments suggest that this model will help realize practical hexapod robot designs.
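The inter-limb coordination can be illustrated with six coupled phase oscillators whose fixed phase offsets encode an alternating tripod gait. The coupling gains, gait frequency, and leg grouping below are illustrative assumptions, not values fitted to cockroach data.

```python
import numpy as np

n, omega, k, dt = 6, 2 * np.pi * 1.5, 4.0, 0.002   # legs, 1.5 Hz gait, coupling, step
# Tripod gait: leg groups {0, 3, 4} and {1, 2, 5} move in anti-phase.
target = np.array([0, np.pi, np.pi, 0, 0, np.pi])
phi = np.random.default_rng(6).uniform(0, 2 * np.pi, n)

for _ in range(5000):
    dphi = np.full(n, omega)
    for i in range(n):
        for j in range(n):
            # Kuramoto-style coupling pulls each pair toward its target offset.
            dphi[i] += k * np.sin(phi[j] - phi[i] - (target[j] - target[i]))
    phi = (phi + dphi * dt) % (2 * np.pi)

# Leg "lift" command: the positive half of the cycle marks the swing phase.
print(np.round(np.sin(phi), 2))
```

From random initial phases the oscillators lock into the two anti-phase tripods, which is the essential service a central pattern generator provides before joint-level control is layered on top.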
Dynamic pricing is a promising strategy to address the challenges of smart charging, as tradition... more Dynamic pricing is a promising strategy to address the challenges of smart charging, as traditional time-of-use (ToU) rates and stationary pricing (SP) do not dynamically react to changes in operating conditions, reducing revenue for charging station (CS) vendors and affecting grid stability. Previous studies evaluated single objectives or linear combinations of objectives for EV CS pricing solutions, simplifying trade-offs and preferences among objectives. We develop a novel formulation for the dynamic pricing problem by addressing multiple conflicting objectives efficiently instead of solely focusing on one objective or metric, as in earlier works. We find optimal trade-offs or Pareto solutions efficiently using Non-dominated Sorting Genetic Algorithms (NSGA) II and NSGA III. A dynamic pricing model quantifies the relationship between demand and price while simultaneously solving multiple conflicting objectives, such as revenue, quality of service (QoS), and peak-to-average ratios (PAR). A single method can only address some of the above aspects of dynamic pricing comprehensively. We present a three-part dynamic pricing approach using a Bayesian model, multiobjective optimization, and multi-criteria decision-making (MCDM) using pseudo-weight vectors. To address the research gap in CS pricing, our method selects solutions using revenue, QoS, and PAR metrics simultaneously. Two California charging sites' real-world data validates our approach.
47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2024
Extracting relevant information from legal documents is a challenging task due to the technical c... more Extracting relevant information from legal documents is a challenging task due to the technical complexity and volume of their content. These factors also increase the costs of annotating large datasets, which are required to train state-of-the-art summarization systems. To address these challenges, we introduce CivilSum, a collection of 23,350 legal case decisions from the Supreme Court of India and other Indian High Courts paired with human-written summaries. Compared to previous datasets such as IN-Abs, Civil-Sum not only has more legal decisions but also provides shorter and more abstractive summaries, thus offering a challenging benchmark for legal summarization. Unlike other domains such as news articles, our analysis shows the most important content tends to appear at the end of the documents. We measure the effect of this tail bias on summarization performance using strong architectures for long-document abstractive summarization, and the results highlight the importance of long sequence modeling for the proposed task. CivilSum and related code are publicly available to the research community to advance text summarization in the legal domain. 1 CCS CONCEPTS • Computing methodologies → Natural language processing; • Applied computing → Law; • Information systems → Summarization.
16th Annual IEEE Green Technologies Conference (GreenTech 2024), 2024
In the context of rising greenhouse gas emissions and climate change, we propose a pollution cont... more In the context of rising greenhouse gas emissions and climate change, we propose a pollution control system through the use of the Vickrey-Clarke-Groves (VCG) auction mechanism. Agents bid on pollution permits that grant them a right to pollute a single unit of a pollutant. This auction algorithm efficiently allocates the pollution permits for multiple different pollutants based on the cost of pollution reduction. Previous work addressed only the single pollutant setting while this work addresses the multi-pollutant setting. Our analysis also shows conditions under which we can achieve the highly desirable property of budget balancing.
Accurate prediction of the phages that target a bacterial host plays an important role in combati... more Accurate prediction of the phages that target a bacterial host plays an important role in combating anti-microbial resistance. Our work explores the power of deep neural networks, convolutional neural networks, and pre-trained large DNA/protein language models to predict the host for a given phage. This work mainly uses the data provided by Gonzales et al. that contains receptor-binding protein sequences of the phages and the target host genus. We used pre-trained language models to obtain the dense representations of protein/nucleotide sequences to train a deep neural network to predict the target host genus. Additionally, convolutional neural networks were trained on one-hot encoding of nucleotide sequences to predict the target host genus. We achieved a weighted F1-score of 73.76% outperforming state-of-the-art models with an improvement of around 11% by using the protein language model ESM-1b. The data and the source code are available at https://github.com/sumanth2002629/ Bacteriophage-Research.
IEEE Transactions on Control Systems Technology, 2024
Management of power is a crucial problem in computing systems where power is finite, processor pe... more Management of power is a crucial problem in computing systems where power is finite, processor performance and energy needs are high, and thermal constraints have to be respected. The trade-off between performance and energy expenditure is well recognized. To satisfy these conflicting requirements, in this paper, a dynamic system framework is adopted, and results from optimal control theory, notably Pontryagin's Minimum Principle (PMP), are applied to derive an energyoptimal time-varying processor speed law, or frequency governor, to execute assigned tasks. PMP is chosen as it allows for system input constraints as well as thermal and power budget constraints to be considered; the PMP-based governor is also compared with a Model Predictive Controller (MPC) implemented by following the Explicit-MPC framework using a linear model. The main contributions of this paper are i determining an empirical time-invariant non-linear dynamic model of an Intel CPU with task execution rate, power consumption, and temperature as the outputs, and clock frequency as the input; ii the Linux implementation of a PMP-based clock frequency governor on the CPU based on a linear model as well as the non-linear model; and iii hardware implementation of an Explicit-MPC on the same platform using the frequency schedule derived from linear model simulations. Limits on task completion times and energy savings achieved in the execution of three benchmark tasks: MiBench, LINPACK, and Sorting positive integers, are presented. Experimental results show that it is possible to reduce energy consumption with an increase in task execution time while executing these benchmark tasks; it is also shown that it is possible to tune the PMP and MPC parameters to obtain similar performances. The approach presented in this paper can be applied to design optimal controllers for other types of standalone or heterogeneous computing systems.
Modelling the engaging behaviour of humans using multimodal data collected during human-robot int... more Modelling the engaging behaviour of humans using multimodal data collected during human-robot interactions has attracted much research interest. Most methods that have been proposed previously predict engaging behaviour directly from multimodal features, and do not incorporate personality inferences or any theories of interpersonal behaviour in human-human interactions. This work investigates whether personality inferences and attributes from interpersonal theories of behaviour (like attitude and emotion) further augment the modelling of engaging behaviour. We present a novel pipeline to model engaging behaviour that incorporates the Big Five personality traits, the Interpersonal Circumplex (IPC), and the Triandis Theory of Interpersonal Behaviour (TIB). We extract first-person vision and physiological features from the MHHRI dataset and predict the Big Five personality traits using a Support Vector Machine. Subsequently, we empirically validate the advantage of incorporating personality in modelling engaging behaviour and present a novel method that effectively uses the IPC to obtain scores for a human's attitude and emotion from their Big Five traits. Finally, our results demonstrate that attitude and emotion are correlates of behaviour even in human-robot interactions, as suggested by the TIB for human-human interactions. Furthermore, incorporating the IPC and the Big Five traits helps generate behavioural inferences that supplement the engaging behaviour prediction, thus enriching the pipeline. Engagement modelling has a wide range of applications in domains like online learning platforms, assistive robotics, and intelligent conversational agents. Practitioners can also use this work in cognitive modelling and psychology to find more complex and subtle relations between humans' behaviour and personality traits, and discover new dynamics of the human psyche. The code will be made available at: https://github.com/soham-joshi/ engagement-prediction-mhhri.
Federated optimization, wherein several agents in a network collaborate with a central server to ... more Federated optimization, wherein several agents in a network collaborate with a central server to achieve optimal social cost over the network with no requirement for exchanging information among agents, has attracted significant interest from the research community. In this context, agents demand resources based on their local computation. Due to the exchange of optimization parameters such as states, constraints, or objective functions with a central server, an adversary may infer sensitive information of agents. We develop a differentially-private additive-increase and multiplicative-decrease algorithm to allocate multiple divisible shared heterogeneous resources to agents in a network. The developed algorithm provides a differential privacy guarantee to each agent in the network. The algorithm does not require inter-agent communication, and the agents do not need to share their cost function or their derivatives with other agents or a central server; however, they share their allocation states with a central server that keeps track of the aggregate consumption of resources. The algorithm incurs very little communication overhead; for m heterogeneous resources in the system, the asymptotic upper bound on the communication complexity is O(m) bits at a time step. Furthermore, if the algorithm converges in K time steps, then the upper bound communication complexity will be O(mK) bits. The algorithm can find applications in several areas, including smart cities, smart energy systems, resource management in the sixth generation (6G) wireless networks with privacy guarantees, etc. We present experimental results to check the efficacy of the algorithm. Furthermore, we present empirical analyses for the trade-off between privacy and algorithm efficiency.
Researchers worldwide have become increasingly interested in developing computational approaches ... more Researchers worldwide have become increasingly interested in developing computational approaches to handle challenges facing electric vehicles (EVs) in recent years. This paper examines the challenges and future potential of computational approaches for problems such as EV routing, EV charging scheduling, EV charging station (CS) placement, CS sizing, and energy or load management. In addition, a summary of the fundamental mathematical models employed to solve the EV computational problems is presented. We cover recent work on computational solutions for various EV problems utilizing single and coupled mathematical models. Finally, we also examine potential research avenues that researchers could pursue to realize the objective of environment-friendly transportation and smart grids (SGs).
This paper proposes a Robust Gradient Classification Framework (RGCF) for Byzantine fault toleran... more This paper proposes a Robust Gradient Classification Framework (RGCF) for Byzantine fault tolerance in distributed stochastic gradient descent. The framework consists of a pattern recognition filter which we train to be able to classify individual gradients as Byzantine by using their direction alone. This filter is robust to an arbitrary number of Byzantine workers for convex as well as non-convex optimisation settings, which is a significant improvement on the prior work that is robust to Byzantine faults only when up to 50% of the workers are Byzantine. This solution does not require an estimate of the number of Byzantine workers; its running time is not dependent on the number of workers and can scale up to training instances with a large number of workers without a loss in performance. We validate our solution by training convolutional neural networks on the MNIST dataset in the presence of Byzantine workers.
In toy environments like video games, a reinforcement learning agent is deployed and operates wit... more In toy environments like video games, a reinforcement learning agent is deployed and operates within the same state space in which it was trained. However, in robotics applications such as industrial systems or autonomous vehicles, this cannot be guaranteed. A robot can be pushed out of its training space by some unforeseen perturbation, which may cause it to go into an unknown state from which it has not been trained to move towards its goal. While most prior work in the area of RL safety focuses on ensuring safety in the training phase, this paper focuses on ensuring the safe deployment of a robot that has already been trained to operate within a safe space. This work defines a condition on the state and action spaces, that if satisfied, guarantees the robot's recovery to safety independently. We also propose a strategy and design that facilitate this recovery within a finite number of steps after perturbation. This is implemented and tested against a standard RL model, and the results indicate a significant improvement in performance.
A finite-time resilient consensus protocol (RCP) is developed for a connected network of agents, ... more A finite-time resilient consensus protocol (RCP) is developed for a connected network of agents, where communication between agents occurs locally, a few of the agents are malicious (MA), and the nonmalicious or cooperating (CO) agents do not know the locations of the MA ones. Networks with a single leader and several followers as well as leaderless networks are considered. Agents are modelled with first order dynamics, and the inputs to each CO agent that enable consensus are designed using the principles of sliding mode control (SMC). An SMC-based consensus protocol (CP), derived using the Laplacian matrix of the graph describing the network that permits consensus amongst CO agents, is first illustrated; that MA agents can prevent consensus with the use of this CP is also discussed. The SMC-based RCP proposed in this paper requires that the CO agents know the bounds, defined by a statistical distribution, of other CO agents, and that the subgraph of the CO agents be connected; knowledge of MA agents is not needed. With these assumptions, in the case of networks with a single MA agent, CO agents can reach a consensus by disregarding information transmitted by this agent if its state violates some statistical properties. On the other hand, multiple MA agents can be disregarded without the need for these assumptions, owing to the relations that hold with the occurrence of sliding mode. The RCP consists of a message-passing algorithm whose basis can be found in the distributed computing literature. The proposed RCP can also be applied for a leader-follower network.
As usage of artificial intelligence (AI) technologies across industries increases, there is a gro... more As usage of artificial intelligence (AI) technologies across industries increases, there is a growing need for creating large marketplaces to host and transact good-quality data sets to train AI algorithms. Our study analyzes the characteristics of such an oligopsony crowdsourced AI Marketplace (AIM) that has a large number of producers and few consumers who transact data sets as per their expectations of price and quality. Using agent-based modeling (ABM), we incorporate heterogeneity in agent attributes and self-learning by the agents that are reflective of real-world marketplaces. Our research augments the existing studies on the effect of and reputation systems in such market places. Extensive simulations using ABM indicate that ratings of the data sets as a feedback mechanism plays an important role in improving the quality of said data sets, and hence the reputations of producers. While such marketplaces are evolving, regulators have started enacting varying rules to oversee the appropriate functioning of such marketplaces, to minimize market distortions. In one of the first such studies, we integrate regulatory interventions in a marketplace model to analyze the impacts of various types of regulations on the functioning of an AIM. Our results indicate that very stringent regulatory measures negatively affect the production of quality data sets in the marketplace. On the other hand, regulatory oversight along with a ratings-based feedback mechanism improves the functioning of an AIM, and hence is recommended for governments and policy makers to adopt.
We present a multi-agent system where agents can cooperate to solve a
system of dependent tasks, ... more We present a multi-agent system where agents can cooperate to solve a system of dependent tasks, with agents having the capability to explore a solution space, make inferences, as well as query for information under a limited budget. Re-exploration of the solution space takes place by an agent when an older solution expires and is thus able to adapt to dynamic changes in the environment. We investigate the effects of task dependencies, with highly-dependent graph $G_{40}$ (a well-known program graph that contains $40$ highly interlinked nodes, each representing a task) and less-dependent graphs $G_{18}$ (a program graph that contains $18$ tasks with fewer links), increasing the speed of the agents and the complexity of the problem space and the query budgets available to agents. Specifically, we evaluate trade-offs between the agent's speed and query budget. During the experiments, we observed that increasing the speed of a single agent improves the system performance to a certain point only, and increasing the number of faster agents may not improve the system performance due to task dependencies. Favoring faster agents during budget allocation enhances the system performance, in line with the "Matthew effect." We also observe that allocating more budget to a faster agent gives better performance for a less-dependent system, but increasing the number of faster agents gives a better performance for a highly-dependent system.
Existing studies on prejudice, which is important in multi-group dynamics in societies, focus on ... more Existing studies on prejudice, which is important in multi-group dynamics in societies, focus on the social-psychological knowledge behind the processes involving prejudice and its propagation. We instead create a multi-agent framework that simulates the propagation of prejudice and measures its tangible impact on the prosperity of individuals as well as of larger social structures, including groups and factions within. Groups in society help us define prejudice, and factions represent smaller tightknit circles of individuals with similar opinions. We model social interactions using the Continuous Prisoner's Dilemma (CPD) and a type of agent called a prejudiced agent, whose cooperation is affected by a prejudice attribute, updated over time based both on the agent's own experiences and those of others in its faction. Our simulations show that modeling prejudice as an exclusively outgroup phenomenon generates implicit in-group promotion, which eventually leads to higher relative prosperity of the prejudiced population. This skew in prosperity is shown to be correlated to factors such as size difference between groups and the number of prejudiced agents in a group. Although prejudiced agents achieve higher prosperity within prejudiced societies, their presence degrades the overall prosperity levels of their societies. Our proposed system model can serve as a basis for promoting a deeper understanding of origins, propagation, and ramifications of prejudice through rigorous simulative studies grounded in apt theoretical backgrounds. This can help conduct impactful research on prominent social issues such as racism, religious discrimination, and unfair immigrant treatment. This model can also serve as a foundation to study other socio-psychological phenomena in tandem with prejudice such as the distribution of wealth, social status, and ethnocentrism in a society.
Charging station (CS) planning for electric vehicles (EVs) for a region has become an important c... more Charging station (CS) planning for electric vehicles (EVs) for a region has become an important concern for urban planners and the public alike to improve the adoption of EVs. Two major problems comprising this research area are: (i) the EV charging station placement (EVCSP) problem, and (ii) the CS need estimation problem for a region. In this work, different explainable solutions based on machine learning (ML) and simulation were investigated by incorporating quantitative and qualitative metrics. The solutions were compared with traditional approaches using a real CS area of Austin and a greenfield area of Bengaluru. For EVCSP, a different class of clustering solutions, i.e., mean-based, density-based, spectrum- or eigenvalues-based, and Gaussian distribution were evaluated. Different perspectives, such as the urban planner perspective, i.e., the clustering efficiency, and the EV owner perspective, i.e., an acceptable distance to the nearest CS, were considered. For the CS need estimation, ML solutions based on quadratic regression and simulations were evaluated. Using our CS planning methods urban planners can make better CS placement decisions and can estimate CS needs for the present and the future.
IEEE Transactions on Computational Social Systems, 2023
Social media are extensively used in today's world, and facilitate quick and easy sharing of info... more Social media are extensively used in today's world, and facilitate quick and easy sharing of information, which makes them a good way to advertize products. Influencers of a social media network, owing to their massive popularity, provide a huge potential customer base. However, it is not straightforward to decide which influencers should be selected for an advertizing campaign that can generate high returns with low investment. In this work, we present an agent-based model (ABM) that can simulate the dynamics of influencer advertizing campaigns in a variety of scenarios and can help to discover the best influencer marketing strategy. Our system is a probabilistic graph-based model that provides the additional advantage to incorporate real-world factors such as customers' interest in a product, customer behavior, the willingness to pay, a brand's investment cap, influencers' engagement with influence diffusion, and the nature of the product being advertized viz. luxury and nonluxury. Using customer acquisition cost and conversion ratio as a unit economic, we evaluate the performance of different kinds of influencers under a variety of circumstances that are simulated by varying the nature of the product and the customers' interest. Our results exemplify the circumstancedependent nature of influencer marketing and provide insight into which kinds of influencers would be a better strategy under respective circumstances. For instance, we show that as the nature of the product varies from luxury to non-luxury, the performance of celebrities declines whereas the performance of nano-influencers improves. In terms of the customers' interest, we find that the performance of nano-influencers declines with the decrease in customers' interest whereas the performance of celebrities improves.
IEEE Transactions on Intelligent Transportation Systems, 2022
With the advent of Electric Vehicles (EVs), issues connected to the electric vehicle charging sch... more With the advent of Electric Vehicles (EVs), issues connected to the electric vehicle charging scheduling (EVCS) problem, which is $\\NP$-hard, have become important. In previous studies, EVCS has been mainly formulated as a constrained shortest path problem; however, such formulations have not involved variables such as the charging rates, traffic congestion, scalability, and waiting time at a charging station (CS), that need to be considered in practical settings. Earlier research has also tended to focus on the strengths of particular evolutionary optimization algorithms like differential evolution (DE) or particle swarm optimization (PSO) over others or a traditional mathematical programming method, with only a limited study of hybrid approaches. In this paper, fast and slow charging options at a station have been considered in the EVCS problem for practical use. In previous studies, EVs have been considered to have fixed speeds; however, in order to mitigate CS congestion and thus waiting times at CSs dynamic speed control of EVs has been considered in this work. This work also investigates the scalability of EVCS solutions. A hybrid approach using PSO and the Firefly algorithm (FFA) with L\\'evy flights search strategy has been designed and implemented to solve the EVCS. Also, different hybrid methods variants of PSO and FFA have been evaluated in this paper to find the best performing hybrid variant. Experimental results validate the effectiveness of our approach on both synthetic and the real-world transportation networks.
Sustainable Computing: Informatics and Systems, 2021
The rise in the penetration of the internet across the world has led to a rapid increase in the c... more The rise in the penetration of the internet across the world has led to a rapid increase in the consumption of energy at the data centers established by leading cloud data service providers. High power consumption by these data centers [DCs] leads to high operational costs and high carbon emissions into the environment. From a sustainability point of view, the ultimate goal is to maximize the productivity and efficiency of these data centers while keeping greenhouse gas emissions to the minimum and maximize data center productivity. This goal can be achieved by better resource utilization and replacing carbon-intensive approaches of energy production with green sources of energy. Due to the limited intermittent availability of renewable sources of energy, the ideal 'Green' design for the DCs, should incorporate inter-operability with both renewable and non-renewable sources of energy. In this paper, we propose a Ren-aware scheduler to schedule computational workload by prioritizing their execution within the duration of green energy availability on the basis of the predicted hourly green energy and workload data of DCs. Our results demonstrate that our Ren-aware scheduler can increase the green energy consumption by 51 % compared to the conventional Randomized scheduler that distributes load without considering green energy and load. It can also reduce the total energy consumption by 25 % by putting the DCs to sleep during their idle time, as it saves 4.5 times more idle energy than the Randomized scheduler. Additionally, the results also demonstrate how the role of time zones of the DCs and the duration of green energy availability in them is pivotal in our Ren-aware scheduler's performance.
Physica A: Statistical Mechanics and its Applications, 2021
The effects of the Peter Principle (PP) on a hierarchical firm have been extensively studied, but existing firm models fail to capture real-world firm dynamics such as employee motivation and CEO characteristics. We therefore extend an existing firm model to introduce the notion of employee motivation and a CEO agent with parameters for leadership and managerial qualities, and we incorporate the vitality curve. Through our experiments, we show that a firm's performance depends on the characteristics of the CEO agent, as observed in reality. We run simulations for the firm model under two hypotheses: when the firm is subject to PP and when it is not. We find that a non-standard vitality-curve setting leads to an efficiency gain over the standard one. We also study the effects of PP on firms competing in Stackelberg and Cournot games. We find that a follower firm can overtake the leader in a Stackelberg game when the leader is subject to PP. In a Cournot game, we find that a firm not subject to PP produces a greater quantity and makes more profit than a firm that is.
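To give a flavor of the kind of agent-based simulation involved, the toy model below compares promotion under the PP hypothesis (competence in the new role is redrawn at random) against the common-sense hypothesis (competence carries over). It is a minimal baseline in the spirit of earlier PP firm models; it omits the paper's extensions (motivation, CEO agent, vitality curve), and all parameters are illustrative.

```python
import random

def simulate(peter_principle, levels=4, width=10, steps=500, seed=42):
    """Toy hierarchical firm: competence scores per level (level 0 = top).
    Under the Peter Principle hypothesis, competence in the new role is
    unrelated to competence in the old one, so it is redrawn on promotion."""
    rng = random.Random(seed)
    firm = [[rng.gauss(7, 2) for _ in range(width)] for _ in range(levels)]
    for _ in range(steps):
        lvl = rng.randrange(levels)       # a vacancy opens at this level
        slot = rng.randrange(width)
        if lvl == levels - 1:
            firm[lvl][slot] = rng.gauss(7, 2)        # hire at the bottom
        else:
            best = max(range(width), key=lambda i: firm[lvl + 1][i])
            promoted = firm[lvl + 1][best]           # promote the best below
            firm[lvl][slot] = rng.gauss(7, 2) if peter_principle else promoted
            firm[lvl + 1][best] = rng.gauss(7, 2)    # backfill the old slot
    # Efficiency: competence averaged per level, higher levels weighted more.
    weights = list(range(levels, 0, -1))
    return (sum(w * sum(lv) / width for w, lv in zip(weights, firm))
            / sum(weights))

print("efficiency with PP:   ", round(simulate(True), 2))
print("efficiency without PP:", round(simulate(False), 2))
```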
Developing a framework for the locomotion of a six-legged robot, or hexapod, is a complex task with extensive hardware and computational requirements. In this paper, we present a bio-inspired framework for the locomotion of a hexapod. Our locomotion model draws inspiration from the structure of a cockroach, which has a fairly simple central nervous system; as a result, our model is computationally inexpensive and uses simpler control mechanisms. We consider the limb morphology of a hexapod, the corresponding central pattern generators for its limbs, and the inter-limb coordination required to generate appropriate movement patterns. We also designed two experiments to validate our locomotion model. The first models the predator-prey dynamics between a cockroach and its predator. The second uses a reinforcement learning-based algorithm, demonstrating a realization of our locomotion model. These experiments suggest that the model can support practical hexapod robot designs.
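As a sketch of how central pattern generators can coordinate six limbs, the code below runs a Kuramoto-style network of phase oscillators that locks into a tripod gait. The phase offsets, gains, and update rule are assumptions made for illustration; the paper's coordination model is richer than this.

```python
import numpy as np

# Desired phase offsets for a tripod gait: legs {1, 3, 5} move in
# anti-phase with legs {2, 4, 6} (an illustrative assumption).
offsets = np.array([0.0, np.pi, 0.0, np.pi, 0.0, np.pi])

def cpg_step(phases, dt=0.005, omega=2 * np.pi, k=4.0):
    """One Euler step of a Kuramoto-style CPG: each leg oscillator is
    pulled toward its desired phase offset relative to every other leg."""
    psi = phases - offsets
    diffs = psi[None, :] - psi[:, None]        # pairwise phase errors
    dphi = omega + k * np.sin(diffs).sum(axis=1)
    return phases + dt * dphi

rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 2 * np.pi, 6)       # start from random phases
for _ in range(5000):
    phases = cpg_step(phases)

# A per-leg joint command, e.g. a swing angle; after convergence the
# two tripod groups oscillate half a cycle apart.
print(np.round(np.sin(phases), 3))
```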
system of dependent tasks, with agents having the capability to explore a solution space, make inferences, and query for information under a limited budget. An agent re-explores the solution space when an older solution expires, and the system is thus able to adapt to dynamic changes in the environment. We investigate the effects of task dependencies using the highly-dependent graph $G_{40}$ (a well-known program graph of $40$ highly interlinked nodes, each representing a task) and the less-dependent graph $G_{18}$ (a program graph of $18$ tasks with fewer links), while varying the speed of the agents, the complexity of the problem space, and the query budgets available to agents. Specifically, we evaluate trade-offs between an agent's speed and its query budget. In our experiments, we observed that increasing the speed of a single agent improves system performance only up to a point, and that increasing the number of faster agents may not improve system performance because of task dependencies. Favoring faster agents during budget allocation enhances system performance, in line with the "Matthew effect." We also observe that allocating more budget to a faster agent gives better performance in a less-dependent system, whereas increasing the number of faster agents gives better performance in a highly-dependent system.
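The budget-allocation idea, giving faster agents a larger share of the query budget, can be sketched as a simple weighted split. The favor exponent and the speed values below are hypothetical, and the paper's actual allocation mechanism may differ:

```python
def allocate_budget(total, speeds, favor=1.0):
    """Split a query budget across agents in proportion to speed**favor.
    favor=0 gives an equal split; larger values increasingly advantage
    faster agents, in the spirit of the 'Matthew effect' allocation
    described above."""
    weights = [s ** favor for s in speeds]
    total_w = sum(weights)
    return [total * w / total_w for w in weights]

speeds = [1.0, 1.0, 2.0, 4.0]                  # hypothetical agent speeds
print(allocate_budget(120, speeds, favor=0))   # equal: [30, 30, 30, 30]
print(allocate_budget(120, speeds, favor=1))   # proportional: [15, 15, 30, 60]
print(allocate_budget(120, speeds, favor=2))   # favors fast agents even more
```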
Influencers on a social media network, owing to their massive popularity, provide a huge potential customer base. However, it is not straightforward to decide which influencers should be selected for an advertising campaign so as to generate high returns on a low investment. In this work, we present an agent-based model (ABM) that can simulate the dynamics of influencer advertising campaigns in a variety of scenarios and can help discover the best influencer marketing strategy. Our system is a probabilistic graph-based model with the additional advantage of incorporating real-world factors such as customers' interest in a product, customer behavior, willingness to pay, a brand's investment cap, influencers' engagement with influence diffusion, and the nature of the product being advertised, viz., luxury or non-luxury. Using customer acquisition cost and conversion ratio as unit economics, we evaluate the performance of different kinds of influencers under a variety of circumstances, simulated by varying the nature of the product and the customers' interest. Our results exemplify the circumstance-dependent nature of influencer marketing and provide insight into which kinds of influencers constitute a better strategy under the respective circumstances. For instance, we show that as the nature of the product varies from luxury to non-luxury, the performance of celebrities declines whereas that of nano-influencers improves. In terms of customers' interest, we find that the performance of nano-influencers declines as customers' interest decreases, whereas the performance of celebrities improves.
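To illustrate the unit economics mentioned above, the sketch below computes the conversion ratio and customer acquisition cost (CAC) for two hypothetical influencer profiles under different levels of customer interest. The follower counts, engagement rates, fees, and conversion rule are all assumptions for illustration, not the paper's calibrated model:

```python
import random

def run_campaign(n_followers, engagement, fee, interest, seed=0):
    """Toy campaign: each follower independently converts with probability
    engagement * interest; returns the conversion ratio and the CAC."""
    rng = random.Random(seed)
    conversions = sum(rng.random() < engagement * interest
                      for _ in range(n_followers))
    ratio = conversions / n_followers
    cac = fee / conversions if conversions else float("inf")
    return ratio, cac

# Hypothetical profiles: a celebrity has far more followers but lower
# engagement and a much higher fee than a nano-influencer.
profiles = {
    "celebrity": dict(n_followers=1_000_000, engagement=0.01, fee=50_000),
    "nano":      dict(n_followers=5_000,     engagement=0.08, fee=200),
}

for interest in (0.5, 0.1):            # high vs. low customer interest
    for name, p in profiles.items():
        ratio, cac = run_campaign(**p, interest=interest)
        print(f"interest={interest} {name}: "
              f"conversion={ratio:.4f} CAC=${cac:.2f}")
```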