Connell's intermediate disturbance hypothesis (IDH) postulates that diversity peaks at an intermediate level of disturbance frequency or intensity. To test the validity of this concept for the species-poor marine hard-bottom community of the Western Baltic, we chose an in situ experimental approach. Undisturbed fouling communities of 2 different successional stages, 3 and 12 mo old, were submitted to various levels of emersion intensity, defined as the time span of continuous exposure to the air per day. Disturbance levels ranged from 0.25 h up to 12 h of daily exposure. The study on 3 mo old communities was repeated in 2 subsequent years, 1999 and 2000. Species richness, evenness and diversity (Shannon index) were recorded to measure the effect of the intensity treatments on community structure. The IDH was confirmed in the first year, when diversity was found to peak at intermediate disturbances. However, for communities of both successional stages, diversity-disturbance relationships were U-shaped or not significant in the second year. This ambiguous picture basically confirms the validity of the mechanisms proposed by the IDH, but shows that their forcing can be masked by fluctuations in environmental parameters, such as climatic conditions. An extension of the model is proposed that considers diversity enhancement under extreme conditions due to a disturbance-induced change in community structure. Furthermore, we discuss a conceptual linkage of the IDH to the multiple stable-state hypothesis. Finally, we found that community stability was not positively correlated with community age and complexity.
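As a quick illustration of the diversity measures used above, the sketch below computes the Shannon index and an evenness value from species abundance counts; the counts shown, and the use of natural logarithms and Pielou-style evenness, are illustrative assumptions rather than data or choices from the study.

```python
# Minimal sketch: Shannon diversity H' = -sum(p_i * ln p_i) over species proportions,
# and evenness J' = H' / ln S (S = species richness). Counts are hypothetical.
import math

def shannon_diversity(abundances):
    total = sum(abundances)
    props = [n / total for n in abundances if n > 0]
    return -sum(p * math.log(p) for p in props)

counts = [40, 25, 20, 10, 5]            # hypothetical abundances of 5 species on one panel
richness = len(counts)
H = shannon_diversity(counts)           # Shannon index
J = H / math.log(richness)              # evenness
```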
This article analyzes some of the most interesting challenges that members of the MAEB community may face in the area of applying Artificial/Computational Intelligence techniques to the design and creation of video games. The article focuses on three lines of work that will most likely exert a significant influence on the video game development industry in the near future: Automatic Content Generation, Affective Computing applied to video games, and the generation of behaviors governing the decision-making of entities not controlled by the human player.
Applications of Evolutionary Computation: 18th European Conference, EvoApplications 2015, Copenhagen, Denmark, 2015.
Color mememaps of self-balancing strategies in an island-based selecto-Lamarckian model. This figure is a companion to the paper "Studying Self-Balancing Strategies in Island-Based Multimemetic Algorithms".
We consider the use of island-based evolutionary algorithms (EAs) in fault-prone computational settings. More precisely, we consider scenarios plagued with correlated node failures. To this end, we use the sandpile model to induce such complex, correlated failures in the system. Several EA variants featuring self-adaptive capabilities aimed at alleviating the impact of node failures are considered, and their performance is studied in both correlated and non-correlated scenarios for increasingly large volatility rates. Simple island-based EAs are shown to suffer a significant performance degradation in the correlated scenario with respect to its non-correlated counterpart. Resilience is, however, much improved via the use of self-★ properties (self-scaling and self-healing), which leads to a gentler degradation profile. The inclusion of self-generation also contributes to boosting performance, leading to negligible degradation in the scenarios considered.
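To make the failure model concrete, here is a minimal sketch of a Bak-Tang-Wiesenfeld sandpile whose avalanches define sets of islands that fail together; the grid size, threshold, and cell-to-island mapping are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a sandpile used to generate correlated failures: cells toppled
# in the same avalanche are interpreted as islands failing simultaneously.
import random

def sandpile_avalanche(grid, size, threshold=4):
    """Drop one grain at a random cell, relax the pile, and return the toppled cells."""
    r, c = random.randrange(size), random.randrange(size)
    grid[r][c] += 1
    toppled = set()
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < threshold:
            continue
        grid[i][j] -= threshold
        toppled.add((i, j))
        if grid[i][j] >= threshold:                      # may need to topple again
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size:        # grains falling off the edge are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= threshold:
                    unstable.append((ni, nj))
    return toppled

# Usage: each grid cell stands for one island; one avalanche = one correlated failure event.
size = 8
grid = [[random.randrange(4) for _ in range(size)] for _ in range(size)]
failed_islands = sandpile_avalanche(grid, size)
```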
The International Journal of High Performance Computing Applications, 2016
Computational environments emerging from the pervasiveness of networked devices offer a plethora of opportunities and challenges. The latter arise from their dynamic, inherently volatile nature, which tests the resilience of the algorithms running on them. Here we consider the deployment of population-based optimization algorithms on such environments, using the island model of memetic algorithms for this purpose. These memetic algorithms are endowed with self-★ properties that give them the ability to work autonomously in order to optimize their performance and to react to the instability of computational resources. The main focus of this work is analyzing the performance of these memetic algorithms when the underlying computational substrate is not only volatile but also heterogeneous in terms of the computational power of each of its constituent nodes. To this end, we use a simulated environment that allows experimenting with different volatility rates and heterogeneity scenarios.
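As a rough sketch of how such volatility can be simulated, the snippet below drives a two-state (up/down) availability process per node; the probabilities and the model itself are assumptions for illustration, not the simulator used in the paper.

```python
# Minimal sketch: each island follows a two-state (up/down) process; per step an
# active node fails with probability p_fail and a failed node rejoins with p_recover.
import random

def step_availability(nodes_up, p_fail=0.01, p_recover=0.1):
    """Advance the availability of every node by one simulation step."""
    return [(random.random() >= p_fail) if up else (random.random() < p_recover)
            for up in nodes_up]

nodes_up = [True] * 32                 # 32 islands, all initially available
for _ in range(100):                   # simulate 100 time steps
    nodes_up = step_availability(nodes_up)
```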
Digital collectible card games are not only a growing part of the video game industry, but also an interesting research area for the field of computational intelligence. This game genre allows researchers to deal with hidden information, uncertainty and planning, among other aspects. This paper proposes the use of evolutionary algorithms (EAs) to develop agents that play a card game, Hearthstone, by optimizing a data-driven decision-making mechanism that takes into account all the elements currently in play. Agents feature self-learning by means of a competitive coevolutionary training approach, whereby no external sparring element defined by the user is required for the optimization process. One of the agents developed through the proposed approach was runner-up (best 6%) in an international Hearthstone Artificial Intelligence (AI) competition. Our proposal performed remarkably well, even when it faced state-of-the-art techniques that attempted to take future game states into account, such as Monte Carlo Tree Search. This outcome shows how evolutionary computation could represent a considerable advantage in developing AIs for collectible card games such as Hearthstone.
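The following sketch illustrates the idea of self-contained competitive evaluation: each agent's fitness is its win rate against peers from the same population, so no externally supplied opponent is needed. The play_match callback and sampling size are hypothetical; the real agents encode a full Hearthstone decision mechanism.

```python
# Minimal sketch (hypothetical interfaces): competitive coevolutionary evaluation.
import random

def coevolutionary_fitness(population, play_match, opponents_per_agent=5):
    """play_match(a, b) -> 1 if agent a beats agent b, else 0 (assumed callback).
    The population is assumed to be larger than opponents_per_agent."""
    fitness = []
    for agent in population:
        rivals = random.sample([p for p in population if p is not agent],
                               opponents_per_agent)
        wins = sum(play_match(agent, rival) for rival in rivals)
        fitness.append(wins / opponents_per_agent)
    return fitness
```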
Memetic algorithms are techniques that orchestrate the interplay between population-based and trajectory-based algorithmic components. In particular, some memetic models can be regarded under this broad interpretation as a group of autonomous basic optimization algorithms that interact cooperatively in order to deal with a specific optimization problem, aiming to obtain better results than their constituent algorithms would achieve separately. Going one step beyond this traditional view of cooperative optimization algorithms, this work tackles deep meta-cooperation, namely the use of cooperative optimization algorithms in which some components can in turn be cooperative methods themselves, thus exhibiting a deep algorithmic architecture. The objective of this paper is to demonstrate that such models can be considered an efficient alternative to other traditional forms of cooperative algorithms. To validate this claim, different structural parameters, such as the communication topology between the agents, or the parameter that controls the depth of the cooperative effort (the depth of meta-cooperation), have been analyzed. To do this, a comparison with state-of-the-art cooperative methods for a specific combinatorial problem, the Tool Switching Problem, has been performed. Results show that deep models are effective at solving this problem, outperforming the metaheuristics proposed in the literature.
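A minimal sketch of the nested (deep) cooperative structure follows; the hill-climbing members, ring topology, minimization convention, and the evaluate/perturb callbacks are illustrative assumptions rather than the paper's actual algorithmic portfolio.

```python
# Minimal sketch: a cooperative group whose members may themselves be cooperative
# groups, giving a "deep" architecture. Members exchange best solutions along a ring.
class HillClimber:
    """Leaf-level optimizer, standing in for any basic metaheuristic (minimization)."""
    def __init__(self, evaluate, perturb, seed):
        self.evaluate, self.perturb, self.best = evaluate, perturb, seed

    def run_epoch(self, steps=50):
        for _ in range(steps):
            cand = self.perturb(self.best)
            if self.evaluate(cand) < self.evaluate(self.best):
                self.best = cand

    def receive(self, sol):
        if self.evaluate(sol) < self.evaluate(self.best):
            self.best = sol

class CooperativeGroup:
    """Members may be HillClimbers or, recursively, other CooperativeGroups."""
    def __init__(self, evaluate, members):
        self.evaluate, self.members = evaluate, members

    @property
    def best(self):
        return min((m.best for m in self.members), key=self.evaluate)

    def run_epoch(self, steps=50):
        for m in self.members:
            m.run_epoch(steps)
        snapshot = [m.best for m in self.members]
        for i, m in enumerate(self.members):           # ring communication topology
            m.receive(snapshot[i - 1])

    def receive(self, sol):
        self.members[0].receive(sol)                   # entry point of the sub-group
```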
We consider the deployment of island-based memetic algorithms (MAs) endowed with self-★ properties on unstable computational environments composed of a collection of computing nodes whose availability fluctuates. In this context, these properties refer to the ability of the MA to work autonomously in order to optimize its performance and to react to the instability of computational resources. The main focus of this work is analyzing the performance of such MAs when the underlying computational substrate is not only volatile but also heterogeneous in terms of the computational power of each of its constituent nodes. For this purpose, we use a simulated environment subject to different volatility rates, whose topology is modeled as a scale-free network and whose computing power is distributed among nodes following different distributions. We observe that, in general, computational homogeneity is preferable in scenarios with low instability; in cases of high instability, MAs without self-scaling and self-healing perform better when the computational power follows a power law, but performance seems to be less sensitive to the distribution when these self-★ properties are used.
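For illustration, one way to instantiate such a heterogeneous substrate is sketched below; the Barabasi-Albert generator, the Pareto sampling, and all parameter values are assumptions for exposition, not necessarily the distributions used in the paper.

```python
# Minimal sketch: a scale-free island topology plus per-node computational power
# drawn from a heavy-tailed (power-law-like) distribution.
import networkx as nx
import numpy as np

n_islands = 64
topology = nx.barabasi_albert_graph(n_islands, m=2)        # scale-free network

rng = np.random.default_rng(1)
power = 1.0 + rng.pareto(a=2.0, size=n_islands)            # heavy-tailed node speeds
power /= power.mean()                                       # normalize mean speed to 1

# Each island advances a number of generations per time step proportional to its speed.
generations_per_step = {node: power[node] for node in topology.nodes}
```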
The use of volatile decentralized computational platforms, such as peer-to-peer networks, is becoming an increasingly popular option to gain access to vast computing resources. Making effective use of these resources requires algorithms adapted to such a changing environment and resilient to resource volatility. We consider the use of a variant of evolutionary algorithms endowed with a classical fault-tolerance technique, namely the creation of checkpoints in safe external storage. We analyze the sensitivity of this approach on different kinds of networks (scale-free and small-world) and under different volatility scenarios. We observe that while this strategy is robust under low-volatility conditions, in cases of severe volatility performance degrades sharply unless a high checkpoint frequency is used. This suggests that other fault-tolerance strategies are required in these situations.
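A minimal sketch of the checkpointing mechanism follows; the file name, the pickle format, the step callback, and the checkpoint frequency are illustrative assumptions, and the external storage is assumed to survive node failures.

```python
# Minimal sketch: periodic checkpointing of an island's population to safe external
# storage, plus restoration of the last saved state after a node failure.
import pickle

CHECKPOINT_EVERY = 10   # generations between checkpoints (assumed frequency)

def save_checkpoint(population, generation, path="island_ckpt.pkl"):
    with open(path, "wb") as fh:
        pickle.dump({"generation": generation, "population": population}, fh)

def load_checkpoint(path="island_ckpt.pkl"):
    with open(path, "rb") as fh:
        state = pickle.load(fh)
    return state["population"], state["generation"]

def evolve(population, step, generations, path="island_ckpt.pkl"):
    """step(pop) -> next population (assumed callback); checkpoints periodically."""
    for g in range(1, generations + 1):
        population = step(population)
        if g % CHECKPOINT_EVERY == 0:
            save_checkpoint(population, g, path)
    return population
```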
This paper deals with the template design problem, a hard constrained combinatorial problem with multiple applications. The problem is formulated here as a two-level combinatorial optimization problem whose solutions are integer matrices. At the higher level, a metaheuristic tackles the design of a collection of templates containing multiple instances of a set of components to be produced; at the lower level, an integer linear programming solver is used to determine the optimal number of times each template has to be pressed in order to fulfill the production requirements as closely as possible. Three metaheuristics (hill climbing, tabu search, and genetic algorithms) have been considered at the higher level, and LPSolve, a simplex-based solver for linear and integer programming problems, at the lower level. An empirical evaluation on three scenarios of increasing complexity has been performed, indicating the better performance of genetic algorithms. These results are comparable to those obtained by sequential ILP models, and hint at the possibility of hybrid approaches.
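For illustration, the lower-level subproblem can be written as an integer linear program along the following lines; the notation is ours, chosen for exposition rather than taken from the paper. With t_{ij} the number of slots that template j devotes to component i, d_i the demand for component i, and x_j the number of pressings of template j, it minimizes total overproduction while covering every demand.

```latex
% Illustrative lower-level ILP (notation assumed).
\begin{align*}
  \min_{x} \quad & \sum_{i}\Bigl(\sum_{j} t_{ij}\,x_{j} - d_{i}\Bigr) \\
  \text{s.t.}  \quad & \sum_{j} t_{ij}\,x_{j} \;\ge\; d_{i} \qquad \forall i, \\
                     & x_{j} \in \mathbb{Z}_{\ge 0} \qquad \forall j.
\end{align*}
```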
Proceedings of the 7th International Joint Conference on Computational Intelligence, 2015
Computational devices with significant computing power are pervasive yet often under-exploited, since they are frequently idle or performing non-demanding tasks. Exploiting this power can be a cost-effective solution for solving complex computational tasks. Device-wise, this computational power can sometimes take the form of stable, long-lasting availability windows, but it will more frequently take the form of brief, ephemeral bursts, mainly in the presence of devices "lent" voluntarily by their users. A highly dynamic and volatile computational landscape emerges from the collective contribution of numerous such devices. Algorithms running on these environments require specific properties in terms of flexibility, plasticity and robustness. Bioinspired algorithms are particularly well suited to this endeavor thanks to their intrinsic features: decentralized functioning, intrinsic parallelism, resilience, and adaptiveness. The latter is essential to exert advanced self-control on the functioning and/or structure of the algorithm. Much has been done in providing self-adaptation capabilities to these techniques, yet the science of self-★ bioinspired algorithms is still nascent, in particular regarding higher-level self-adaptation and self-management in the context of large-scale optimization problems and distributed ephemeral computing technologies. Deploying bioinspired techniques in this scenario will also pave the way for the application of other techniques in this computational domain.
Game design is a fundamental and critical part of the videogame development process, demanding a high cost in terms of time and effort from the team of designers. The availability of tools for assisting in this task is therefore of foremost interest. Such tools cannot just speed up the process and reduce costs, but also improve the overall quality of the results by providing useful suggestions and hints. A conceptual system to approach the construction of this kind of tool is presented in this work. By using a learning component, the preferences and expertise of the designers can be modelled and, to some extent, simulated. This model is subsequently exploited by an optimization component that tries to create adequate game designs. A proof of concept of the system is provided in the context of level design in Metroidvania games. It is shown that the system can produce quality solutions and hints for the designer.
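A minimal sketch of the interplay between the two components is given below; all names and the simple hill-climbing search are hypothetical, and the learned preference model is treated as a black-box scoring callback.

```python
# Minimal sketch: an optimization component that exploits a learned model of the
# designer's preferences to propose candidate level designs.
def suggest_level(initial_level, mutate, preference_model, iterations=1000):
    """preference_model(level) -> estimated designer rating; mutate(level) -> edited copy."""
    best = initial_level
    best_score = preference_model(best)
    for _ in range(iterations):
        cand = mutate(best)                     # small edit to the level layout
        score = preference_model(cand)
        if score > best_score:                  # keep designs the model predicts the designer prefers
            best, best_score = cand, score
    return best
```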
This work studies resampling in evolutionary algorithms applied to combinatorial optimisation problems. For that purpose, three different problems have been chosen: the design of a fuzzy-controller rule base, flowshop scheduling of a production system, and the optimisation of a mathematical function. The results show that the different techniques behave consistently across these problems, and that resampling decreases as the size of the representation grows. Additionally, it is shown that keeping a record of the evolution of the algorithms can notably reduce their execution time when the evaluation function is costly.
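One simple way to exploit such a record when evaluations are expensive is to cache the fitness of previously visited genotypes, as in the sketch below; the tuple-keyed dictionary is an illustrative assumption rather than the paper's exact mechanism.

```python
# Minimal sketch: a cache of previously evaluated genotypes. When the same individual
# is re-sampled, its fitness is recalled instead of re-running a costly evaluation.
def make_cached_evaluator(evaluate):
    cache = {}
    def cached(genotype):
        key = tuple(genotype)              # genotypes assumed convertible to hashable tuples
        if key not in cache:
            cache[key] = evaluate(genotype)
        return cache[key]
    cached.cache = cache                   # exposes the record, e.g. to count resampling events
    return cached
```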
IEEE Transactions on Computational Intelligence and AI in Games, 2016
The classical approach of Competitive Coevolution (CC) applied to games tries to exploit an arms race between coevolving populations that belong to the same species (or at least to the same biotic niche), such as strategies, rules, or racing tracks. This paper proposes the coevolution of entities belonging to different realms (namely biotic and abiotic) via a competitive approach. More precisely, we aim to coevolutionarily optimize both virtual players and game content. From a general perspective, our proposal can be viewed as a method of procedural content generation combined with a technique for generating game Artificial Intelligence (AI). This approach can not only help game designers in game creation but also generate content personalized to both specific players' profiles and the game designer's objectives (e.g., create content that favors novice players over skillful players). As a case study we use Planet Wars, the Real-Time Strategy (RTS) game associated with the 2010 Google AI Challenge contest, and demonstrate the validity of our approach via an empirical study.
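The sketch below outlines the cross-realm evaluation idea: players and maps are evaluated against samples of each other, with content rewarded for challenging strong players. The play callback, sampling sizes, and the "harder content scores higher" objective are illustrative assumptions; the actual designer objectives are configurable in the approach described above.

```python
# Minimal sketch (hypothetical interfaces): competitive coevolution across realms,
# with a population of virtual players and a population of maps evolving together.
import random

def evaluate_players_and_maps(players, maps, play, samples=3):
    """play(player, game_map) -> player's normalized score in [0, 1] (assumed callback)."""
    p_fit = [0.0] * len(players)
    m_fit = [0.0] * len(maps)
    for i, player in enumerate(players):
        for game_map in random.sample(maps, samples):
            p_fit[i] += play(player, game_map) / samples
    for j, game_map in enumerate(maps):
        scores = [play(player, game_map) for player in random.sample(players, samples)]
        m_fit[j] = 1.0 - sum(scores) / samples   # harder maps (lower player scores) score higher
    return p_fit, m_fit
```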
Contents:
Adaptive Tabu Tenure Computation in Local Search
A Conflict Tabu Search Evolutionary Algorithm for Solving Constraint Satisfaction Problems
Cooperative Particle Swarm Optimization for the Delay Constrained Least Cost Path Problem
Effective Neighborhood Structures for the Generalized Traveling Salesman Problem
Efficient Local Search Limitation Strategies for Vehicle Routing Problems
Evolutionary Local Search for the Minimum Energy Broadcast Problem
Exploring Multi-objective PSO and GRASP-PR for Rule Induction
An Extended Beam-ACO Approach to the Time and Space Constrained Simple Assembly Line Balancing Problem
Graph Colouring Heuristics Guided by Higher Order Graph Properties
A Hybrid Column Generation Approach for the Berth Allocation Problem
Hybrid Metaheuristic for the Prize Collecting Travelling Salesman Problem
An ILS Based Heuristic for the Vehicle Routing Problem with Simultaneous Pickup and Delivery and Time Limit
An Immune Genetic Algorithm Based on Bottleneck Jobs for the Job Shop Scheduling Problem
Improved Construction Heuristics and Iterated Local Search for the Routing and Wavelength Assignment Problem
Improving Metaheuristic Performance by Evolving a Variable Fitness Function
Improving Query Expansion with Stemming Terms: A New Genetic Algorithm Approach
Inc*: An Incremental Approach for Improving Local Search Heuristics
Metaheuristics for the Bi-objective Ring Star Problem
Multiobjective Prototype Optimization with Evolved Improvement Steps
Optimising Multiple Kernels for SVM by Genetic Programming
Optimization of Menu Layouts by Means of Genetic Algorithms
A Path Relinking Approach with an Adaptive Mechanism to Control Parameters for the Vehicle Routing Problem with Time Windows
Reactive Stochastic Local Search Algorithms for the Genomic Median Problem
Solving Graph Coloring Problems Using Learning Automata
The 2003 Congress on Evolutionary Computation, 2003. CEC '03.
Allelic representations are based on characterizing points of the search space as variable-size feature sets. Recombination processes are studied here from the point of view of this kind of representation. We focus on the structure of the information units manipulated during the process, and on the algorithmic aspects of this manipulation. To this end, we provide a generic algorithmic template whose sufficiency is established. Moreover, the syntactic properties of the information units manipulated are analyzed and exemplified. This is done within the framework of Forma Analysis.
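As an example of the kind of operator covered by such a template, the sketch below recombines two set-based genotypes while respecting transmission, i.e., every offspring feature is present in at least one parent; this is one possible instantiation for illustration, not the paper's specific template.

```python
# Minimal sketch: recombination of allelic (set-based) representations. Shared
# features are always inherited; exclusive features are inherited at random.
import random

def transmitting_recombination(parent_a, parent_b):
    common = parent_a & parent_b                        # features shared by both parents
    exclusive = parent_a ^ parent_b                     # features carried by only one parent
    inherited = {f for f in exclusive if random.random() < 0.5}
    return common | inherited

child = transmitting_recombination({"f1", "f2", "f3"}, {"f2", "f4"})
```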