Evolutionary Computing - Assignment 1: Specialist Agent
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
Evolutionary Computing, Assignment 1, Specialist agent
© 2021 Copyright held by the owner/author(s).
ACM ISBN 978-x-xxxx-xxxx-x/YY/MM.
https://doi.org/
Evolutionary Computing, Assignment 1, Specialist agent Team 88
can talk about replacement). To perform mating selection, we used tournament selection. This type of selection only considers the fitnesses of a small subset of individuals and selects the best among them: we extract size_tournament random elements from the population, then pick the best one. We repeat this process k times, obtaining k individuals. To perform survivor selection, two main choices are available. The first is (μ + λ)-selection, which considers both the present population and the offspring when selecting the survivors. The second is (μ, λ)-selection, where all the parents are discarded in favor of a typically larger offspring pool. As discussed in [3], section 5.3.2, the latter is usually preferred: it is better at leaving local optima, at forgetting outdated solutions, and at discarding misadapted values of the σ we use for self-adaptive mutation. Because of this, we perform (μ, λ)-selection using best selection: only the μ best individuals are kept, while the rest of the λ · μ offspring are discarded. The literature suggests a λ between 5 and 10, and several tests led us to choose λ = 7.
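The mating selection described above can be sketched as follows. This is a minimal sketch; the function and variable names are ours, not taken from the actual codebase:

```python
import random

def tournament_selection(population, fitness, tournament_size, k):
    """Select k parents; each one is the fittest of a random tournament."""
    parents = []
    for _ in range(k):
        # Extract tournament_size random individuals...
        contenders = random.sample(population, tournament_size)
        # ...and keep only the best among them.
        parents.append(max(contenders, key=fitness))
    return parents
```

With size_tournament = 7 (our tuned value), each draw ignores most of the population, which keeps the selection pressure moderate.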
2.2.5 Parameter tuning and control. Parameter setting is a crucial part of every Artificial Intelligence algorithm, and Genetic Algorithms are no exception. We have to distinguish between parameter control and parameter tuning: the latter happens offline, before a run, while the former modifies the parameters continuously during a run. For parameter tuning, we used hyperopt [1], a Python library that lets users search the hyperparameter space for the best-performing configurations. To speed up testing, we integrated the hyperparameter search into a distributed computing platform, Apache Spark, and set up a cluster of cloud VPSs to run experiments in parallel. The set that achieved the best results was the following:
Parameter           Value
alpha_crossover      0.45
p_crossover          0.75
nodes_1                20
nodes_2                24
p_mutation           0.62
p_ind_mutation       0.70
size_niche              8
size_population        75
size_tournament         7
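As an illustration of the tuning loop, the sketch below stands in for hyperopt's adaptive search with plain random search over a few of the parameters above; the ranges are assumptions, not our actual search space:

```python
import random

# Hypothetical search ranges for a few of the tuned parameters.
SPACE = {
    "p_crossover": (0.5, 1.0),
    "p_mutation": (0.1, 0.9),
    "size_tournament": (2, 10),
}

def sample_config(rng):
    """Draw one configuration uniformly from SPACE."""
    cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in SPACE.items()}
    cfg["size_tournament"] = int(round(cfg["size_tournament"]))
    return cfg

def random_search(objective, n_trials=50, seed=42):
    """Return the best (config, score) found over n_trials random draws."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

hyperopt's fmin runs the same loop but proposes each configuration adaptively from past trials instead of uniformly.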
Parameter control was a concern too: we implemented a self-adaptive mutation step size. This consists of adding the mutation step size σ to the genome of each individual, making it part of evolution. Ideally, we want a high step size at the beginning (fixed to a default value of 1.0) to better explore the search space, and a shift toward exploitation later in the evolution. We mutate the step size as described in section 4.4.1 of [3]:

σ′ = σ · e^(τ · N(0, 1))
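The update rule above can be implemented directly. A minimal sketch, where the value of τ and the lower bound are illustrative choices, not taken from our configuration:

```python
import math
import random

def mutate_step_size(sigma, tau=0.22, floor=1e-4):
    """Log-normal self-adaptation: sigma' = sigma * exp(tau * N(0, 1)).
    The floor keeps the step size from collapsing to zero."""
    sigma_new = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    return max(sigma_new, floor)
```

The child first mutates its own σ and then uses σ′ to mutate its object variables, so the new step size is evaluated through the offspring it produces.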
2.3 Approach 2: Implementation of NEAT
2.3.1 Overview. For the second approach, we decided to study and implement the NeuroEvolution of Augmenting Topologies (NEAT) algorithm created by Kenneth O. Stanley and Risto Miikkulainen [5], which has proved capable of achieving excellent results on our given task [2]. For this purpose, we made use of the NEAT-Python library, a pure-Python implementation of NEAT.

2.3.2 Representation. Every genome of the population is composed of two lists that represent the complete structure of a network: the connection genes and the node genes. Every connection gene specifies its in and out node genes, the weight of the connection, a bit that indicates whether the connection is enabled, and an innovation number.

2.3.3 Reproduction and Mutation. Mutation in NEAT can modify both the structure of the network and the weights of the connections. The weights are modified as in any neuroevolution algorithm, while structural mutation expands the genome space in two different ways. On one hand, an add-connection mutation adds a single connection between two previously unconnected node genes. On the other hand, an add-node mutation splits an existing connection, adding a new node where the old connection used to be. In this way, the genome space is expanded through the creation of networks of varying size. The crossover operator stands out for its simplicity when matching networks with different topologies. Every time a new gene is created through mutation, a global innovation number is incremented and assigned to the new gene. The innovation number of a gene never changes, so the historical origin of every gene is known throughout the evolutionary process. When crossover occurs, the genes of the networks involved are lined up by their innovation numbers, recasting the problem from topological analysis to historical matching, which is significantly simpler.

2.3.4 Speciation, Fitness Sharing, and Minimized Dimensionality. Speciation gives new structures, which usually have reduced fitness, time to evolve into better solutions. In this way, the diversity of the population is preserved and innovation is protected, since individuals compete mostly with other individuals of the same species. The innovation number is also used for this purpose: the more disjoint two genomes are (the more genes with different innovation numbers they carry), the less they have in common, which permits classifying those genomes into different species. NEAT also implements explicit fitness sharing to prevent a single individual from taking over the population. Individuals of the same species adjust their fitness according to the species size, and each species can grow or shrink depending on the average fitness of all species. Finally, speciation and fitness sharing allow NEAT to start the population from minimal networks that grow through structural mutation.

2.3.5 NEAT-Python Framework and Implementation. For the implementation of the algorithm, we used the NEAT-Python framework. This gave us pre-existing classes providing almost everything we needed for the evolutionary process, such as neat.Population, which is initialized randomly and is the basis of the search for new solutions. For our purposes, we wrote a subclass named EvomanPopulation with the customizations we needed. We did the same for the class neat.BaseReporter, writing our own class EvomanReporter. For the rest of the implementation, we made use of the standard classes
that NEAT-Python provides, such as the neat.Configuration class, where all the parameters involved in the evolution are specified, such as the population size, the node-add mutation probability, or the compatibility weight coefficient, to mention a few. We also needed a controller class that maps between the outputs of the NEAT neural network created by a solution and the actions taken by the EvoMan player at any given time.
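The mapping done by the controller can be sketched as follows; the five action names and the 0.5 threshold are assumptions about the EvoMan framework, not taken from our code:

```python
def outputs_to_actions(outputs, threshold=0.5):
    """Threshold the network's continuous outputs into EvoMan's
    binary actions: [left, right, jump, shoot, release]."""
    return [1 if out > threshold else 0 for out in outputs]
```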
2.3.6 Parameter tuning and control. As in our first approach, we used hyperopt as an automated strategy to find optimized values for the neat.Configuration file described before. Some of those values are shown in the next table:
Parameter                             Value
pop_size                                 97
activation_mutate_rate                 0.38
compatibility_disjoint_coefficient     0.94
conn_add_prob                          0.92
...                                     ...
node_add_prob                          0.36
num_hidden                               13
weight_mutate_rate                     0.67
3 Results
3.1 Results obtained with approach 1
Starting from the first generation, the results look interesting: the random initialization of the individuals achieves a low fitness (as expected), but as soon as an offspring is generated (note that λ = 7, meaning that the offspring is quite large) and selection happens, good individuals emerge quickly. The three enemies achieved similar results, with enemy 5 achieving the highest score, followed by 2 and 5. A fitness higher than 90 (reached against all the enemies) means that the player is winning, and doing so fast. The player rarely loses a game, and only by finding itself in uncommon situations.

fitnesses from the first generations already. Approach 2 learns almost as fast as approach 1, but it explores the search space more: the maximum fitness oscillates across the generations, and the average stays further from it than in approach 1. Approach 1, by contrast, shows smoother growth: its average fitness looks like an increasing function. The results are very similar to the ones seen in [2], which is not surprising: that paper uses similar approaches and algorithms to solve the same problem. We can note that the p-values are low, meaning that this comparison is statistically significant. For a deeper insight into these graphs, check out the plots folder in the project.
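The report does not state which statistical test produced the p-values; a stdlib-only sketch of a rank-based two-sample test that could be used for such a comparison (Mann-Whitney U with midranks for ties and a normal approximation, no tie correction):

```python
import math

def mann_whitney_u(a, b):
    """Return (U statistic of sample a, two-sided p-value)."""
    # Rank all observations jointly, averaging ranks over tied blocks.
    combined = sorted((value, idx) for idx, value in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1  # average rank of the tied block
        for t in range(i, j + 1):
            ranks[combined[t][1]] = midrank
        i = j + 1
    n1, n2 = len(a), len(b)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    # Normal approximation for the two-sided p-value.
    mu = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sd
    p = math.erfc(abs(z) / math.sqrt(2))
    return u1, p
```

Feeding it the final best fitnesses of the independent runs of each approach would yield the kind of p-value reported above.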
References
[1] J. Bergstra, D. Yamins, and D. D. Cox. 2013. Making a Science of Model Search: Hy-
perparameter Optimization in Hundreds of Dimensions for Vision Architectures.
In Proceedings of the 30th International Conference on International Conference on
Machine Learning - Volume 28 (ICML’13). JMLR.org, I–115–I–123.
[2] Karine da Silva Miras de Araujo and Fabrício Olivetti de Franca. 2016. Evolving a generalized strategy for an action-platformer video game framework. In 2016 IEEE Congress on Evolutionary Computation (CEC). 1303–1310. https://doi.org/10.1109/CEC.2016.7743938
[3] A.E. Eiben and J.E. Smith. 2015. Introduction to Evolutionary Computing (2nd ed.). Springer. https://doi.org/10.1007/978-3-662-44874-8
[4] Larry J. Eshelman and J. David Schaffer. 1993. Real-Coded Genetic Algorithms and
Interval-Schemata. In Foundations of Genetic Algorithms, L. DARRELL WHITLEY
(Ed.). Foundations of Genetic Algorithms, Vol. 2. Elsevier, 187–202. https://doi.
org/10.1016/B978-0-08-094832-4.50018-0
[5] K.O. Stanley and R. Miikkulainen. 2002. Efficient evolution of neural network topologies. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC'02). 1757–1762. https://doi.org/10.1109/CEC.2002.1004508