https://doi.org/10.1007/s11721-023-00227-2
Received: 31 January 2023 / Accepted: 28 August 2023 / Published online: 9 October 2023
© The Author(s) 2023
Abstract
Research in multi-objective particle swarm optimizers (MOPSOs) progresses by proposing
one new MOPSO at a time. In spite of the commonalities among different MOPSOs, it is
often unclear which algorithmic components are crucial for explaining the performance of
a particular MOPSO design. Moreover, it is expected that different designs may perform
best on different problem families and identifying a best overall MOPSO is a challeng-
ing task. We tackle this challenge here by: (1) proposing AutoMOPSO, a flexible algorith-
mic template for designing MOPSOs with a design space that can instantiate thousands of
potential MOPSOs; and (2) searching for good-performing MOPSO designs given a family
of training problems by means of an automatic configuration tool (irace). We apply this
automatic design methodology to generate a MOPSO that significantly outperforms two
state-of-the-art MOPSOs on four well-known bi-objective problem families. We also iden-
tify the key design choices and parameters of the winning MOPSO by means of ablation.
AutoMOPSO is publicly available as part of the jMetal framework.
1 Introduction
The optimum of these problems is usually not a single solution, but a set of trade-off solutions collectively known as the Pareto set, whose image in the objective space is called the Pareto front. Given a multi-objective problem, the main goal of using metaheuristics is to find an accurate approximation of its Pareto front, where accuracy is measured in terms of the convergence, distribution, and spread of the found solutions with respect to the Pareto front.
The effectiveness of metaheuristics (both single- and multi-objective) depends to a large extent on the correct setting of their algorithmic parameters. The approach usually adopted is to adjust the parameters manually by conducting pilot tests, which is a tedious and unrigorous task. If the final user (i.e., the decision maker) is not an expert in metaheuristics, they will lack the background needed to attempt such a trial-and-error strategy, and will probably opt for a widely known technique, such as NSGA-II, configured with the default settings taken from the paper where the algorithm was originally presented.
In this context, a research line that is gaining momentum is automatic algorithm configuration and, as an extension, the automatic design of metaheuristics. In the first case, the goal is to find, given a set of problems used as a training set, a particular parameter setting that solves them efficiently; the second case is a step forward in which not only the control parameters are considered, but also the components of the algorithm, which can be combined to automatically design a new ad hoc algorithmic variant.
In this paper, we are interested in the automatic design of multi-objective particle swarm optimizers (MOPSOs), which constitute a family of nature-inspired metaheuristics that have proven to be very effective at solving multi-objective problems (Durillo et al., 2009). Particle swarm optimization (PSO) algorithms, proposed by Kennedy and Eberhart (1995), are inspired by the choreography of bird flocks seeking food. The algorithm adopts a population of particles, called a swarm, which is initialized by default with random solutions (i.e., random positions in the design space), and whose behavior is affected by either their local best (i.e., the best within a certain neighborhood) or the best global particle (also known as the leader). Over the generations, particles adapt their beliefs to the most successful solutions found in their environment.
The approach we consider here is the automatic design of MOPSOs, following a meth-
odology already applied to the design of multi-objective ant colony optimization (López-
Ibáñez & Stützle, 2012) and multi-objective evolutionary algorithms (Bezerra et al., 2016,
2020). The main ingredients of this methodology are: (1) a flexible design space that
allows instantiating MOPSO algorithms by configuring the parameters of some software
framework and (2) the application of automatic configuration tools to search for designs
given a set of (training) problem instances and a unary quality metric, such as hypervol-
ume (Zitzler & Thiele, 1998). Therefore, one of our contributions is a flexible design space
for MOPSOs implemented in the jMetal framework (Durillo & Nebro, 2011). Our overall
goal is to illustrate the automatic design methodology in the case of MOPSOs for several
well-known problem families.
This paper is an extension of Doblas et al. (2022), where we presented an approach based on combining the jMetal framework with the irace package such that, given a design space of MOPSO components, we could find the best configurations for a set of test suites. The study showed that the AutoMOPSOs found outperformed reference algorithms, including SMPSO (Nebro et al., 2009) and OMOPSO (Reyes Sierra & Coello Coello, 2005), when applied to the solution of three problem families (21 problems in total). Here, we increase the set of benchmark problems, reduce the evaluation budget to make the problems more challenging, and conduct a more in-depth analysis.
Concretely, our aim is to answer the following research questions (RQs):
RQ1. Is it possible to find designs of MOPSO algorithms, using automatic design, that are able to outperform state-of-the-art algorithms such as SMPSO and OMOPSO?
RQ2. Is the auto-design process able to obtain a configuration that yields accurate results when a benchmark of realistic problems is added to the study?
RQ3. What are the design parameters that have the strongest effect on the performance of MOPSO algorithms, given the design space considered here?
RQ4. Can a configuration obtained for a given test suite avoid overfitting and generalize to many other problems?
These questions are addressed in the discussion section in light of the results obtained after performing a series of experiments. In this regard, a new set of experiments has been conducted in the current study on an extended benchmark containing different problem families, together with an enriched set of algorithmic components and parameters. All this allows us to carry out an in-depth analysis of the designs found, using ablation analysis (Fawcett & Hoos, 2016) with the goal of determining which parameters have the most influence on the MOPSO variants under consideration.
This paper is organized as follows. The next section reviews previous related work to provide the proper context for our proposal. Section 3 describes the methods and tools employed, explaining the design and development decisions taken to produce the AutoMOPSO approach. Extended experiments and comparisons are provided in Sect. 4. Section 5 presents a series of discussions aimed at answering the research questions indicated previously. Finally, our conclusions and some possible future steps in this research line are described in Sect. 6.
2 Related work
Automatic algorithm configuration (Birattari, 2009) is the idea of using optimization meth-
ods for automatically setting the parameter values of an algorithm to maximize a quality
metric, such as expected solution cost, over a given set of problem instances. To approach
this idea, automatic configuration methods, such as irace (López-Ibáñez et al., 2016), have emerged as black-box optimization methods able to handle integer, real, and categorical parameters, which may also be conditional on the values of other parameters. In addition, such methods must handle the stochasticity of the algorithm being tuned and use the given computational budget intelligently, by selecting how many and which problem instances are used to evaluate the performance of a candidate configuration.
Automatic design extends this idea (López-Ibáñez & Stützle, 2012; KhudaBukhsh et al.,
2016; Stützle & López-Ibáñez, 2019) by considering flexible algorithmic frameworks from
which different algorithmic designs can be instantiated by appropriately setting parame-
ter values. In automatic design, the challenge is the definition of such frameworks, which must be flexible enough to define a large space of potentially useful algorithmic designs, while at the same time reducing as much as possible the redundancy between alternative designs (Stützle & López-Ibáñez, 2019).
If we focus on PSO, Frankenstein's PSO (Montes de Oca et al., 2009) is a proposal to combine different components of single-objective PSOs, but without using an automatic configuration approach. The PSO-X framework (Camacho-Villalón et al., 2021) applies
automatic design to PSOs, but it is also focused on single-objective optimization algo-
rithms. Thus, many design decisions that are only relevant for multi-objective problems
(such as the role of archived particles in the velocity update) are not taken into account in
these proposals.
To the best of our knowledge, de Lima & Pozo (2017) is the only work that has studied
automatic design of MOPSO algorithms. In that work, the design space is given as a con-
text-free grammar which is tuned for the DTLZ (Deb–Thiele–Laumanns–Zitzler) problem
family (Deb et al., 2005) by using both grammatical evolution and irace. The automatically
designed MOPSOs are compared with respect to SMPSO. Although their methodology is similar to ours, we focus here on a larger design space (25 parameters, whereas ten were considered by de Lima & Pozo (2017)) and on the analysis of the algorithmic components that contribute the most to the observed performance. In addition, we consider a larger set of problem families. In this work, we focus on bi-objective problems and have chosen four problem families with the goal of understanding which of them lead to more general MOPSO designs when used as training sets: the Zitzler–Deb–Thiele (ZDT) (Zitzler et al., 2000), Deb–Thiele–Laumanns–Zitzler (DTLZ) (Deb et al., 2005), Walking-Fish-Group (WFG) (Huband et al., 2006), and RE (Tanabe & Ishibuchi, 2020) families. We also compare the automatically designed MOPSOs not only to SMPSO, but also to OMOPSO, since the latter sometimes outperforms the former.
Finally, as an additional contribution, we make available the resulting framework as part
of jMetal, which will allow practitioners and researchers to apply automatic design and the
analysis shown here to other problem scenarios.
The proposed process to carry out the automatic design of a metaheuristic is depicted in Fig. 1. The starting point is an algorithmic template, which defines the behavior of the baseline algorithm to be designed, and a parameter space, which defines the parameters and components (including their relationships) that can be combined in the template to produce a particular algorithm. Then, after selecting a number of optimization problems as the training set, an automatic configuration tool is executed to find a particular configuration that yields an instantiated algorithm able to efficiently optimize the problems in the training set. Depending on the diversity of the training problems and their representativeness of a larger family of problems, the resulting algorithm may either be biased toward them, leading to overfitting, or be more generic and perform well on other similar problems.
When the template is tailored to a specific algorithm and the parameter space merely
describes the usual (typically numerical) parameters of that algorithm, the above process
corresponds to automatic configuration. An example is the work presented by Nebro et al. (2019), where the specific algorithm is NSGA-II and the goal was to automatically configure its parameters, such as the population size and mutation rate. The concept of automatic design goes a step further and implies building a new meta-algorithm or a generic algorithmic template where some (high-level) parameters represent design choices that distinguish between different algorithms.
tinguish between different algorithms. Setting those high-level parameters appropriately
allows instantiating specific algorithms. In addition, specific algorithms will have a number
of low-level parameters that are sometimes shared among different potential designs, but
are often disabled for specific high-level designs.
In this work, we start from a canonical MOPSO algorithm (described in Sect. 3.1) and,
from it, we propose an algorithmic template, called AutoMOPSO, for the automatic design
of multi-objective particle swarm optimizers, which is in turn detailed in Sect. 3.2. Some
elements of this template can be chosen from a number of alternative options, including
various numerical settings. Those configurable elements, together with their domains and
various dependencies that exist among them, give rise to a parameter space (Sect. 3.3). By
assigning values to the parameters of the AutoMOPSO template, we can instantiate and
run a specific MOPSO.
We implemented AutoMOPSO in the jMetal framework, which is a library for multi-
objective optimization with metaheuristics available in Java (Durillo & Nebro, 2011; Nebro
et al., 2015). Given a string representing pairs of parameters and values, our implementation
of AutoMOPSO creates the corresponding MOPSO algorithm. When running this MOPSO
algorithm on a particular problem up to a given termination criterion, the result is the value
of a unary indicator measuring the quality of the obtained Pareto front approximation.
To search in the parameter space of AutoMOPSO, the automatic configuration method
adopted was irace (López-Ibáñez et al., 2016), which is an R package implementing an
elitist iterated racing strategy for the automatic tuning of optimization algorithms. Given
the parameter space and a set of training problems, irace calls the jMetal implementation
of AutoMOPSO to generate specific MOPSO designs, then evaluates them on problems
selected from the training set and captures the returned quality indicator values to compare
alternative MOPSO designs.
A generic PSO works by iteratively generating new particle positions in a given problem search space. The position $x_i$ is a set of Cartesian coordinates describing a point in the solution space. Each new position is calculated at each iteration $t$ as follows:

$$x_i^{t+1} = x_i^t + v_i^{t+1} \quad (1)$$

where $v_i^{t+1}$ is the velocity vector of particle $i$. This velocity is updated using the following formula:

$$v_i^{t+1} = \omega \cdot v_i^t + U^t[0, \varphi_1] \cdot (p_i^t - x_i^t) + U^t[0, \varphi_2] \cdot (b_i^t - x_i^t) \quad (2)$$

where $p_i^t$ is the personal best position that particle $i$ has ever stored, and $b_i^t$ is the position found by the member of its neighborhood that has had the best performance so far (also known as leader or global best). The acceleration coefficients $\varphi_1$ and $\varphi_2$ control the relative influence of the personal and global best positions, and $U^t[0, \varphi_i]$ is a diagonal matrix with elements distributed uniformly at random in the interval $[0, \varphi_i]$. Finally, $\omega$ is called the inertia weight and influences the trade-off between exploitation and exploration.
An alternative version of the velocity equation was proposed by Clerc and Kennedy (2002), where the constriction coefficient $\chi$ (which arises from the observation that the velocity of some particles keeps growing unless restrained in some way) is adopted instead of the inertia weight $\omega$:

$$v_i^{t+1} = \chi \cdot \left( v_i^t + U^t[0, \varphi_1] \cdot (p_i^t - x_i^t) + U^t[0, \varphi_2] \cdot (b_i^t - x_i^t) \right) \quad (3)$$

where the constriction coefficient $\chi$ is calculated from the two acceleration coefficients $\varphi_1$ and $\varphi_2$ as:

$$\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|} \quad \text{where } \varphi = \varphi_1 + \varphi_2 \text{ and } \varphi > 4 \quad (4)$$

This constriction method results in convergence over time, and the amplitude of the particles' oscillations decreases along the optimization process.
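As a worked example (the setting $\varphi_1 = \varphi_2 = 2.05$ is a common choice from the PSO literature, not a value taken from this study), Eq. 4 gives

$$\varphi = 4.1, \qquad \chi = \frac{2}{\left|\,2 - 4.1 - \sqrt{4.1^2 - 4 \cdot 4.1}\,\right|} = \frac{2}{2.1 + \sqrt{0.41}} \approx 0.7298,$$

so every velocity component is damped to roughly 73% of its unconstrained value at each iteration.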
In order to adapt the canonical PSO to the multi-objective domain, we consider that the
global best, instead of being a single particle, is implemented as a bounded archive con-
taining the non-dominated solutions found so far. This means that a strategy for choosing
the concrete global best particle must be adopted to apply the velocity update formula. We
also assume a perturbation method that can be optionally applied, with the aim of improv-
ing some aspect of the search process (e.g., fostering diversification).
With these considerations, the pseudo-code defining the working of a generic MOPSO is presented in Listing 1. The first steps are to create the initial swarm and evaluate it (lines 1 and 2). Next, the velocity, local best, and global best archive are initialized (lines 3–5), and then the main loop of the algorithm starts and is executed until a termination condition is met (line 6). In the body of the loop, the position and velocity of the particles are updated, and they can possibly be modified by a perturbation function; after that, the swarm is re-evaluated, and the local best and global best archives are updated. When the loop ends, the algorithm returns the global best archive as the result.
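This loop can be sketched in Java as follows; all type and method names are illustrative and do not correspond to the actual jMetal API.

import java.util.List;

/** Minimal sketch of the generic MOPSO loop described above (cf. Listing 1).
 *  All identifiers are illustrative; the actual jMetal components differ. */
abstract class GenericMOPSO<P> {
  abstract List<P> createInitialSwarm();                        // line 1
  abstract void evaluate(List<P> swarm);                        // line 2
  abstract void initializeVelocities(List<P> swarm);            // line 3
  abstract List<P> initializeLocalBest(List<P> swarm);          // line 4
  abstract List<P> initializeGlobalBestArchive(List<P> swarm);  // line 5
  abstract boolean terminationConditionIsMet();                 // line 6
  abstract void updateVelocitiesAndPositions(List<P> swarm, List<P> localBest, List<P> globalBest);
  abstract void perturb(List<P> swarm);                         // optional perturbation (e.g., mutation)
  abstract void updateLocalBest(List<P> swarm, List<P> localBest);
  abstract void updateGlobalBestArchive(List<P> swarm, List<P> archive);

  List<P> run() {
    List<P> swarm = createInitialSwarm();
    evaluate(swarm);
    initializeVelocities(swarm);
    List<P> localBest = initializeLocalBest(swarm);
    List<P> globalBest = initializeGlobalBestArchive(swarm);
    while (!terminationConditionIsMet()) {
      updateVelocitiesAndPositions(swarm, localBest, globalBest);
      perturb(swarm);
      evaluate(swarm);
      updateLocalBest(swarm, localBest);
      updateGlobalBestArchive(swarm, globalBest);
    }
    return globalBest; // the leader archive is the result
  }
}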
Our proposed AutoMOPSO template is presented in Listing 2. It depicts the actual jMetal
code, and we can observe that it closely mimics the pseudo-code in Listing 1. A key point
in defining the template is that all the steps (e.g., create initial swarm, velocity initializa-
tion, etc.) are components that can be set at configuration time, thus leading to different
MOPSO variants. jMetal includes a catalog of components, providing one or more specific
implementations of each one. For example, the creation of the initial swarm assigns ran-
dom values to the position of the particles by default, but other strategies can be adopted,
as shown in the next section.
It is worth commenting on the velocity update component, which updates the velocity $v_i^{t+1}$ from $v_i^t$ by applying equations such as Eq. 2 or Eq. 3. In our template, the strategies for selecting the global best and for adjusting the inertia weight are also parameters that can be configured.
Regarding the perturbation, in our template we assume that particles can be modified by applying mutation operators (which are available in jMetal as variation operators of genetic algorithms) with a given frequency.
The configurable components of the AutoMOPSO template and their available choices
define the design space of AutoMOPSO. These components may have their own param-
eters that also need to be configured; examples are the aforementioned velocity update
or the probability of the mutations used as perturbations. All these parameters lead to
the design space shown in Table 1. The table shows the domain of each parameter as
a range of integers (ℕ) or real values (ℝ ), in the case of numerical parameters, or as a
set of values in the case of categorical parameters. Some (conditional) parameters only
have an effect for certain values of other parameters; for example, tournamentSize only applies when globalBestSelection is set to tournament.
The mutationRepairStrategy parameter determines how a decision variable whose value falls outside its bounds after mutation is repaired; three schemes are available (a sketch of them in code follows the list):
• random: the variable takes a uniform random value within the bounds.
• bounds: if the value is lower/higher than the lower/upper bound, the variable is assigned the lower/upper bound, respectively (this is called saturation in Caraffini et al. (2019)).
• round: if the value is lower/higher than the lower/upper bound, the variable is assigned the upper/lower bound, respectively (i.e., this could be named inverse or reverse saturation, following the nomenclature of Caraffini et al. (2019)).
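A minimal sketch of these three repair schemes in Java (the method names are ours, for illustration only, and do not correspond to the jMetal API):

import java.util.concurrent.ThreadLocalRandom;

final class RepairStrategies {
  /** random: an out-of-bounds value is replaced by a uniform random value within the bounds. */
  static double random(double v, double lower, double upper) {
    return (v < lower || v > upper) ? ThreadLocalRandom.current().nextDouble(lower, upper) : v;
  }

  /** bounds (saturation): the value is clamped to the violated bound. */
  static double bounds(double v, double lower, double upper) {
    return Math.min(Math.max(v, lower), upper);
  }

  /** round (inverse saturation): the value is sent to the opposite bound. */
  static double round(double v, double lower, double upper) {
    if (v < lower) return upper;
    if (v > upper) return lower;
    return v;
  }
}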
There are four strategies for computing the inertia weight 𝜔 in Eq. 2 (inertiaWeightStrategy): constant; random within the range [0.1, 1.0]; and linearly increasing and linearly decreasing, with minimum and maximum weight values in the ranges [0.1, 0.5] and [0.5, 1.0], respectively.
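These four strategies can be sketched as follows (illustrative names; the evaluations/maxEvaluations pair used to measure the progress of the run is our assumption):

import java.util.concurrent.ThreadLocalRandom;

final class InertiaWeight {
  /** constant: the weight never changes. */
  static double constant(double w) { return w; }

  /** random: a new weight is drawn at each update (paper: range [0.1, 1.0]). */
  static double random(double min, double max) {
    return ThreadLocalRandom.current().nextDouble(min, max);
  }

  /** Linearly decreasing from wMax to wMin as the run progresses. */
  static double linearDecreasing(double wMin, double wMax, int evaluations, int maxEvaluations) {
    return wMax - (wMax - wMin) * evaluations / (double) maxEvaluations;
  }

  /** Linearly increasing from wMin to wMax as the run progresses. */
  static double linearIncreasing(double wMin, double wMax, int evaluations, int maxEvaluations) {
    return wMin + (wMax - wMin) * evaluations / (double) maxEvaluations;
  }
}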
As indicated in the previous section, there are two alternatives for updating the particles' velocity: defaultVelocityUpdate (Eq. 2) and constrainedVelocityUpdate (Eq. 3). The 𝜑1 and 𝜑2 coefficients (named c1 and c2) take values within the ranges [1.0, 2.0] and [2.0, 3.0], respectively; these ranges are set to cover most of the values used in the literature for these parameters (Clerc, 2012). The globalBestSelection parameter defines how the global best particle $b_i^t$ is selected from the leader archive for updating the velocity; the two choices are random and N-ary tournament, where N is a parameter (tournamentSize) between 2 (i.e., binary tournament) and 10.
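For illustration, the N-ary tournament over the leader archive can be sketched as follows; this is a sketch under the assumption that a comparator (e.g., one based on a density estimator such as the hypervolume contribution) orders archive members from best to worst, and it is not the actual jMetal component:

import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

final class GlobalBestSelection {
  /** Draws tournamentSize random members from the leader archive and returns the best
   *  one according to cmp (smaller means better under the chosen comparator). */
  static <S> S tournament(List<S> leaderArchive, int tournamentSize, Comparator<S> cmp) {
    ThreadLocalRandom rng = ThreadLocalRandom.current();
    S best = leaderArchive.get(rng.nextInt(leaderArchive.size()));
    for (int i = 1; i < tournamentSize; i++) {
      S challenger = leaderArchive.get(rng.nextInt(leaderArchive.size()));
      if (cmp.compare(challenger, best) < 0) {
        best = challenger;
      }
    }
    return best;
  }
}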
When the default non-configurable positionUpdate component applies Eq. 1, the
resulting position can be out-of-bounds. In this case, the position is set to the bound value
(in the same way as the bounds scheme in the mutationRepairStrategy and the
Table 1 (excerpt): the non-configurable components localBestInitialization, localBestUpdate, globalBestInitialization, globalBestUpdate, and positionUpdate currently have a single (default) implementation. The leader archive has size 100. Parameters and components in boldface are new additions with respect to Doblas et al. (2022)
In this section, we give some details about how irace is used to find AutoMOPSO configurations. As shown in Fig. 1, irace requires three inputs: the parameter space, the algorithm template (i.e., AutoMOPSO), and the training problems. jMetal automatically generates the parameter space description required by irace, which contains the same information as Table 1. This description is stored in a file that is already included in the jMetal project.
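For illustration, a few entries of such a description, written in irace's parameter-file syntax, might look as follows. This is an assumed excerpt (in particular, the command-line switches and the swarmSize domain are our assumptions); the categorical values and the tournamentSize and c1/c2 ranges are taken from the design space described in Sect. 3.3:

# name                 switch                    type  domain
swarmSize              "--swarmSize "            i     (10, 200)
leaderArchive          "--leaderArchive "        c     (crowdingDistance, hypervolume)
globalBestSelection    "--globalBestSelection "  c     (random, tournament)
tournamentSize         "--tournamentSize "       i     (2, 10) | globalBestSelection == "tournament"
c1Min                  "--c1Min "                r     (1.0, 2.0)
c1Max                  "--c1Max "                r     (2.0, 3.0)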
Irace starts by sampling uniformly at random a number of valid configurations from the design space and calling an executable program (AutoMOPSO in our case) with them on a small number of training problems. Each execution must return a value that measures the quality of the configuration on that particular problem instance. The main goal of irace is to find the configuration minimizing the expected value of that measure. Concretely, in this work we have used the hypervolume quality indicator (Zitzler & Thiele, 1998), which takes into account the convergence, distribution, and spread of the Pareto front approximations found by the algorithms. The higher the hypervolume value, the better the front in terms of those properties; since irace assumes that the measure has to be minimized, we multiply the hypervolume by −1. In principle, irace may use any unary quality indicator, such as the additive epsilon or the inverted generational distance, and previous studies have investigated the combination of multiple indicators within irace (Bezerra et al., 2020).
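To make the quality measure concrete, the following self-contained Java sketch computes the hypervolume of a bi-objective front (both objectives minimized) with respect to a reference point, and negates it because irace expects a value to be minimized; this is a simplified 2-D computation for illustration, not jMetal's implementation.

import java.util.Arrays;
import java.util.Comparator;

final class Hypervolume2D {
  /** front: points {f1, f2}; ref: a reference point dominated by every front point. */
  static double compute(double[][] front, double[] ref) {
    double[][] pts = Arrays.stream(front)
        .filter(p -> p[0] < ref[0] && p[1] < ref[1])               // keep points inside the reference box
        .sorted(Comparator.comparingDouble((double[] p) -> p[0]))  // ascending f1
        .toArray(double[][]::new);
    double volume = 0.0;
    double prevF2 = ref[1];
    for (double[] p : pts) {
      if (p[1] < prevF2) {                            // skip points dominated by earlier ones
        volume += (ref[0] - p[0]) * (prevF2 - p[1]);  // add the rectangle contributed by p
        prevF2 = p[1];
      }
    }
    return volume;
  }

  public static void main(String[] args) {
    double[][] front = {{0.0, 1.0}, {0.5, 0.5}, {1.0, 0.0}};
    double hv = compute(front, new double[] {1.1, 1.1});  // 0.46 for this front
    System.out.println("hypervolume = " + hv + ", value returned to irace = " + (-hv));
  }
}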
Within irace, configurations are evaluated by means of statistical racing, which is a
classical method for lazy selection under uncertainty. Within each race, configurations are
evaluated on an increasing number of problem instances and poor performing configura-
tions are removed as soon as possible. The goal of racing is to compare good configura-
tions on a large number of (training) problem instances, while avoiding wasting evaluations
on poor performing ones. After a race, the surviving configurations are used to update a
probabilistic sampling model, which is then used to sample new configurations. The num-
ber of configurations sampled at each iteration is dynamically adjusted and depends on the
number of parameters and the remaining computational budget. Surviving and new con-
figurations are used together to start a new race. This process of sampling and racing is
repeated until a given computational budget is consumed, typically a maximum number
of evaluations. Full details about the internal algorithm used by irace are available in the
original publication (López-Ibáñez et al., 2016).
We now give some implementation details of AutoMOPSO. The main requirement we considered when designing it was that it had to be flexible enough to produce a particular MOPSO algorithm from any configuration generated by irace. From our experience with jMetal, we were aware that the architecture presented in Nebro et al. (2015) lacked the flexibility we needed, so we designed a component-based architecture (Nebro et al., 2019) that has been adopted to implement a template for generic MOPSO algorithms.
This section explains the experiments conducted in this study, as well as the results obtained from them. It starts with the experimental setup and the different design options of AutoMOPSO. A comparative study and an ablation analysis are then carried out to support the discussion and the final conclusions.
4.1 Experimental setup
Once we have the algorithm template and the parameter space, keeping in mind the auto-
design process in Fig. 1, the last required ingredient is to select the problems to be used as
the training set. As mentioned earlier, we focus on bi-objective problems and, in particular,
the four problem families, ZDT, DTLZ, WFG, and RE, which are briefly described next.
The ZDT test suite (Zitzler et al., 2000) is composed of six bi-objective problems (we
do not consider the ZDT5 problem, as it is binary) that can scale in the number of deci-
sion variables, keeping the Pareto optimal front in the same location. Both the DTLZ (Deb
et al., 2005) and WFG (Huband et al., 2006) test suites are scalable in both decision and
objective space, having seven and nine problems, respectively. The RE problem fam-
ily (Tanabe & Ishibuchi, 2020) is the result of collecting real-world problems from the literature. RE contains 16 test instances, of which five have two objectives. A summary
of the main features of the problems is included in Table 2.
The methodology applied consists of running irace once per problem family, using that problem family as the training set. As a result, we obtain four different MOPSO algorithms instantiated from AutoMOPSO (the AutoMOPSO designs). Since irace dynamically adjusts the population size and the number of runs performed per problem instance, these values are not fixed a priori. We provide irace with the SMPSO and OMOPSO configurations, which can be instantiated from our AutoMOPSO template (as shown in Table 3), as two initial configurations, while irace generates additional initial configurations by sampling uniformly from the parameter space (as explained in Sect. 3.4). Each run of irace is given a maximum budget of 100,000 runs of MOPSO algorithms instantiated from AutoMOPSO. Each run of a MOPSO algorithm stops after 10,000 solution evaluations, as recommended by Tanabe and Ishibuchi (2020),³ and returns up to 100 non-dominated solutions (due to the bounded size of the leaderArchive).
¹ https://jmetal.readthedocs.io/en/latest/component.html
² https://jmetal.readthedocs.io/en/latest/autoconfiguration.html#automopso
³ We reduced the number of solution evaluations from the 25,000 used by Doblas et al. (2022) to keep all problems challenging.
Table 2 Summary of the main features of the benchmark problems

Problem  n   Properties
ZDT1     30  Convex
ZDT2     30  Concave
ZDT3     30  Disconnected
ZDT4     10  Convex, multi-modal
ZDT6     10  Concave
DTLZ1     7  Linear, multi-modal
DTLZ2    12  Concave
DTLZ3    12  Concave, multi-modal
DTLZ4    12  Concave, biased
DTLZ5    12  Degenerate
DTLZ6    12  Degenerate
DTLZ7    22  Disconnected
WFG1      6  Mixed, biased
WFG2      6  Convex, disconnected, multi-modal, non-separable
WFG3      6  Linear, degenerate, non-separable
WFG4      6  Concave, multi-modal
WFG5      6  Concave, deceptive
WFG6      6  Concave, non-separable
WFG7      6  Concave, biased
WFG8      6  Concave, biased, non-separable
WFG9      6  Concave, biased, multi-modal, deceptive, non-separable
RE21      4  Convex
RE22      3  Mixed
RE23      4  Mixed, disconnected
RE24      3  Convex
RE25      2  Mixed, disconnected

The numbers of variables (n) of the ZDT, DTLZ, and WFG problems are their default values
Within irace, the quality of a MOPSO run is evaluated according to the hypervolume value of the returned leaderArchive.
To run irace, we used 64 cores of computers provided by the Supercomputing and Bioinnovation Center (SCBI) of the University of Malaga. The total computing time of each irace execution was roughly 8 h. Although this time may be considered high, it should be borne in mind that this process only has to be done once, and manually fine-tuning the algorithms usually takes considerably longer.
Table 3 includes the settings of SMPSO, OMOPSO, and the four AutoMOPSO designs found by irace. We will use the term AutoMOPSOx, where the subscript x takes the values z, d, w, and re, to refer to the variants corresponding to the ZDT, DTLZ, WFG, and RE problem families, respectively. A first glance shows that only two parameters are common to all the AutoMOPSO variants: the hypervolume-based leader archive and the tournament-based global best selection.
Table 3 Settings of the MOPSO algorithms

Parameter                 SMPSO        OMOPSO      AutoMOPSOz  AutoMOPSOd   AutoMOPSOw  AutoMOPSOre
uniformMutationPert       –            0.5         –           –            –           –
nonUniformMutationPert    –            –           0.1002      –            –           –
polynomialMut.Dist.Index  20.0         –           –           –            105.8681    –
linkedPol.Mut.Dist.Index  –            –           –           151.3372     –           307.3590
mutationFrequency         6            3           10          6            9           6
inertiaWeightStrategy     Constant     Random      Constant    LinearDec    LinearInc   Constant
weight                    0.1          –           0.12        –            –           0.11
weightMin                 –            0.1         –           0.25         0.20        –
weightMax                 –            0.5         –           0.96         0.59        –
velocityUpdate            Constrained  Default     Default     Constrained  Default     Default
c1Min                     1.5          1.5         1.89        1.75         1.29        1.29
c1Max                     2.5          2.0         2.25        2.22         2.56        2.78
c2Min                     1.5          1.5         1.50        1.46         1.33        1.12
c2Max                     2.5          2.0         2.90        2.07         2.34        2.73
globalBestSelection       Tournament   Tournament  Tournament  Tournament   Tournament  Tournament
tournamentSize            2            2           9           5            8           2
velocityChangeLowerLimit  −1.0         −1.0        0.02        0.86         −0.94       0.99
velocityChangeUpperLimit  −1.0         −1.0        −0.90       0.06         −0.82       0.75

(CD: crowding distance, HV: hypervolume contribution, LHS: Latin hypercube sampling, SS: scatter search). The subscripts z, d, w, and re in AutoMOPSO stand, respectively, for the designs generated from the ZDT, DTLZ, WFG, and RE problems. A value of '–' means that the parameter is disabled
The hypervolume-based archive was also selected in the experiments reported by Doblas et al. (2022), but all the other AutoMOPSO parameters have different values now.
Besides the inclusion of new parameter values, such as linked polynomial-based muta-
tion (Zille et al., 2016) or the SPSO2007 and SPSO2011 velocity initialization strategies,
the fact that the stopping condition has been reduced from 25,000 function evaluations to
10,000 makes the two studies not directly comparable.
If we compare the AutoMOPSO designs with SMPSO, we see that only AutoMOPSOd adopts the constrained velocity update, which is the main feature of SMPSO, although it differs in the rest of the elements. Moreover, the velocity change coefficients at the lower and upper boundaries are both −1.0 in SMPSO, while in AutoMOPSOd the values are 0.86 at the lower limit (the velocity remains almost unchanged) and 0.06 at the upper limit (the velocity is set close to 0.0).
4.3 Comparative study
Once we have found four AutoMOPSO variants from different training sets, the next step
is to make a comparative study to assess their performances. We include NSGA-II in this
study to use it as a reference, and it is configured with commonly used settings: population
size of 100, simulated binary crossover SBX (probability = 0.9, distribution index = 20),
and polynomial-based mutation (probability = 1/n, distribution index = 20).
For this comparison, we perform 25 independent runs with different random seeds of
each algorithm per problem. The quality of the obtained Pareto front approximations is
evaluated using two indicators: unary additive epsilon ( I𝜖+) (Zitzler et al., 2003), which
gives a measure of the approximation distance to a reference set, typically a best-known
approximation to the Pareto front; and hypervolume ( IHV) (Zitzler & Thiele, 1998), which
measures convergence, distribution, and spread.
We report the median and interquartile range of the indicator values, as well as the
results of applying the Wilcoxon rank sum test to check whether the differences are sig-
nificant (with a confidence level of 95%) with respect to a reference algorithm.
Table 4 includes the I𝜖+ results. We include the variant AutoMOPSO z in the last col-
umn as the reference algorithm, as it is the best one according to Friedman’s statistical
ranking (see Table 6). Cells with bold and italic text highlight the best and second best
values, respectively. The symbols +, − and ≈ in the cells indicate that the differences
with the reference algorithm are significantly better, worse, or there is no difference
according to the pairwise Wilcoxon rank sum test.
At a glance, we observe in Table 4 that the representative SMPSO and OMOPSO
algorithms are outperformed by the AutoMOPSO variants, achieving the best result in
only one problem (OMOPSO on WFG5); NSGA-II, the only evolutionary algorithm in
the study, gets the lowest indicator values on instances WFG4 and RE23. If we focus
on the variants AutoMOPSO z , AutoMOPSOd , AutoMOPSOw , and AutoMOPSOre and
the values obtained by them on the ZDT, DTLZ, WFG, and RE problems, we see that,
in general, each of them performs best on the benchmark used as training set when gen-
erating them, although they do not achieve the best I𝜖+ values in all the problems. For
example, AutoMOPSO z finds the Pareto front approximations with best I𝜖+ values on
instances ZDT1, ZDT2, ZDT3, and ZDT6, but it is the worst performing algorithm on
ZDT4. This indicates that irace was not able to find a compromise design for the five
ZDT problems. A similar behavior is observed in the other AutoMOPSO variants.
The summary of the Wilcoxon rank sum test (last row of Table 4) confirms, as does the Friedman ranking, that AutoMOPSOz is the solver achieving the best overall results. It obtains the highest number of significantly better results in all the pairwise comparisons with the rest of the algorithms, with AutoMOPSOre being the variant with the highest number of ties (11 out of the 26 problems).
The findings with I𝜖+ are confirmed when analyzing the values of the IHV quality indicator in Table 5 and Friedman's ranking in Table 7. We observe that some cells have a hypervolume value of 0.0, which means that the obtained Pareto front approximations are dominated by the reference point obtained from the reference Pareto fronts; consequently, they do not contribute to the hypervolume. In this sense, the three instances where OMOPSO, AutoMOPSOz, AutoMOPSOw, and AutoMOPSOre get an IHV equal to 0.0 are ZDT4, DTLZ1, and DTLZ3, which are multi-modal problems (see Table 2).
Friedman’s test ranks the algorithms for each problem separately, so the lower the
ranking, the better the algorithm performs. Then, together with Friedman’s ranking, a
Holm’s post hoc for multiple comparisons has been applied to check if there are signifi-
cant differences between the resulting distributions. In this case, following the method-
ology proposed in Derrac et al., (2011), Tables 6 and 7 show the statistical results for
I𝜖+ and IHV , respectively, where symbol * indicates the best ranked technique accord-
ing to Friedman (then used as the control algorithm), and the fourth column contains
the Holm’s p value of each technique with regard to the control one. It is worth noting
that AutoMOPSO z is used as the control algorithm for the two indicators, and all the
remaining techniques reached ≤ 0.05 adjusted p value, hence rejecting the null hypoth-
eses of similar distributions (with regard to AutoMOPSO z ).
We have included two figures in "Appendix A", Figs. 5 and 6, containing the boxplots of the I𝜖+ and IHV quality indicator values, respectively, so that the dispersion of the data, including outliers, can be inspected visually. Additionally, we report in "Appendix B" the tables resulting from applying the Kolmogorov–Smirnov test (Sheskin, 2011), a nonparametric test to determine whether two samples are drawn from different distributions.
From a graphical viewpoint, Fig. 2 shows the fronts obtained by SMPSO, OMOPSO, and the four AutoMOPSO variants for problem ZDT3, to exemplify our results. We observe that SMPSO, OMOPSO, and AutoMOPSOw are not able to find a front converging to the reference Pareto front within the 10,000 evaluations set as our stopping condition. AutoMOPSOw has a lower I𝜖+ value than AutoMOPSOd and AutoMOPSOre with statistical confidence, although the differences are hardly noticeable in the plots.
We include in Fig. 3 the fronts obtained by the six MOPSO algorithms for problem DTLZ1, where we observe that only SMPSO and AutoMOPSOd are able to obtain fronts converging to the reference Pareto front on this problem. As commented above, SMPSO and AutoMOPSOd adopt a velocity update scheme based on velocity constriction, so this feature appears necessary to effectively solve some multi-modal problems.
4.4 Ablation analysis
We now analyze in more detail AutoMOPSOz, which was the best overall configuration in the comparison shown above. For this purpose, we perform an ablation analysis (Fawcett & Hoos, 2016) as implemented by the irace package (López-Ibáñez et al., 2016). The analysis starts from a source algorithmic configuration and generates all possible configurations that change just one parameter to match its value in the target configuration.
Table 4 Median and interquartile range (IQR) of the results of the I𝜖+ quality indicator

Problem NSGAII SMPSO OMOPSO AutoMOPSOd AutoMOPSOw AutoMOPSOre AutoMOPSOz
ZDT1 2.58e−02(5.14e−03)− 1.26e−02(4.62e−02)− 1.08e−02(7.67e−03)− 5.56e−03(1.85e−04) − 3.69e−02(1.60e−01)− 3.77e−02(1.52e−01)− 5.22e−03(1.22e−04)
ZDT2 8.50e−02(7.28e−01)− 7.55e−03(2.36e−03)− 1.18e−02(6.52e−03)− 6.51e−01(9.94e−01)− 1.62e+00(6.17e−01)− 6.08e−03(3.33e−01) − 5.27e−03(2.67e−04)
ZDT3 2.19e−02(3.47e−03)− 1.39e−01(1.27e−01)− 7.00e−02(1.26e−01)− 4.18e−03(6.66e−04) − 1.39e−01(2.04e−01)− 2.13e−01(1.16e−01)− 2.92e−03(2.23e−04)
ZDT4 8.05e−01(3.14e−01)+ 9.04e−03(1.84e−03) + 8.90e+00(5.56e+00)+ 6.24e−03(6.41e−04) + 9.14e+00(5.58e+00)+ 1.17e+01(5.85e+00)+ 1.97e+01(1.19e+01)
ZDT6 3.61e−01(5.77e−02)− 7.08e−03(2.99e−03)− 6.50e−03(1.24e−03)− 6.39e−03(3.03e−04)− 5.87e−03(5.30e−04)− 5.82e−03(4.65e−04) ≈ 5.61e−03(2.78e−04)
DTLZ1 8.11e−01(9.22e−01)+ 8.77e−03(6.26e−03) + 3.24e+01(2.25e+01)+ 8.06e−03(4.66e−03) + 2.54e+01(3.17e+01)+ 3.80e+01(1.53e+01)+ 5.46e+01(2.22e+01)
DTLZ2 1.26e−02(2.62e−03)− 6.56e−03(6.49e−04)+ 6.41e−03(5.94e−04) + 4.99e−03(3.41e−04) + 7.60e−03(4.41e−03)≈ 6.44e−03(1.67e−03)+ 7.50e−03(1.63e−03)
DTLZ3 1.14e+01(6.69e+00)+ 7.07e−01(6.98e−01) + 8.17e+01(4.97e+01)+ 2.10e−02(3.35e−02) + 6.24e+01(6.06e+01)+ 9.42e+01(6.48e+01)+ 1.69e+02(2.00e+01)
DTLZ4 1.20e−02(3.51e−03)− 6.86e−03(6.72e−04) + 7.51e−03(9.06e−04)≈ 4.96e−03(2.11e−04) + 1.56e−02(9.89e−01)− 7.94e−03(4.44e−03)≈ 8.77e−03(4.64e−03)
DTLZ5 1.19e−02(4.23e−03)− 5.83e−03(7.83e−04) ≈ 5.92e−03(4.08e−04)≈ 4.59e−03(3.34e−04) + 9.11e−03(3.92e−03)− 6.35e−03(1.35e−03)≈ 6.80e−03(1.49e−03)
DTLZ6 1.25e+00(1.15e−01)− 9.80e−03(8.27e−01)− 6.30e−03(8.24e−01)− 4.64e−03(4.95e−04)− 2.00e+00(2.00e+00)− 4.57e−03(3.36e−04) ≈ 4.49e−03(2.33e−04)
DTLZ7 1.61e−02(2.19e−03)− 6.51e−03(1.76e−03)− 8.66e−03(7.41e−01)− 4.37e−03(7.05e−04) − 1.24e−02(7.45e−01)− 6.14e−03(7.43e−01)− 3.96e−03(1.84e−04)
WFG1 6.64e−01(8.93e−02)− 4.67e−01(6.03e−03)− 4.59e−01(8.65e−03)− 4.65e−01(9.83e−03)− 4.52e−01(3.81e−02)− 4.42e−01(2.59e−02) − 3.65e−01(6.04e−02)
WFG2 1.82e−01(1.73e−01)− 1.75e−02(5.30e−03)− 6.01e−03(1.70e−03)− 1.31e−02(5.69e−03)− 2.83e−03(5.96e−04) + 5.28e−03(4.40e−03)− 3.88e−03(2.00e−03)
WFG3 1.50e−02(2.50e−03)− 1.14e−02(2.32e−03)− 6.94e−03(6.07e−04)− 8.28e−03(7.72e−04)− 5.53e−03(2.34e−04) + 5.92e−03(3.90e−04)− 5.73e−03(2.88e−04)
WFG4 1.22e−02(1.69e−03) + 2.84e−02(5.04e−03)− 2.24e−02(3.15e−03)− 2.73e−02(3.59e−03)− 1.58e−02(3.67e−03) + 1.84e−02(3.06e−03)≈ 1.96e−02(3.68e−03)
WFG5 3.34e−02(2.58e−03)− 2.82e−02(1.49e−03) ≈ 2.79e−02(3.56e−04) + 2.84e−02(4.74e−04)≈ 2.83e−02(3.74e−04)≈ 2.84e−02(4.57e−04)≈ 2.83e−02(2.82e−04)
WFG6 1.73e−02(6.32e−03)− 9.52e−03(1.32e−03)− 6.07e−03(5.12e−04)− 7.51e−03(5.02e−03)− 4.57e−03(4.08e−04) ≈ 4.82e−03(4.35e−04)− 4.68e−03(3.46e−04)
WFG7 1.30e−02(2.53e−03)− 9.32e−03(7.26e−04)− 6.29e−03(3.09e−04)− 6.45e−03(4.77e−04)− 4.71e−03(9.70e−05) + 4.91e−03(2.72e−04)≈ 4.86e−03(2.41e−04)
WFG8 2.45e−01(7.53e−02)≈ 2.06e−01(4.26e−02) ≈ 2.46e−01(2.07e−03)≈ 1.91e−01(3.01e−02) + 2.46e−01(4.07e−03)≈ 2.45e−01(2.04e−03)≈ 2.46e−01(5.26e−02)
WFG9 1.63e−02(3.62e−03)− 1.29e−02(1.82e−03)− 1.09e−02(9.52e−04)− 1.17e−02(1.53e−03)− 8.26e−03(1.54e−03) + 9.43e−03(1.54e−03) ≈ 9.68e−03(2.98e−03)
RE21 1.45e−02(3.29e−03)− 6.27e−03(4.49e−04)− 6.00e−03(2.39e−04)− 6.07e−03(3.51e−04)− 5.55e−03(1.96e−04)≈ 5.38e−03(2.64e−04) ≈ 5.50e−03(1.90e−04)
RE22 1.58e−02(2.70e−03)− 7.89e−03(8.64e−04)− 7.00e−03(5.04e−04)− 8.68e−03(1.56e−03)− 6.60e−03(5.22e−04)− 6.16e−03(7.58e−04) + 6.28e−03(4.23e−04)
Bold and italic values highlight, respectively, the best and second-best indicator values. The algorithm in the last column is the reference algorithm, and the symbols +, −, and ≈ indicate that the differences with the reference algorithm are significantly better, significantly worse, or not significant according to the Wilcoxon rank sum test (confidence level: 95%)
Table 5 Median and interquartile range (IQR) of the results of the IHV quality indicator

Problem NSGAII SMPSO OMOPSO AutoMOPSOd AutoMOPSOw AutoMOPSOre AutoMOPSOz
ZDT1 6.38e−01(5.97e−03)− 6.57e−01(4.88e−02)− 6.56e−01(1.04e−02)− 6.62e−01(1.74e−05) − 6.22e−01(2.07e−01)− 6.14e−01(2.05e−01)− 6.62e−01(9.64e−06)
ZDT2 2.92e−01(1.58e−01)− 3.28e−01(1.74e−03)− 3.22e−01(6.82e−03)− 4.21e−02(3.29e−01)− 0.00e+00(0.00e+00)− 3.29e−01(9.49e−02) − 3.29e−01(8.78e−06)
ZDT3 4.97e−01(2.82e−03)− 4.19e−01(1.74e−01)− 4.68e−01(1.23e−01)− 5.16e−01(2.70e−04) − 3.91e−01(2.24e−01)− 3.17e−01(1.24e−01)− 5.16e−01(7.38e−06)
ZDT4 1.39e−01(2.17e−01)+ 6.60e−01(8.68e−04) + 0.00e+00(0.00e+00)≈ 6.62e−01(9.03e−05) + 0.00e+00(0.00e+00)≈ 0.00e+00(0.00e+00)≈ 0.00e+00(0.00e+00)
ZDT6 1.63e−01(3.33e−02)− 4.01e−01(4.09e−04)− 4.01e−01(9.29e−05)− 4.01e−01(4.86e−05)− 4.01e−01(6.74e−05)− 4.01e−01(1.70e−05) ≈ 4.01e−01(6.75e−06)
DTLZ1 0.00e+00(1.38e−01)+ 4.93e−01(2.36e−03) + 0.00e+00(0.00e+00)≈ 4.95e−01(7.47e−04) + 0.00e+00(0.00e+00)≈ 0.00e+00(0.00e+00)≈ 0.00e+00(0.00e+00)
DTLZ2 2.09e−01(3.86e−04)+ 2.10e−01(3.29e−04) + 2.09e−01(4.47e−04)+ 2.11e−01(7.86e−06) + 2.06e−01(6.71e−03)≈ 2.09e−01(2.56e−03)+ 2.07e−01(2.04e−03)
DTLZ3 0.00e+00(0.00e+00)≈ 8.58e−02(1.23e−01) + 0.00e+00(0.00e+00)≈ 2.06e−01(1.50e−02) + 0.00e+00(0.00e+00)≈ 0.00e+00(0.00e+00)≈ 0.00e+00(0.00e+00)
DTLZ4 2.09e−01(6.30e−04)+ 2.10e−01(3.37e−04) + 2.08e−01(1.12e−03)+ 2.11e−01(1.49e−05) + 1.93e−01(2.00e−01)− 2.06e−01(8.00e−03)≈ 2.06e−01(7.56e−03)
DTLZ5 2.10e−01(3.51e−04)≈ 2.11e−01(3.42e−04) + 2.11e−01(5.52e−04)+ 2.13e−01(1.69e−05) + 2.06e−01(7.00e−03)− 2.11e−01(1.94e−03)≈ 2.09e−01(2.83e−03)
DTLZ6 0.00e+00(0.00e+00)− 2.12e−01(2.12e−01)− 2.12e−01(2.12e−01)− 2.13e−01(2.51e−05)− 0.00e+00(0.00e+00)− 2.13e−01(1.33e−05) − 2.13e−01(1.05e−05)
DTLZ7 3.22e−01(3.58e−03)− 3.33e−01(1.05e−03)− 3.30e−01(1.14e−01)− 3.35e−01(6.96e−05) − 3.32e−01(2.77e−01)− 3.34e−01(1.16e−01)− 3.35e−01(6.36e−06)
WFG1 2.45e−01(9.29e−02) + 1.06e−01(3.42e−03)− 1.10e−01(9.60e−03)− 1.07e−01(5.97e−03)− 1.19e−01(3.38e−02)− 1.29e−01(4.77e−02)− 1.90e−01(9.84e−02)
WFG2 5.60e−01(2.36e−03)− 5.54e−01(3.88e−03)− 5.62e−01(5.84e−04)− 5.57e−01(3.15e−03)− 5.64e−01(4.73e−04) + 5.62e−01(1.35e−03)− 5.64e−01(1.62e−03)
WFG3 4.90e−01(1.26e−03)− 4.88e−01(1.31e−03)− 4.93e−01(3.31e−04)− 4.93e−01(6.01e−04)− 4.95e−01(2.25e−05) + 4.95e−01(9.57e−05)− 4.95e−01(7.62e−05)
WFG4 2.16e−01(6.85e−04) + 1.96e−01(1.55e−03)− 2.02e−01(1.93e−03)− 1.96e−01(2.51e−03)− 2.09e−01(4.25e−03) + 2.07e−01(2.63e−03)≈ 2.06e−01(4.13e−03)
WFG5 1.95e−01(2.78e−04)− 1.96e−01(1.07e−04)− 1.96e−01(1.29e−04)− 1.96e−01(5.10e−05)− 1.97e−01(2.32e−05)≈ 1.97e−01(5.65e−05) ≈ 1.97e−01(2.98e−05)
WFG6 1.99e−01(8.30e−03)− 2.05e−01(1.33e−03)− 2.09e−01(2.73e−04)− 2.08e−01(1.80e−03)− 2.11e−01(8.06e−05) + 2.11e−01(3.04e−04)− 2.11e−01(8.62e−05)
WFG7 2.08e−01(5.78e−04)− 2.06e−01(1.14e−03)− 2.10e−01(3.00e−04)− 2.10e−01(4.97e−04)− 2.11e−01(1.36e−05) + 2.11e−01(3.65e−05)− 2.11e−01(3.12e−05)
WFG8 1.42e−01(1.64e−03)− 1.42e−01(3.36e−03)− 1.44e−01(1.35e−03)− 1.43e−01(2.41e−03)− 1.46e−01(2.13e−03)− 1.49e−01(1.28e−03) + 1.48e−01(1.58e−03)
WFG9 2.35e−01(2.85e−03)− 2.33e−01(6.61e−04)− 2.35e−01(5.43e−04)− 2.35e−01(5.16e−04)− 2.40e−01(2.00e−03) + 2.39e−01(1.13e−03) ≈ 2.38e−01(2.23e−03)
RE21 6.71e−01(3.74e−04)− 6.74e−01(1.05e−04)− 6.74e−01(1.17e−04)− 6.74e−01(3.69e−05)− 6.74e−01(1.99e−05)− 6.75e−01(8.80e−06) + 6.75e−01(3.60e−05)
RE22 5.44e−01(1.53e−03)− 5.47e−01(2.36e−04)− 5.48e−01(2.01e−04)− 5.48e−01(9.57e−05)− 5.49e−01(7.80e−05)≈ 5.49e−01(5.93e−05) + 5.49e−01(7.14e−05)
RE23 9.52e−01(1.27e−04)+ 9.53e−01(9.25e−05) + 9.53e−01(6.67e−05) + 9.52e−01(6.48e−04)+ 9.47e−01(3.85e−03)− 9.48e−01(3.74e−03)− 9.50e−01(1.58e−03)
Bold and italic values highlight, respectively, the best and second-best indicator values. The algorithm in the last column is the reference algorithm, and the symbols +, −, and ≈ indicate that the differences with the reference algorithm are significantly better, significantly worse, or not significant according to the Wilcoxon rank sum test (confidence level: 95%)
Table 6 Friedman's ranking with regard to I𝜖+. The symbol * indicates the control algorithm and the left column contains the overall ranking positions with regard to I𝜖+.

Table 7 Friedman's ranking with regard to IHV. The symbol * indicates the control algorithm and the left column contains the overall ranking positions with regard to IHV.
When the changed parameter is conditional on other parameters, all the parameters
required to fulfill the condition are changed simultaneously. All these configurations are
evaluated on a number of problem instances and the best one is kept. The next step
repeats the process but starting from the best configuration selected in the previous step.
The process stops when all parameters have the same value as in the target configura-
tion. The parameter changed at each step, together with the impact of that change on solution quality, measures the contribution of each individual parameter to the differences between the source and target algorithm designs.
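Schematically, the procedure just described can be sketched as follows (illustrative Java, assuming source and target share the same parameter names; the actual implementation is the one shipped with the irace package):

import java.util.HashMap;
import java.util.Map;

final class AblationSketch {
  interface Evaluator { double rank(Map<String, String> config); }  // lower rank = better

  static Map<String, String> ablate(Map<String, String> source,
                                    Map<String, String> target,
                                    Evaluator evaluator) {
    Map<String, String> current = new HashMap<>(source);
    while (!current.equals(target)) {
      Map<String, String> bestNext = null;
      double bestRank = Double.POSITIVE_INFINITY;
      // Try flipping each parameter that still differs to its value in the target.
      for (String param : target.keySet()) {
        if (!target.get(param).equals(current.get(param))) {
          Map<String, String> candidate = new HashMap<>(current);
          candidate.put(param, target.get(param));
          // (In the real procedure, parameters required to satisfy a condition flip together.)
          double rank = evaluator.rank(candidate);  // evaluated on a number of problem instances
          if (rank < bestRank) {
            bestRank = rank;
            bestNext = candidate;
          }
        }
      }
      current = bestNext;  // keep the best single-parameter change and repeat
    }
    return current;
  }
}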
We performed the ablation analysis using SMPSO as the source and AutoMOPSOz as the target (see Table 3), adopting two different sets of problem instances. The analysis on the left side of Fig. 4 evaluates every configuration on all the ZDT problems (five repetitions per instance). Hypervolume values are transformed into ranks, and lower rank values indicate better quality. Starting from the source (SMPSO), the most significant changes occur in the first six steps, when changing the parameters leaderArchive, velocityChangeLowerLimit, swarmSize, selectionTournamentSize, velocityChangeUpperLimit, and velocityUpdate. From the analysis of other AutoMOPSO configurations, the change of leaderArchive from crowding distance to hypervolume appears to be a critical improvement to the design of MOPSOs, as well as the use of a larger tournament size and a smaller swarm size than SMPSO.
The right plot of Fig. 4 repeats the analysis using all problem families (ZDT,
DTLZ, WFG, and RE, with five repetitions per instance), which explains the higher
variance of the rank values. In this case, the most significant parameter changes
Fig. 2 Pareto front approximations obtained by the MOPSO algorithms corresponding to the median of the IHV for problem ZDT3 (each panel plots the reference front and the obtained front in the F1–F2 objective space)
5 Discussion
Having carried out the experiments, this section directly addresses the research questions posed at the beginning of this study, which are answered as follows:
Fig. 3 Pareto front approximations obtained by the MOPSO algorithms corresponding to the median of the IHV for problem DTLZ1 (each panel plots the reference front and the obtained front in the F1–F2 objective space)
RQ1. Is it possible to find designs of MOPSO algorithms by automatic design that outperform state-of-the-art algorithms such as SMPSO and OMOPSO? Although this question was already answered positively before (Doblas et al., 2022), here we confirm it by considering additional problems and an extended design space.
RQ2. Is the auto-design process able to obtain a configuration yielding accurate results when a family of realistic problems is added to the study? The new RE problem family, which comprises real-world problems, has been added to the experimental study. For the sake of simplicity, we have restricted ourselves to its bi-objective problems, yet these exhibit structural features of convexity, disconnection, and mixed geometry. The observations made on these problems are similar to those made on the synthetic benchmarks, which confirms our previous study.
RQ3. What are the design parameters that have the strongest effect on the performance of MOPSO algorithms, given the design space considered here? An ablation analysis of AutoMOPSOz shows that the choices of the leader archive, the swarm size, and the tournament size are the most crucial parameters.
RQ4. Can a configuration obtained for a given test suite avoid overfitting and generalize to many other problems? The comparative study shows that when AutoMOPSO is specifically trained for a given problem family, it obtains the best performance on that family. In the scope of a general statistical comparison, the AutoMOPSO trained with the ZDT test suite obtains a salient performance across all the problem families, that is, it shows a higher generalization capability than the remaining variants. Nevertheless, the ablation analysis also shows that some design choices of AutoMOPSOz work well for the ZDT family but harm its performance on other families, which indicates that (unsurprisingly) ZDT is not representative of other problem families and that (surprisingly) even better MOPSO designs are possible.

Fig. 4 Ablation analysis of AutoMOPSOz on the ZDT family (left) and on all problem families (right). The source is SMPSO
6 Conclusions
This paper proposes AutoMOPSO, an algorithmic template for the design of MOPSOs.
State-of-the-art MOPSOs, such as SMPSO and OMOPSO, can be instantiated from the
proposed AutoMOPSO template. Besides key numerical parameters, such as the swarm
size, the categorical design choices available in AutoMOPSO give rise to thousands of
potential MOPSO designs.
We use an automatic configuration tool (irace) to search the design space of
AutoMOPSO given a problem family. In this way, we generate four different AutoMOPSO
designs. While each AutoMOPSO design performs best on the problem family that it was
designed for, not all of them perform well on the other problem families. When consider-
ing all problem families together, the AutoMOPSO design obtained from the ZDT family
(AutoMOPSO z ) significantly outperforms all other AutoMOPSOs as well as SMPSO and
OMOPSO.
The proposed AutoMOPSO was implemented in the jMetal framework and is publicly
available for further study and extension by researchers, as well as for its application to
other problem scenarios. Further extensions of AutoMOPSO with components taken from
other MOPSOs from the literature as well as novel ones would likely improve the results
reported here.
We believe that future MOPSO proposals should integrate their novel components
into AutoMOPSO, instead of analyzing just a single “novel” MOPSO design, and let
the automatic design approach decide how to combine and tune those components for
various problem scenarios (Stützle & López-Ibáñez, 2019). In this way, the automatic
design process can evaluate thousands of potential designs and an ablation analysis can
show the actual impact of any novel components, as illustrated in this work. We argue
that our proposed approach would lead to faster research progress, less duplication of
efforts and a clearer picture of the key algorithmic components of MOPSOs. Applying
AutoMOPSO to real-world problems and analyzing its performance on many-objective
problems are also lines of further research.
Appendix A

We include in this appendix the boxplots of the values of the I𝜖+ and IHV quality indicators in Figs. 5 and 6, respectively.
Appendix B

This appendix includes the tables obtained when applying the Kolmogorov–Smirnov statistical test to the results of the I𝜖+ (Table 8) and IHV (Table 9) quality indicators. The approach we have followed is to apply the test to all the algorithms with regard to the AutoMOPSOz variant, which is used as the reference algorithm. The resulting tables report the p values, and the cells where these values are less than 0.05 are highlighted in italics.
Focusing on the I𝜖+ indicator, we observe from Table 8 that the p values allow rejecting the null hypothesis in most of the problems for the algorithms NSGA-II, SMPSO, OMOPSO, and AutoMOPSOd. For the AutoMOPSOw and AutoMOPSOre variants, there are 8 and 13 of the 25 problems, respectively, where the test indicates that the data distributions are the same. These results are consistent with the boxplots of Fig. 5, as the cells containing p values ≥ 0.05 correspond to boxes that present a high degree of overlap. For example, if we focus on problem RE25, the p values of AutoMOPSOw and AutoMOPSOre are, respectively, 7.10e−01 and 1.58e−14, and the boxplot corresponding to that problem in Fig. 5 shows an overlap between the boxes of AutoMOPSOz and AutoMOPSOw, while this does not happen in the case of AutoMOPSOre.
The results of the test for the IHV indicator in Table 9 show that the number of cases where the distributions are similar is reduced with respect to the I𝜖+ values, and they are again consistent with the boxplots included in Fig. 6.
The question of the influence of the problems used as a training set is left open by the experimentation carried out in the paper. To investigate this issue further, we have conducted an additional experiment using a different approach to define the training set. Concretely, we have grouped the benchmark families into sets of three, leaving the fourth family out for validation; as a result, we have four new AutoMOPSO variants: AutoMOPSOzdw, AutoMOPSOzdr, AutoMOPSOzwr, and AutoMOPSOdwr.
We have run irace to find the configurations of these variants, which are reported in Table 10. We also include the components of AutoMOPSOz, the version that showed the best performance, to facilitate the comparison. The results of the IHV quality indicator and Friedman's ranking are shown in Tables 11 and 12, respectively. From the tables, we observe that AutoMOPSOzwr reaches the first position of the ranking together with AutoMOPSOz, thus confirming that using ZDT for training leads to a MOPSO that provides the best overall performance in the context of the experimentation carried out.
From the configurations reported in Table 10, we see that all the variants use the external archive with the hypervolume-based density estimator and the tournament approach for global best selection. If we compare the parameters of AutoMOPSOz and AutoMOPSOzwr, they only share those components, as well as the default scheme for velocity update.
In Tables 8 and 9, the algorithm in the last column is the reference algorithm, and each cell contains the p value obtained when applying the test against the reference algorithm. Values in italics highlight p values less than 0.05 (i.e., the null hypothesis that the two distributions are identical is rejected).
134 Swarm Intelligence (2024) 18:105–139
Table 10  Settings of the AutoMOPSO algorithms when using three benchmark families for training

Parameter                  AutoMOPSOz  AutoMOPSOzdw  AutoMOPSOzdr  AutoMOPSOzwr  AutoMOPSOdwr
swarmSize                  39          85            40            65            17
leaderArchive              HV          HV            HV            HV            HV
swarmInitialization        Random      LHS           SS            LHS           SS
velocityInitialization     SPSO2007    SPSO2007      Default       SPSO2007      SPSO2011
mutation                   NonUniform  Polynomial    Uniform       Uniform       Uniform
mutationProbabilityFactor  1.65        0.8           1.71          0.35          0.75
mutationRepairStrategy     Round       Random        Round         Round         Round
uniformMutationPert        –           –             0.43          0.74          0.18
nonUniformMutationPert     0.1002      –             –             –             –
polynomialMut.Dist.Index   –           103.57        –             –             –
linkedPol.Mut.Dist.Index   –           –             –             151.3372      –
mutationFrequency          10          5             10            4             9
inertiaWeightStrategy      Constant    Random        LinearInc     Constant      Random
weight                     0.12        –             –             0.28          –
weightMin                  –           0.18          0.36          –             0.20
weightMax                  –           0.76          0.63          –             0.59
velocityUpdate             Default     Constrained   Constrained   Default       Constrained
c1Min                      1.89        1.33          1.04          1.55          1.26
c1Max                      2.25        2.02          2.74          2.62          2.16
c2Min                      1.50        1.38          1.64          1.15          1.01
c2Max                      2.90        2.24          2.43          2.19          2.06
globalBestSelection        Tournament  Tournament    Tournament    Tournament    Tournament
tournamentSize             9           8             8             2             3
velocityChangeLowerLimit   0.02        0.67          −0.52         0.32          −0.21
velocityChangeUpperLimit   −0.90       −0.95         −0.79         −0.08         0.94

(CD: crowding distance, HV: hypervolume contribution, LHS: Latin hypercube sampling, SS: scatter search). The subscripts zdw, zdr, zwr, and dwr in AutoMOPSO stand, respectively, for the designs generated using combinations of the ZDT (z), DTLZ (d), WFG (w), and RE (r) problems. A value of '–' means that the parameter is disabled.
Table 11  Median and interquartile range (IQR) of the results of the IHV quality indicator in the complementary study

Problem  AutoMOPSOzdw        AutoMOPSOzdr        AutoMOPSOzwr        AutoMOPSOdwr        AutoMOPSOz
DTLZ3    0.00e+00_8.6e−02    𝟐.𝟏𝟎𝐞−𝟎𝟏_𝟏.𝟐𝐞−𝟎𝟏    0.00e+00_0.0e+00    0.00e+00_4.1e−02    0.00e+00_0.0e+00
DTLZ4    2.10e−01_8.4e−04    2.11e−01_2.0e−04    2.11e−01_1.9e−04    𝟐.𝟏𝟏𝐞−𝟎𝟏_𝟏.𝟐𝐞−𝟎𝟒    2.06e−01_7.4e−03
DTLZ5    2.12e−01_2.2e−04    𝟐.𝟏𝟐𝐞−𝟎𝟏_𝟖.𝟎𝐞−𝟎𝟓    2.12e−01_1.7e−04    2.12e−01_9.2e−05    2.09e−01_4.1e−03
DTLZ6    2.13e−01_1.3e−05    0.00e+00_1.7e−01    𝟐.𝟏𝟑𝐞−𝟎𝟏_𝟔.𝟕𝐞−𝟎𝟔    2.13e−01_1.2e−05    2.13e−01_9.3e−06
DTLZ7    3.35e−01_9.0e−06    3.35e−01_2.0e−05    3.35e−01_3.7e−04    3.34e−01_5.8e−04    𝟑.𝟑𝟓𝐞−𝟎𝟏_𝟓.𝟒𝐞−𝟎𝟔
WFG1     1.13e−01_4.4e−03    1.05e−01_6.5e−03    1.38e−01_5.7e−02    1.16e−01_1.6e−02    𝟏.𝟑𝟗𝐞−𝟎𝟏_𝟗.𝟑𝐞−𝟎𝟐
WFG2     5.63e−01_7.4e−04    5.60e−01_1.9e−03    5.63e−01_1.3e−03    5.63e−01_4.5e−04    𝟓.𝟔𝟒𝐞−𝟎𝟏_𝟕.𝟒𝐞−𝟎𝟒
WFG3     4.95e−01_1.3e−04    4.93e−01_4.4e−04    4.95e−01_4.6e−05    4.95e−01_8.5e−05    𝟒.𝟗𝟓𝐞−𝟎𝟏_𝟔.𝟑𝐞−𝟎𝟓
WFG4     2.01e−01_2.3e−03    1.96e−01_3.1e−03    2.04e−01_2.5e−03    2.03e−01_1.8e−03    𝟐.𝟎𝟗𝐞−𝟎𝟏_𝟔.𝟔𝐞−𝟎𝟑
WFG5     1.97e−01_4.9e−05    1.97e−01_4.5e−05    𝟏.𝟗𝟕𝐞−𝟎𝟏_𝟔.𝟏𝐞−𝟎𝟓    1.97e−01_3.5e−05    1.97e−01_4.8e−05
WFG6     2.10e−01_2.5e−04    2.09e−01_8.6e−04    𝟐.𝟏𝟏𝐞−𝟎𝟏_𝟏.𝟏𝐞−𝟎𝟒    2.11e−01_9.4e−05    2.11e−01_1.1e−04
WFG7     2.11e−01_8.0e−05    2.10e−01_3.9e−04    2.11e−01_3.1e−05    2.11e−01_5.6e−05    𝟐.𝟏𝟏𝐞−𝟎𝟏_𝟓.𝟗𝐞−𝟎𝟓
WFG8     1.43e−01_2.4e−03    1.36e−01_3.0e−03    𝟏.𝟒𝟖𝐞−𝟎𝟏_𝟏.𝟔𝐞−𝟎𝟑    1.45e−01_1.8e−03    1.47e−01_1.5e−03
WFG9     2.37e−01_5.6e−04    2.35e−01_1.2e−03    2.38e−01_8.8e−04    2.37e−01_7.9e−04    𝟐.𝟑𝟗𝐞−𝟎𝟏_𝟐.𝟏𝐞−𝟎𝟑
RE21     6.74e−01_2.8e−05    6.74e−01_4.6e−05    𝟔.𝟕𝟓𝐞−𝟎𝟏_𝟏.𝟏𝐞−𝟎𝟓    6.74e−01_3.2e−05    6.75e−01_1.8e−05
RE22     5.48e−01_1.3e−04    5.48e−01_1.7e−04    𝟓.𝟒𝟗𝐞−𝟎𝟏_𝟔.𝟒𝐞−𝟎𝟓    5.49e−01_4.7e−05    5.49e−01_1.0e−04
RE23     9.51e−01_1.6e−03    𝟗.𝟓𝟏𝐞−𝟎𝟏_𝟏.𝟖𝐞−𝟎𝟑    9.43e−01_5.6e−03    9.46e−01_5.4e−03    9.50e−01_2.5e−03
RE24     9.60e−01_3.1e−06    9.60e−01_4.6e−06    𝟗.𝟔𝟎𝐞−𝟎𝟏_𝟑.𝟑𝐞−𝟎𝟔    9.60e−01_5.9e−06    9.60e−01_4.7e−06
RE25     8.71e−01_1.1e−08    8.71e−01_4.4e−10    𝟖.𝟕𝟏𝐞−𝟎𝟏_𝟓.𝟏𝐞−𝟏𝟏    8.71e−01_6.1e−11    8.71e−01_1.3e−08

Each cell shows the median with the IQR after the underscore. Bold and italic values highlight, respectively, the best and second-best indicator values
Table 12 reports the Friedman ranking of the algorithms in the complementary study. The symbol * indicates the control algorithm, and the left column contains the overall ranking of positions with regard to IHV.
Acknowledgements The authors thank the Supercomputing and Bioinnovation Center (SCBI) of the University of Málaga for their provision of computational resources and technical support.
Author Contributions Conceptualization was done by AJN, MLI, JGN, and CACC; methodology was done by AJN, MLI, JGN, and CACC; software was done by AJN and MLI; validation was done by AJN, MLI, and JGN; analysis was done by AJN, MLI, and JGN; writing—original draft preparation was done by AJN, MLI, and JGN; writing—review and editing were done by AJN, MLI, JGN, and CACC. All authors have read and agreed to the published version of the manuscript.
Funding Funding for open access publishing: Universidad Málaga/CBUA. This work has been partially
funded by the Spanish Ministry of Science and Innovation via Grant PID2020-112540RB-C41 (AEI/
FEDER, UE). Carlos A. Coello Coello gratefully acknowledges support from CONACyT Grant No. 2016-
01-1920 (Investigación en Fronteras de la Ciencia 2016).
Data availability The source code used in this paper is freely available as part of the jMetal project: https://github.com/jMetal/jMetal.
Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
* Antonio J. Nebro
[email protected]
Manuel López-Ibáñez
[email protected]
José García-Nieto
[email protected]
Carlos A. Coello Coello
[email protected]

1 ITIS Software, University of Málaga, Ada Byron Research Building, 29071 Málaga, Málaga, Spain
2 Departamento de Lenguajes y Ciencias de la Computación, University of Málaga, E.T.S. de Ingeniería Informática, 29071 Málaga, Málaga, Spain
3 Alliance Manchester Business School, University of Manchester, Booth Street West, Manchester M15 6PB, UK
4 Evolutionary Computation Group, CINVESTAV-IPN, Av. IPN No. 2508, 07360 Mexico City, Mexico
5 School of Engineering and Sciences, Tecnologico de Monterrey, Eugenio Garza Sada 2501, 64849 Monterrey, NL, Mexico