Dragonfly Algorithm: Theory, Literature Review, and Application in Feature Selection
Majdi Mafarja, Ali Asghar Heidari, Hossam Faris, Seyedali Mirjalili, and Ibrahim Aljarah
M. Mafarja
Department of Computer Science, Faculty of Engineering and Technology, Birzeit University, P.O. Box 14, Birzeit, Palestine
e-mail: [email protected]
A. A. Heidari
School of Surveying and Geospatial Engineering, University of Tehran, Tehran, Iran
e-mail: [email protected]
H. Faris and I. Aljarah
King Abdullah II School for Information Technology, The University of Jordan, Amman, Jordan
e-mail: {hossam.faris,i.aljarah}@ju.edu.jo
S. Mirjalili
Institute of Integrated and Intelligent Systems, Griffith University, Nathan, Brisbane, QLD 4111, Australia
e-mail: [email protected]
1 Introduction
The behaviors of some creatures have been the source of inspiration for many successful optimization algorithms. The main behavior that inspired many researchers to develop new algorithms is the strategy those creatures use to seek food sources. Ant Colony Optimization (ACO) [21, 22] and Artificial Bee Colony (ABC) [49] were originally inspired by the behaviors of ants and bees, respectively, in locating food sources and collecting food. Swarming behavior is another source of inspiration that has been used to propose new optimization algorithms. Particle Swarm Optimization (PSO) [23, 36] is a primary swarm-based optimization algorithm that mimics the swarming behavior of birds.
A key property of all the previously mentioned creatures is that they live in groups, or flocks, called swarms [51]. An individual in such a swarm usually makes decisions based on local information gathered from itself, from interactions with the other swarm members, and from the environment. Such interactions are the main contributor to the improvement of social intelligence in these swarms. Most swarms contain different organisms of the same species (bees, ants, birds, etc.). Through the intelligent collaboration, or swarm intelligence (SI), of all individuals of the swarm, they are able to carry out specific tasks successfully [30, 91].
Nature-inspired algorithms are population-based metaheuristics that manipulate a set of solutions in each generation of the optimization process [37, 38, 42, 51]. Recently, many such algorithms have been proposed and have proved their ability to solve various complex optimization problems, such as global function optimization [11, 39, 40, 41, 43], clustering analysis [3, 7, 9, 86], spam and intrusion detection systems [8, 10, 26, 28], optimizing neural networks [5, 6, 25, 27], link prediction [15], and software effort estimation [31].
As feature selection (FS) is known to be an NP-hard problem [4], metaheuristic algorithms in general, and population-based algorithms in particular, have shown superior performance in solving it.
PSO has been widely used to tackle FS problems. An improved PSO with a local search strategy for FS was proposed in [77]. Another PSO approach with two crossover mechanisms was presented in [19]. The authors of [79] proposed a hybrid FS approach combining particle swarm optimization with the shuffled frog leaping algorithm. An improved ACO was also proposed in [50], and a hybrid FS approach that combines differential evolution (DE) with ABC was proposed in [98].
Grey Wolf Optimizer (GWO) [76] is a recent SI algorithm that mimics the hierarchical organization of grey wolves in nature. GWO has been widely used in FS methods with much success [56]. The Antlion Optimizer (ALO) was proposed in [71] in 2015, and FS is one of the fields where ALO has been applied successfully [67]. The Whale Optimization Algorithm (WOA) [75] is another recently proposed SI algorithm.
2 Feature Selection
With the emergence of high-dimensional data in almost all real-life fields, the knowledge discovery in databases (KDD) process has become very complex [59]. The high dimensionality of datasets usually causes many problems for data mining techniques (e.g., classification), such as over-fitting, which decreases model performance and increases the required computational time. Therefore, reducing data dimensionality has become highly demanded [57]. FS is a preprocessing step that aims to enhance the performance of data mining and machine learning techniques by eliminating redundant and irrelevant features.
In the FS process, two major steps should be carefully considered when designing a new FS method: selection (search) and evaluation [59]. In the selection step, the feature space is searched for a feature subset that contains a minimal number of features yet yields the highest model performance (e.g., classification accuracy when a classification technique is considered). Three different search mechanisms can be employed: complete, random, and heuristic. When dealing with high-dimensional datasets, complete (i.e., brute-force) search strategies are impractical, since $2^N$ feature subsets have to be generated and evaluated for a dataset with $N$ features. Random mechanisms are also impractical for FS methods, since the search is controlled by random factors that may make it, in the worst case, as expensive as a complete search. Heuristic search algorithms are the most suitable mechanisms for FS methods, since the search process is guided by a heuristic that steers it towards near-optimal solutions.
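To see the scale of the problem, the short Python snippet below (illustrative only, not part of the original chapter) prints the number of decimal digits of $2^N$ for feature counts of the order used later in this chapter:

    # Size of the brute-force search space for N features.
    for n in (10, 100, 2308, 15009):
        print(f"N = {n:>5}: 2^N has {len(str(2 ** n))} decimal digits")

Even for the smallest dataset considered later (N = 2308), the number of candidate subsets has roughly 695 decimal digits, which rules out exhaustive evaluation.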
The evaluation criterion is another aspect that should be carefully considered when designing an FS method. In this regard, FS methods can be divided into three main types: filter, wrapper, and embedded methods. In filter approaches [61, 62, 63, 64, 65], feature subsets are evaluated based on the relations between the features themselves and their correlation with the class attribute, without considering any particular classifier [60]; features with high scores are then selected to build the classification model in a subsequent step. In wrapper-based methods [1, 29, 70], each feature subset is evaluated based on the performance of a particular classifier. Embedded methods (e.g., regularization) [17, 55, 59] learn which features enhance the performance of the model (e.g., its accuracy) while the model is being created.
In summary, filter methods are the fastest FS methods, since no external tools need to be employed; however, the resulting model performance is uncertain. Wrappers are slower than filters but usually obtain higher accuracy. Embedded methods fall between filters and wrappers in terms of both model performance and computational time. A minimal sketch contrasting the filter and wrapper strategies is given below.
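The sketch assumes scikit-learn, mutual information as the filter score, and a k-NN classifier inside the wrapper; all three choices are ours for illustration and are not prescribed by the chapter:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=100, n_features=50, n_informative=5,
                               random_state=0)

    # Filter: score features independently of any classifier, keep the top k.
    filt = SelectKBest(mutual_info_classif, k=5).fit(X, y)

    # Wrapper: judge a candidate subset by the accuracy of an actual classifier.
    def wrapper_score(mask: np.ndarray) -> float:
        clf = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(clf, X[:, mask], y, cv=5).mean()

    print("wrapper accuracy of the filter-chosen subset:",
          wrapper_score(filt.get_support()))

Note that the filter step never trains a classifier, while the wrapper score requires one model fit per cross-validation fold for every subset it examines, which is the source of its higher cost.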
3 Dragonfly Algorithm

The Dragonfly Algorithm (DA) [72] models the static and dynamic swarming behaviors of dragonflies (illustrated in Figs. 1 and 2) through five primitive behaviors: separation, alignment, cohesion, attraction towards a food source, and distraction away from an enemy.
• Alignment shows how an agent sets its velocity with respect to the velocity vectors of the other adjacent dragonflies. This concept is modeled by Eq. (2):

$$A_i = \frac{\sum_{j=1}^{N} V_j}{N} \qquad (2)$$

where $V_j$ is the velocity of the j-th neighbouring individual and $N$ is the number of neighbours.
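In NumPy terms, Eq. (2) is simply the mean of the neighbours' velocity vectors (a minimal sketch; the array and function names are ours):

    import numpy as np

    # Alignment (Eq. 2): the i-th dragonfly aligns with the mean velocity
    # of its N neighbouring individuals; V has shape (N, dim).
    def alignment(V: np.ndarray) -> np.ndarray:
        return V.mean(axis=0)

    V = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # three neighbours in 2-D
    print(alignment(V))  # [0.5 0.5]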
Fig. 1: Dynamic swarming behaviors (each geometric object shows a special type of agents).
Fig. 2: Static swarming behaviors.
Attraction towards a food source and distraction away from an enemy are modeled by Eqs. (4) and (5):

$$F_i = F_{loc} - X \qquad (4)$$

$$E_i = E_{loc} + X \qquad (5)$$

where $X$ is the position of the current individual, and $F_{loc}$ and $E_{loc}$ are the positions of the food source and the enemy, respectively.
Algorithm 1 Pseudocode of DA
  Initialize the swarm X_i (i = 1, 2, ..., n)
  Initialize the step vectors ΔX_i (i = 1, 2, ..., n)
  while the end condition is not met do
      Evaluate all dragonflies
      Update the food source (F) and the enemy (E)
      Update the coefficients (i.e., w, s, a, c, f, and e)
      Compute S, A, C, F, and E (based on Eqs. (1)-(5))
      Update the step vectors ΔX_{t+1} by Eq. (6)
      Update the positions X_{t+1} by Eq. (7)
  end while
  Return the best agent
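A compact Python sketch of this loop is given below; it is illustrative, not the authors' implementation. The separation and cohesion terms and the step/position updates follow the formulation of the original DA paper [72], since the statements of Eqs. (1), (3), (6), and (7) are not reproduced above; the neighbourhood is simplified to the whole swarm, the Levy-flight branch of [72] is omitted, and the coefficient schedule is an assumption.

    import numpy as np

    def dragonfly_algorithm(obj, dim, n=30, max_iter=100, lb=-10.0, ub=10.0, seed=0):
        """Minimise obj over [lb, ub]^dim with a simplified Dragonfly Algorithm."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, (n, dim))   # positions of the n dragonflies
        dX = np.zeros((n, dim))             # step vectors (Delta X)
        for t in range(max_iter):
            fit = np.array([obj(x) for x in X])
            food = X[fit.argmin()].copy()   # best agent acts as the food source
            enemy = X[fit.argmax()].copy()  # worst agent acts as the enemy
            # Assumed schedule: inertia w decays linearly; behaviour weights
            # are small random numbers, in the spirit of [72].
            w = 0.9 - t * (0.5 / max_iter)
            s, a, c, fw, ew = 0.1 * rng.random(5)
            for i in range(n):
                S = -np.sum(X[i] - X, axis=0)        # separation (Eq. 1, from [72])
                A = dX.mean(axis=0)                  # alignment (Eq. 2)
                C = X.mean(axis=0) - X[i]            # cohesion (Eq. 3, from [72])
                F = food - X[i]                      # attraction to food (Eq. 4)
                E = enemy + X[i]                     # distraction from enemy (Eq. 5)
                dX[i] = s * S + a * A + c * C + fw * F + ew * E + w * dX[i]  # Eq. (6)
                X[i] = np.clip(X[i] + dX[i], lb, ub)                         # Eq. (7)
        return min(X, key=obj)

    # Example: minimise the sphere function in 5 dimensions.
    print(dragonfly_algorithm(lambda x: float(np.sum(x ** 2)), dim=5))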
4 Literature Review
In this section, previous research on DA and its core findings are reviewed in detail. Since DA was proposed, several works have examined its efficacy or tried to improve it on tasks such as photovoltaic systems [80], extending RFID network lifetime [44], and economic emission dispatch (EED) problems [16]. The authors of [34] used the multi-objective version of DA, along with six other evolutionary methods, to solve the design problem of a nearly zero-energy building (nZEB). In [82], a hybrid DA approach with an extreme learning machine (ELM) was proposed, in which DA was used to optimize the number of nodes and their associated weights in the hidden layer of the ELM. The authors of [78] proposed a DA-based approach to solve the economic load dispatch (ELD) problem with the valve-point effect. DA was used in [20] to estimate the locations of unknown wireless nodes randomly deployed in a designated area. The binary version of DA was used with a multi-class SVM classifier within an FS method [24]. The authors of [46] employed DA to optimize the firing angle and the size of a thyristor-controlled series capacitor (TCSC). In [83], a self-adaptive DA was used to tackle the multilevel segmentation problem.
In 2017, a memory-based hybrid of DA with particle swarm optimization (PSO) was developed for global optimization [53]. Song et al. [87] developed an enhanced DA with elite opposition-based learning (EOL) to tackle global optimization tasks. In [2], DA was used to solve 0-1 knapsack problems. Another application solved using DA is static economic dispatch incorporating solar energy [90]. DA was used as an FS search strategy, in addition to tuning the parameters of an ELM, in [97]. A multi-objective version of DA (MODA) was used as an adaptive engine calibration algorithm to control engine parameters [33]. The authors of [68] proposed an FS method that employs the binary version of DA (BDA) as a search strategy. In [92, 93], DA was used as a tool to optimize SVM parameters. DA has also been applied in the software engineering field: in [89], it was used as an optimization tool to select the most important test cases that satisfy all the requirements.
5 Binary DA (BDA)

In the binary version of DA (BDA), each element of the step vector is mapped to a probability of flipping the corresponding element of the binary position vector, using the V-shaped transfer function in Eq. (8) [74]:
$$T(v_i^d(t)) = \frac{|v_i^d(t)|}{\sqrt{1 + (v_i^d(t))^2}} \qquad (8)$$
The value $T(v_i^d(t))$ obtained from Eq. (8) is then used to convert the d-th element of the position vector to 0 or 1 according to Eq. (9), the standard V-shaped update rule [74]:

$$x_i^d(t+1) = \begin{cases} \neg x_i^d(t), & r < T(v_i^d(t+1)) \\ x_i^d(t), & \text{otherwise} \end{cases} \qquad (9)$$

where $r$ is a uniform random number in [0, 1] and $\neg$ denotes complementing the bit.

Fig. 3: The V-shaped transfer function (flip probability versus step-vector value).

Fig. 4: Solutions are represented as binary vectors, e.g., (1 0 1 1 ... 0 1 0).
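A minimal NumPy sketch of Eqs. (8) and (9) follows; the function and variable names are ours, not the chapter's:

    import numpy as np

    def v_transfer(v: np.ndarray) -> np.ndarray:
        # V-shaped transfer function of Eq. (8)
        return np.abs(v) / np.sqrt(1.0 + v ** 2)

    def binary_update(x: np.ndarray, v: np.ndarray, rng) -> np.ndarray:
        # Eq. (9): each bit flips with probability T(v) for its dimension
        flip = rng.random(x.shape) < v_transfer(v)
        return np.where(flip, 1 - x, x)

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=10)   # a binary position vector
    v = rng.normal(size=10)           # the corresponding step vector
    print(binary_update(x, v, rng))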
Each candidate feature subset is evaluated using the fitness function in Eq. (10):

$$F(X) = \alpha \cdot \gamma(X) + \beta \cdot \left(1 - \frac{|R|}{|N|}\right) \qquad (10)$$

where $\gamma(X)$ represents the classification accuracy obtained using the feature subset X, $|R|$ is the number of selected features, and $|N|$ is the total number of features in the original dataset. The parameters $\alpha$ (a number in the interval [0, 1]) and $\beta = 1 - \alpha$ [66] represent the importance of classification quality and of the reduction rate, respectively.
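The following sketch shows how Eq. (10) could be computed for a binary mask, assuming, as an illustration rather than a statement of the chapter's setup, that γ(X) is the cross-validated accuracy of a k-NN classifier:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    ALPHA, BETA = 0.99, 0.01  # the values used in the experiments below

    def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
        """Eq. (10): alpha * accuracy + beta * (1 - |R|/|N|), to be maximised."""
        if mask.sum() == 0:
            return 0.0                                   # reject empty subsets
        clf = KNeighborsClassifier(n_neighbors=5)        # assumed classifier
        gamma = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()
        reduction = 1.0 - mask.sum() / mask.size         # 1 - |R|/|N|
        return ALPHA * gamma + BETA * reduction

With α = 0.99 and β = 0.01, accuracy dominates the fitness and the reduction term acts mainly as a tie-breaker between subsets of equal accuracy.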
The pseudocode of the BDA-based feature selection technique is shown in Algorithm 2.
To test the proposed approach, nine high-dimensional datasets with low numbers of samples were used. As can be seen from Table 1, the nine datasets have 2-26 distinct categories, 50-308 samples (patients), and 2308-15009 variables (genes). All datasets are accessible from a public source (http://www.gems-system.org/). A train/test model is used in the evaluation.
Table 2: The parameter settings of the algorithms

Parameter                               Value
Population size                         10
Number of iterations                    100
Dimension                               Number of features
Number of runs for each technique       30
α in fitness function                   0.99
β in fitness function                   0.01
a in bGWO                               [2, 0]
Q_min (minimum frequency) in BBA        0
Q_max (maximum frequency) in BBA        2
A (loudness) in BBA                     0.5
r (pulse rate) in BBA                   0.5
G_0 in BGSA                             100
α in BGSA                               20
The same settings were used for all algorithms in order to make a fair comparison. For all algorithms, the average classification accuracy, the average number of selected features, and the average fitness values over the 30 independent runs are reported. Note that the best results in the subsequent tables are highlighted in boldface. Moreover, a Wilcoxon signed-rank test is executed to determine whether there is a significant difference between the reported results, with a 95% significance level (α = 0.05).
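Such a test can be reproduced, for instance, with SciPy; the accuracy values below are illustrative, not the chapter's data:

    from scipy.stats import wilcoxon

    # Paired per-run accuracies of BDA and one competitor (made-up numbers).
    bda   = [0.91, 0.93, 0.90, 0.92, 0.94, 0.91, 0.93, 0.92, 0.90, 0.95]
    other = [0.88, 0.90, 0.87, 0.91, 0.89, 0.88, 0.90, 0.89, 0.86, 0.92]

    stat, p = wilcoxon(bda, other)
    print(f"p = {p:.3g}; significant at alpha = 0.05: {p < 0.05}")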
Table 3 shows the average classification accuracy over the 30 runs of the different algorithms. As can be seen from Table 3, the proposed BDA shows the best performance compared to all other approaches. BGSA comes in second place by obtaining the best results on three datasets, and bGWO outperforms the other approaches on only one dataset. Comparing BDA with BGSA, the second-best approach, BDA records the best results on six datasets, BGSA does so on two datasets, and both approaches obtain the same result on the Brain Tumor1 dataset. According to Table 4, which presents the p-values of BDA versus the other approaches, there is a significant difference between the performance of the proposed approach and that of its peers. These results verify that the proposed BDA maintains a more stable balance between diversification and intensification, which leads to higher classification rates than its peers achieve.
Table 4: p-values between BDA and the other approaches for the classification accuracy

Dataset           BDA vs.
                  bGWO       BGSA       BBA        BGA        BPSO
11 Tumors         1.61E-05   1.86E-11   1.20E-11   1.27E-11   1.19E-11
14 Tumors         2.54E-11   1.53E-11   2.64E-11   2.52E-11   2.84E-11
Brain Tumor1      1.69E-14   1.69E-14   1.70E-01   7.24E-13   9.13E-13
Brain Tumor2      1.03E-10   1.43E-06   8.50E-12   2.53E-11   8.71E-12
DLBCL             4.16E-14   NaN        2.98E-04   1.75E-02   9.21E-13
Leukemia1         1.17E-13   1.57E-13   3.37E-13   9.38E-13   1.02E-12
Leukemia2         1.69E-14   1.69E-14   2.14E-02   7.65E-13   5.57E-13
Prostate Tumor    1.19E-04   2.98E-09   4.36E-03   9.49E-12   2.88E-12
SRBCT             1.69E-14   1.17E-13   3.48E-11   8.41E-13   8.09E-13
Inspecting the results in Table 5, it can be seen that BDA selects the smallest number of features on 55% of the datasets. It is followed by BBA, which comes in second place by outperforming the other approaches on 45% of the datasets; no other approach is able to compete with these two. The p-values in Table 6 also show that BDA significantly outperforms bGWO, BGSA, BGA, and BPSO on all datasets, while it significantly outperforms BBA on five datasets out of nine.
Table 6: p-values between BDA and the other approaches for the number of selected features

Dataset           BDA vs.
                  bGWO       BGSA       BBA        BGA        BPSO
11 Tumors         3.02E-11   3.02E-11   3.52E-07   3.01E-11   3.02E-11
14 Tumors         3.46E-10   1.46E-10   6.68E-11   2.22E-09   4.36E-09
Brain Tumor1      3.02E-11   3.00E-11   3.37E-01   3.02E-11   3.01E-11
Brain Tumor2      3.01E-11   3.01E-11   1.02E-01   3.01E-11   3.01E-11
DLBCL             3.01E-11   3.00E-11   2.60E-08   3.01E-11   3.02E-11
Leukemia1         3.00E-11   3.01E-11   7.20E-05   3.01E-11   3.01E-11
Leukemia2         3.01E-11   3.01E-11   8.36E-01   3.02E-11   3.01E-11
Prostate Tumor    3.02E-11   3.01E-11   2.87E-02   3.01E-11   3.02E-11
SRBCT             3.00E-11   3.00E-11   6.41E-01   2.99E-11   3.00E-11
Table 8: p-values between BDA and the other approaches for the fitness values

Dataset           BDA vs.
                  bGWO       BGSA       BBA        BGA        BPSO
11 Tumors         7.67E-09   2.99E-11   3.01E-11   2.91E-11   2.94E-11
14 Tumors         3.01E-11   3.00E-11   3.01E-11   2.98E-11   2.96E-11
Brain Tumor1      3.01E-11   3.00E-11   1.59E-07   2.82E-11   2.54E-11
Brain Tumor2      2.94E-11   2.95E-11   3.01E-11   2.83E-11   2.68E-11
DLBCL             2.92E-11   2.95E-11   1.20E-01   2.57E-11   2.77E-11
Leukemia1         2.95E-11   3.01E-11   3.00E-11   2.76E-11   2.69E-11
Leukemia2         2.91E-11   2.91E-11   1.67E-06   2.80E-11   2.63E-11
Prostate Tumor    1.28E-09   9.79E-05   4.97E-11   2.75E-11   2.80E-11
SRBCT             2.98E-11   2.96E-11   2.77E-05   2.79E-11   2.80E-11
Fig. 5: Convergence curves (fitness value versus iteration number, over 100 iterations) of BGWO, BGSA, BBA, and BDA for all datasets: (a) 11 Tumors, (b) 14 Tumors, (c) Brain Tumor1, (d) Brain Tumor2, (e) DLBCL, (f) Leukemia1, (g) Leukemia2, (h) Prostate Tumor, (i) SRBCT.
References
[1] An efficient binary salp swarm algorithm with crossover scheme for feature selection problems. Knowledge-Based Systems 154, 43–67 (2018)
[2] Abdel-Basset, M., Luo, Q., Miao, F., Zhou, Y.: Solving 0-1 knapsack
problems by binary dragonfly algorithm. In: International Conference
on Intelligent Computing. pp. 491–502. Springer (2017)
[3] Al-Madi, N., Aljarah, I., Ludwig, S.: Parallel glowworm swarm opti-
mization clustering algorithm based on mapreduce. In: IEEE Sympo-
sium Series on Computational Intelligence (IEEE SSCI 2014). IEEE
Xplore Digital Library (2014)
[4] Aljarah, I., Al-Zoubi, A.M., Faris, H., Hassonah, M.A., Mirjalili, S., Saadeh, H.: Simultaneous feature selection and support vector machine optimization using the grasshopper optimization algorithm. Cognitive Computation pp. 1–18 (2018)
[5] Aljarah, I., Faris, H., Mirjalili, S.: Optimizing connection weights in neu-
ral networks using the whale optimization algorithm. Soft Computing
22(1), 1–15 (2018)
[6] Aljarah, I., Faris, H., Mirjalili, S., Al-Madi, N.: Training radial basis
function networks using biogeography-based optimizer. Neural Com-
puting and Applications 29(7), 529–553 (2018)
[7] Aljarah, I., Ludwig, S.A.: Parallel particle swarm optimization cluster-
ing algorithm based on mapreduce methodology. In: In Proceedings of
the Fourth World Congress on Nature and Biologically Inspired Com-
puting (IEEE NaBIC12). IEEE Explore (2012)
[8] Aljarah, I., Ludwig, S.A.: A mapreduce based glowworm swarm op-
timization approach for multimodal functions. In: IEEE Symposium
Series on Computational Intelligence, IEEE SSCI 2013. IEEE Xplore
(2013)
[9] Aljarah, I., Ludwig, S.A.: A new clustering approach based on glowworm
swarm optimization. In: In Proceedings of 2013 IEEE Congress on Evo-
lutionary Computation Conference (IEEE CEC13), Cancun, Mexico.
IEEE Xplore (2013)
[10] Aljarah, I., Ludwig, S.A.: Towards a scalable intrusion detection system
based on parallel pso clustering using mapreduce. In: In Proceedings of
Genetic and Evolutionary Computation Conference (ACM GECCO13)
Amsterdam, July 2013. ACM (2013)
[11] Aljarah, I., Ludwig, S.A.: A scalable mapreduce-enabled glowworm swarm optimization approach for high dimensional multimodal functions.
[49] Karaboga, D., Basturk, B.: A powerful and efficient algorithm for nu-
merical function optimization: artificial bee colony (abc) algorithm.
Journal of global optimization 39(3), 459–471 (2007)
[50] Kashef, S., Nezamabadi-pour, H.: An advanced ACO algorithm for feature subset selection. Neurocomputing 147, 271–279 (2015)
[51] Kennedy, J.: Swarm intelligence. In: Handbook of nature-inspired and
innovative computing, pp. 187–219. Springer (2006)
[52] Khadanga, R.K., Padhy, S., Panda, S., Kumar, A.: Design and analysis
of tilt integral derivative controller for frequency control in an islanded
microgrid: A novel hybrid dragonfly and pattern search algorithm ap-
proach. Arabian Journal for Science and Engineering pp. 1–12 (2018)
[53] KS, S.R., Murugan, S.: Memory based hybrid dragonfly algorithm for
numerical optimization problems. Expert Systems with Applications 83,
63–78 (2017)
[54] Kumar, C.A., Vimala, R., Britto, K.A., Devi, S.S.: Fdla: Fractional
dragonfly based load balancing algorithm in cluster cloud model. Cluster
Computing pp. 1–14 (2018)
[55] Langley, P., et al.: Selection of relevant features in machine learning.
In: Proceedings of the AAAI Fall symposium on relevance. vol. 184, pp.
245–271 (1994)
[56] Li, Q., Chen, H., Huang, H., Zhao, X., Cai, Z., Tong, C., Liu, W., Tian,
X.: An enhanced grey wolf optimization based feature selection wrapped
kernel extreme learning machine for medical diagnosis. Computational
and mathematical methods in medicine 2017 (2017)
[57] Li, Y., Li, T., Liu, H.: Recent advances in feature selection and its ap-
plications. Knowledge and Information Systems 53(3), 551–577 (2017)
[58] Liao, T., Kuo, R.: Five discrete symbiotic organisms search algorithms
for simultaneous optimization of feature subset and neighborhood size
of knn classification models. Applied Soft Computing 64, 581 – 595
(2018).
[59] Liu, H., Motoda, H.: Feature selection for knowledge discovery and data
mining, vol. 454. Springer Science & Business Media (2012)
[60] Liu, H., Setiono, R., et al.: A probabilistic approach to feature selection-
a filter solution. In: Thirteenth International Conference on Machine
Learning (ICML). vol. 96, pp. 319–327. Citeseer (1996)
[61] Mafarja, M., Abdullah, S.: Modified great deluge for attribute reduc-
tion in rough set theory. In: Fuzzy Systems and Knowledge Discovery
(FSKD), 2011 Eighth International Conference on. vol. 3, pp. 1464–
1469. IEEE (2011)
[62] Mafarja, M., Abdullah, S.: Investigating memetic algorithm in solving
rough set attribute reduction. International Journal of Computer Ap-
plications in Technology 48(3), 195–202 (2013)
[63] Mafarja, M., Abdullah, S.: Record-to-record travel algorithm for at-
tribute reduction in rough set theory. J Theor Appl Inf Technol 49(2),
507–513 (2013)
[64] Mafarja, M., Abdullah, S.: Fuzzy modified great deluge algorithm for
attribute reduction. In: Recent Advances on Soft Computing and Data
Mining, pp. 195–203. Springer, Cham (2014)
[65] Mafarja, M., Abdullah, S.: A fuzzy record-to-record travel algorithm for
solving rough set attribute reduction. International Journal of Systems
Science 46(3), 503–512 (2015)
[66] Mafarja, M., Aljarah, I., Heidari, A.A., Hammouri, A.I., Faris, H., Al-Zoubi, A.M., Mirjalili, S.: Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems. Knowledge-Based Systems 145, 25–45 (2018)
[67] Mafarja, M., Eleyan, D., Abdullah, S., Mirjalili, S.: S-shaped vs. v-
shaped transfer functions for ant lion optimization algorithm in feature
selection problem. In: Proceedings of the International Conference on
Future Networks and Distributed Systems. p. 14. ACM (2017)
[68] Mafarja, M., Jaber, I., Eleyan, D., Hammouri, A., Mirjalili, S.: Binary
dragonfly algorithm for feature selection. In: 2017 International Confer-
ence on New Trends in Computing Sciences (ICTCS). pp. 12–17 (2017).
[69] Mafarja, M., Mirjalili, S.: Hybrid whale optimization algorithm with
simulated annealing for feature selection. Neurocomputing (2017)
[70] Mafarja, M., Mirjalili, S.: Whale optimization approaches for wrapper
feature selection. Applied Soft Computing 62, 441–453 (2017)
[71] Mirjalili, S.: The ant lion optimizer. Advances in Engineering Software
83, 80–98 (2015)
[72] Mirjalili, S.: Dragonfly algorithm: a new meta-heuristic optimization
technique for solving single-objective, discrete, and multi-objective
problems. Neural Computing and Applications 27(4), 1053–1073 (2016).
[73] Mirjalili, S., Gandomi, A.H., Mirjalili, S.Z., Saremi, S., Faris, H., Mir-
jalili, S.M.: Salp swarm algorithm: a bio-inspired optimizer for engineer-
ing design problems. Advances in Engineering Software 114, 163–191
(2017)
[74] Mirjalili, S., Lewis, A.: S-shaped versus v-shaped transfer functions for
binary particle swarm optimization. Swarm and Evolutionary Compu-
tation 9, 1–14 (2013).
[75] Mirjalili, S., Lewis, A.: The whale optimization algorithm. Advances in
Engineering Software 95, 51–67 (2016)
[76] Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Advances
in engineering software 69, 46–61 (2014)
[77] Moradi, P., Gholampour, M.: A hybrid particle swarm optimization for
feature subset selection by integrating a novel local search strategy.
Applied Soft Computing 43, 117–130 (2016)
[78] Pathania, A.K., Mehta, S., Rza, C.: Economic load dispatch of wind
thermal integrated system using dragonfly algorithm. In: Power Elec-
tronics (IICPE), 2016 7th India International Conference on. pp. 1–6.
IEEE (2016)
[92] ...machine. In: Hassanien, A.E., Shaalan, K., Gaber, T., Tolba, M.F. (eds.) Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2017. pp. 161–170. Springer International Publishing, Cham (2017)
[93] Tharwat, A., Gabel, T., Hassanien, A.E.: Parameter optimization of
support vector machine using dragonfly algorithm. In: International
Conference on Advanced Intelligent Systems and Informatics. pp. 309–
319. Springer (2017)
[94] Vanishree, J., Ramesh, V.: Optimization of size and cost of static var
compensator using dragonfly algorithm for voltage profile improvement
in power transmission systems. International Journal of Renewable En-
ergy Research (IJRER) 8(1), 56–66 (2018)
[95] VeeraManickam, M., Mohanapriya, M., Pandey, B.K., Akhade, S., Kale,
S., Patil, R., Vigneshwar, M.: Map-reduce framework based cluster ar-
chitecture for academic students performance prediction using cumula-
tive dragonfly based neural network. Cluster Computing pp. 1–17 (2018)
[96] Vikram, K.A., Ratnam, C., Lakshmi, V., Kumar, A.S., Ramakanth, R.:
Application of dragonfly algorithm for optimal performance analysis of
process parameters in turn-mill operations-a case study. In: IOP Con-
ference Series: Materials Science and Engineering. vol. 310, p. 012154.
IOP Publishing (2018)
[97] Wu, J., Zhu, Y., Wang, Z., Song, Z., Liu, X., Wang, W., Zhang, Z., Yu,
Y., Xu, Z., Zhang, T., et al.: A novel ship classification approach for
high resolution sar images based on the bda-kelm classification model.
International Journal of Remote Sensing 38(23), 6457–6476 (2017)
[98] Zorarpacı, E., Özel, S.A.: A hybrid approach of differential evolution
and artificial bee colony for feature selection. Expert Systems with Ap-
plications 62, 91–103 (2016)