Artificial Neural Networks and Efficient Optimization Techniques For Applications in Engineering
1. Introduction
This chapter describes some artificial neural network (ANN) neuromodeling
techniques used in association with powerful optimization tools, such as natural
optimization algorithms and wavelet transforms, which can be used in a variety of
applications in Engineering, for example, Electromagnetism (Cruz, 2009), Signal Processing
(Peixoto et al., 2009b) and Pattern Recognition and Classification (Magalhães et al., 2008).
The application of ANN models to RF/microwave devices (Cruz et al., 2009a,
2009b; Silva et al., 2010a) and/or pattern recognition (Lopes et al., 2009) has become common. In
this chapter, we present neuromodeling techniques based on one or two hidden layer
feedforward neural network configurations and modular neural networks − trained with
efficient algorithms, such as Resilient Backpropagation (RPROP) (Riedmiller & Braun, 1993),
Levenberg-Marquardt (Hagan & Menhaj, 1999) and other hybrid learning algorithms
(Magalhães et al., 2008), in order to find the best training algorithm for such investigation, in
terms of convergence and computational cost. The mathematical formulation and
implementation details of neural network models, wavelet transforms and natural
optimization algorithms are also presented.
Natural optimization algorithms, which are stochastic population-based global search
methods inspired by nature, such as the genetic algorithm (GA) and particle swarm
optimization (PSO), are effective for optimization problems with a large number of design
variables and inexpensive cost function evaluation (Kennedy & Eberhart, 1995; R. Haupt &
S. Haupt, 2004). However, the main computational drawback for the optimization of nonlinear
devices lies in the repetitive evaluation of numerically expensive cost functions (Haupt &
Werner, 2007; Rahmat-Samii, 2003). Finding a way to shorten the optimization cycle is
highly desirable. In the case of GA, for example, several schemes are available to
improve its performance, such as: the use of fast full-wave methods; the micro-genetic
algorithm, which aims to reduce the population size; and parallel GA using parallel
computation (R. Haupt & S. Haupt, 2004; Haupt & Werner, 2007). Therefore, this chapter
also describes some hybrid EM-optimization methods, using continuous-GA and PSO
algorithms blended with multilayer perceptron (MLP) artificial neural network models.
These methods are applied to the design of spatial mesh filters, such as frequency selective
surfaces (FSSs). Moreover, the MLP model is used for fast and accurate evaluation of the cost
function in the continuous-GA and PSO simulations, in order to overcome the computational
requirements associated with full-wave numerical simulations.
Wavelets and artificial neural networks (ANNs) have generated enormous interest in recent
years, both in science and in practical applications (Peixoto et al., 2009c). The great advantage of
using wavelets is that these functions exhibit local behavior, not only in the frequency
domain but also in space and time. ANNs are capable of learning from a training
set, which makes them useful in many applications, especially in pattern recognition. The
purpose of using wavelet transforms is to find an easier way to compress and extract the
most important features present in images, thereby creating a vector of descriptors to
be used to optimize the pattern recognition by a neural network. The vector of
descriptors contains elements whose values accurately describe the image content, and
should take up less space than a simple pixel-by-pixel representation. The greatest difficulty
in this process is the generation of this vector, in which the image content of interest
should be very well described, in order to expose really relevant features. Thus, this chapter
also shows a way to use a multiresolution technique, the wavelet
transform, to perform filtering, compression and extraction of image descriptors for later
classification by an ANN.
This chapter is organized in six sections. Section 2 presents the most important
fundamentals of artificial neural networks and the methodology used for the investigated
applications. In section 3, artificial neural networks are optimized using wavelet transforms
for applications in image processing (extraction and compression). Section 4 presents an
EM-optimization using artificial neural networks and natural optimization algorithms for
the optimal synthesis of stop-band filters, such as frequency selective surfaces. Section 5
shows a modular artificial neural network implementation used for pattern recognition and
classification. Finally, section 6 presents important considerations about the artificial neural
network models used in association with efficient optimization tools for applications in
Engineering.
$\mathrm{net}_j = \sum_{i=0}^{N_i} w_{ji}\, x_i$   (1)

$y_j = \varphi(\mathrm{net}_j)$   (2)
For a feedforward neural network (FNN), the artificial neurons are set into layers. Each
neuron of a layer is connected to those of the previous layer, as illustrated in Fig. 1(b). Signal
propagation occurs from input to output layers, passing through the hidden layers of the
FNN. Hidden neurons represent the input characteristics, while output neurons generate
the neural network responses (Haykin, 1999).
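As a brief illustration of (1) and (2), the Python sketch below (with illustrative names, a bias input x_0 = 1, and a hyperbolic tangent activation assumed) computes the output of a single neuron and the forward pass of a fully connected FNN.

```python
import numpy as np

def neuron_output(x, w, phi=np.tanh):
    """Single artificial neuron: net_j = sum_i w_ji * x_i, y_j = phi(net_j), as in (1)-(2).
    x is assumed to include the bias input x_0 = 1."""
    net = np.dot(w, x)
    return phi(net)

def fnn_forward(x, weights, phi=np.tanh):
    """Forward pass of a fully connected FNN: each layer feeds the next one."""
    y = np.asarray(x, dtype=float)
    for W in weights:                 # W has shape (n_out, n_in + 1)
        y = np.append(1.0, y)         # prepend the bias input
        y = phi(W @ y)
    return y
```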
The modular artificial neural network is based on a commonly used principle: divide and
conquer. This concept aims to divide a large and complex task into a set of sub-tasks that are
easier to solve. The modular artificial neural network can be defined, in summary, as
a set of learning machines, also called experts, whose decisions are combined to achieve a
better answer than the answers achieved individually, that is, a machine with better
performance.
In the past few years, one of the main areas of machine learning has been the characterization of
methods capable of designing this kind of machine. There are two types of such machines:
static and dynamic structures. Modular neural networks, as seen in Fig. 1(c), are of the dynamic
type. The input signal is used by the gating network to define the global response. An
advantage of modular artificial neural networks, when compared with other neural
networks, is the learning speed. The learning process is accelerated in
problems where a natural decomposition of the data into simpler functions is observed. To
develop the modular machine architecture and to implement the experts, it is usual to apply
multilayer perceptron (MLP) neural networks.
2.2 Methodology
Generally, the design of a neural network is composed of three main steps: configuration −
how layers are organized and connected; learning − how information is stored;
generalization − how the neural network produces reasonable outputs for inputs not found in
the training set (Haykin, 1999). In this work, we use feedforward and modular neural networks
associated with supervised learning to develop neural network models.
In the computational simulation of the supervised learning process, a training algorithm is used
for the adaptation of the neural network synaptic weights. The instantaneous error e(n), as
defined in (3), represents the difference between the desired response, d(n), and the neural
network output, z(n), at iteration n, corresponding to the presentation of the nth training
pattern [x(n); d(n)] (Silva et al., 2010b).

$e(n) = d(n) - z(n)$   (3)
Supervised learning can be illustrated through the block diagram of Fig. 2(a) and has as
objective the minimization of the mean square error E(t), given in (4), where the index t
represents the number of training epochs (one complete presentation of all training
examples, n = 1, 2,…, N, where N is the total number of examples, called an epoch) (Silva et
al., 2010b).
$E(t) = \frac{1}{N}\sum_{n=1}^{N} \frac{1}{2}\, e(n)^2$   (4)
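A minimal Python sketch of (3) and (4), assuming the desired responses and network outputs for one epoch are stored in arrays:

```python
import numpy as np

def epoch_mse(d, z):
    """Mean square error over one epoch, as in (3)-(4): e(n) = d(n) - z(n)."""
    e = np.asarray(d) - np.asarray(z)
    return np.mean(0.5 * e ** 2)
```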
Currently, there are several algorithms for the training of neural networks that use different
optimization techniques (Peixoto et al., 2009a). The most popular training algorithms are
those derived from the backpropagation algorithm (Rumelhart et al., 1986). Among the family
of backpropagation algorithms, RPROP has shown to be very efficient in the solution of
complex electromagnetic problems. In this work, the stop criteria are defined in terms of the
maximum number of epochs and/or the minimum error, and the activation function used
was the hyperbolic tangent sigmoid (Haykin, 1999).
After training, the neural network is submitted to a test, in order to verify its capability of
generalizing to new values that do not belong to the training dataset, for example, parts of
the region of interest where there is not enough knowledge about the modeled
device/circuit. Therefore, the neural network operates like a “black box” model that is
illustrated in Fig. 2(b) (Silva et al., 2010b).
The resilient backpropagation algorithm is a first-order local adaptive learning scheme. The
basic principle of RPROP is to eliminate the harmful influence of the size of the partial derivative
on the weight update. Only the sign of the derivative is considered to indicate the direction
of the weight update, Δw_hp, as given in (5).
$\Delta w_{hp}(t) = \begin{cases} -\Delta_{hp}(t), & \text{if } \dfrac{\partial E}{\partial w_{hp}}(t) > 0 \\[4pt] +\Delta_{hp}(t), & \text{if } \dfrac{\partial E}{\partial w_{hp}}(t) < 0 \\[4pt] 0, & \text{elsewhere} \end{cases}$   (5)
The second step of the RPROP algorithm is to determine the new update-values Δ_hp(t). This is
based on a sign-dependent adaptation process, similar to the learning-rate adaptation
presented by Jacobs (1988). The size of the weight change is determined exclusively by the
weight-specific 'update-value', Δ_hp, as given in (6).
$\Delta_{hp}(t) = \begin{cases} \eta^{+}\cdot\Delta_{hp}(t-1), & \text{if } \dfrac{\partial E}{\partial w_{hp}}(t-1)\cdot\dfrac{\partial E}{\partial w_{hp}}(t) > 0 \\[4pt] \eta^{-}\cdot\Delta_{hp}(t-1), & \text{if } \dfrac{\partial E}{\partial w_{hp}}(t-1)\cdot\dfrac{\partial E}{\partial w_{hp}}(t) < 0 \\[4pt] \Delta_{hp}(t-1), & \text{elsewhere} \end{cases}$   (6)
Here, the following RPROP parameters were employed: η+ = 1.2 and η− = 0.5. The update-values were restricted to the range 10⁻⁶ ≤ Δ_hp ≤ 50.
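The following Python sketch illustrates one RPROP iteration following (5) and (6); it uses the η+, η− and update-value limits quoted above, omits the gradient-zeroing refinement of some RPROP variants, and all names are illustrative.

```python
import numpy as np

ETA_PLUS, ETA_MINUS = 1.2, 0.5        # eta+ and eta- quoted in the text
DELTA_MIN, DELTA_MAX = 1e-6, 50.0     # allowed range for the update-values

def rprop_step(w, grad, grad_prev, delta):
    """One RPROP iteration for a weight array w, given the current and previous
    gradients dE/dw and the per-weight update-values delta, as in (5)-(6)."""
    sign_change = grad * grad_prev
    # adapt the update-values according to (6)
    delta = np.where(sign_change > 0, np.minimum(delta * ETA_PLUS, DELTA_MAX), delta)
    delta = np.where(sign_change < 0, np.maximum(delta * ETA_MINUS, DELTA_MIN), delta)
    # only the sign of the derivative decides the direction of the step, as in (5)
    step = np.where(grad > 0, -delta, np.where(grad < 0, delta, 0.0))
    return w + step, delta, grad
```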
Fig. 1. (a) Nonlinear model of an artificial neuron; (b) FNN configuration with two hidden
layers; (c) Extended modular neural network configuration with K experts
Fig. 2. (a) Block diagram of supervised learning; (b) neural network “black box” model
The Fourier transform of a signal f(t) is given by (7):

$F(w) = \int_{-\infty}^{+\infty} f(t)\, e^{-j 2\pi f t}\, dt$   (7)
Knowing the spectrum F(w) of a signal, it is possible to obtain it in the time domain, using the
inverse transform concept, according to (8):

$f(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} F(w)\, e^{j 2\pi f t}\, dw$   (8)
On the other hand, the Continuous Wavelet Transform (CWT) is given by:

$CWT(\tau, a) = \frac{1}{\sqrt{a}}\int_{-\infty}^{+\infty} f(t)\, \psi^{*}\!\left(\frac{t-\tau}{a}\right) dt$   (9)
and its corresponding inverse can be expressed according to (10):

$f(t) = \frac{1}{C_{\psi}}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} CWT(\tau, a)\left\{\frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-\tau}{a}\right)\right\}\frac{d\tau\, da}{a^{2}}$   (10)
where ψ(t) is the mother wavelet, and τ and a are the translation and scale parameters,
respectively.
$\psi_{s,\tau}(t) = \frac{1}{\sqrt{s}}\,\psi\!\left(\frac{t-\tau}{s}\right)$   (11)

$\psi_{j,k}(t) = \frac{1}{\sqrt{s_0^{j}}}\,\psi\!\left(\frac{t - k\tau_0 s_0^{j}}{s_0^{j}}\right)$   (12)
where j and k are integers; s0 > 1 is a fixed dilation parameter; τ 0 is the translation factor,
which depends on the dilation factor.
$\psi_{j,k}(t) = 2^{j/2}\,\psi\!\left(2^{j} t - k\right)$   (13)
When discrete wavelets are used to analyze a signal, the result is a series of wavelet
coefficients, also called the wavelet decomposition series (Oliveira, 2007). As a wavelet
can be viewed as a bandpass filter, the scaled wavelet series can be seen as a set of bandpass
filters with a constant Q factor (the fidelity factor of the filter bank). In practice, the wavelet is
discretized, with upper and lower limits for translations and scales. The wavelet discretization,
associated with the idea of passing the signal through a filter bank, results in the well-known
subband coding (Oliveira, 2007).
$\int_{-\infty}^{+\infty} \phi_{j,k}(x)\,\psi_{j,k}(x)\, dx = 0$   (14)
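As an illustration of the descriptor-extraction idea, the Python sketch below uses the PyWavelets package (assumed available; the exact descriptor adopted by the authors is not reproduced here) to build a compact feature vector from the 2-D DWT of an image:

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def wavelet_descriptor(image, wavelet="haar", level=2):
    """Illustrative descriptor from the 2-D DWT subbands: the approximation
    subband keeps most of the image energy (compression), while the energies of
    the detail subbands summarize edges; together they form the ANN input vector."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    approx = coeffs[0].ravel()                          # low-pass (compressed) content
    detail_energy = [np.sum(np.square(band))            # one energy value per subband
                     for bands in coeffs[1:] for band in bands]
    return np.concatenate([approx, detail_energy])
```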
                   PP        TT (min)   Error (×10⁻⁴)   GE
Without wavelet    4096×50   4.27       1.8120          94%
With wavelet       1000×50   1.91       7.4728          96%
the classification step, from the training time to the generalization of results. It is worth
noting that not only the method itself, but also its ability to reduce the data matrix while
preserving the meaning of the information, was of extreme importance in reducing the
computational cost by a factor of more than two.
Fig. 10. (a) The conventional geometry of a FSS; (b) some patch element shapes; (c) thin
dipole unit cell configuration
There is no closed-form solution directly from a given desired frequency response to the
corresponding FSS (Hwang et al., 1990). The analysis of the scattering characteristics of FSS
devices requires the application of rigorous full-wave techniques. Besides that, due to the
computational complexity of using a full-wave simulator to evaluate the FSS scattering
variables, many electromagnetic engineers still use a trial-and-error process until a given
design criterion is achieved. Obviously, this procedure is very laborious and human dependent.
Therefore, optimization techniques are required to design practical FSSs with desired filter
specifications. Some authors have employed neural networks, PSO and GA for FSS design
and optimization (Cruz et al., 2009b; Silva et al., 2010a; 2010b).
Based on the cost associated with each chromosome, the population evolves through
generations with the application of genetic operators, such as selection, crossover and
mutation. Fig. 11(a) gives a “big picture” overview of the continuous GA. The mating step
includes the roulette wheel selection presented in (Haupt & Werner, 2007; R. Haupt & S. Haupt,
2004). Population selection is performed after the Npop chromosomes are ranked from lowest
to highest cost. Then, the Nkeep most-fit chromosomes are selected to form the mating pool
and the rest are discarded to make room for the new offspring. Mothers and fathers pair in a
random fashion through the blending crossover method (R. Haupt & S. Haupt, 2004). Each
pair produces two offspring that contain traits from each parent. In addition, the parents
survive to be part of the next generation. After mating, a fraction of the chromosomes in the
population suffers mutation: a normally distributed random number is added to each
chromosome variable selected for real-value mutation (Cruz, 2009; Silva et al., 2010b).
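A simplified Python sketch of one continuous-GA generation is given below; for brevity the roulette-wheel selection is replaced by random pairing of the mating pool, and the blending and mutation details are illustrative assumptions rather than the exact scheme used in the chapter.

```python
import numpy as np

rng = np.random.default_rng()

def ga_generation(pop, cost_fn, n_keep, mutation_rate=0.01, sigma=0.1):
    """One continuous-GA generation: rank the (N_pop, n_var) array pop, keep the
    n_keep fittest, blend-crossover random parent pairs, apply Gaussian mutation."""
    costs = np.array([cost_fn(ind) for ind in pop])
    pop = pop[np.argsort(costs)]                     # rank from lowest to highest cost
    parents = pop[:n_keep]                           # mating pool; the rest is discarded
    children = []
    while len(children) < len(pop) - n_keep:
        ma, pa = parents[rng.integers(n_keep, size=2)]
        beta = rng.random(ma.shape)                  # blending factors
        children.append(beta * ma + (1 - beta) * pa)
        children.append(beta * pa + (1 - beta) * ma)
    new_pop = np.vstack([parents, children[: len(pop) - n_keep]])
    # real-value mutation of a fraction of the variables, sparing the elite (row 0)
    mutate = rng.random(new_pop.shape) < mutation_rate
    new_pop[1:] += mutate[1:] * rng.normal(0.0, sigma, new_pop.shape)[1:]
    return new_pop
```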
The PSO algorithm also starts with a random population matrix. Unlike GA, PSO has no evolution
operators such as crossover and mutation. Each particle moves along the cost surface with an
individual velocity. A flow chart of the PSO algorithm is shown in Fig. 11(b). The implemented PSO
algorithm updates the velocities and positions of the particles based on the best local and
global solutions according to (17) and (18), respectively (Cruz, 2009; Silva et al., 2010b).
$v_{m,n}^{k+1} = C\left[ r_0\, v_{m,n}^{k} + \Gamma_1 \cdot r_1 \cdot \left(p_{m,n}^{\mathrm{local\,best}(k)} - p_{m,n}^{k}\right) + \Gamma_2 \cdot r_2 \cdot \left(p_{m,n}^{\mathrm{global\,best}(k)} - p_{m,n}^{k}\right)\right]$   (17)

$p_{m,n}^{k+1} = p_{m,n}^{k} + v_{m,n}^{k+1}$   (18)
Here, $v_{m,n}$ is the particle velocity, $p_{m,n}$ are the particle variables, $r_0$, $r_1$ and $r_2$ are independent
uniform random numbers; $\Gamma_1$ and $\Gamma_2$ are the cognitive and social parameters, respectively;
$p_{m,n}^{\mathrm{local\,best}(k)}$ and $p_{m,n}^{\mathrm{global\,best}(k)}$ are the best local and global solutions, respectively; C is the
constriction parameter (Kennedy & Eberhart, 1995). If the best local solution has a cost lower
than the cost of the current best global solution, then the best local solution replaces the best
global solution. PSO is a very simple natural optimization algorithm, easy to implement and
with few parameters to adjust.
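A Python sketch of one particle update following (17) and (18), using the constriction and acceleration parameters of Table 2 as defaults (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng()

def pso_step(p, v, p_local_best, p_global_best, C=0.8, gamma1=2.0, gamma2=2.0):
    """One PSO iteration: constricted velocity update driven by the best local and
    global solutions, as in (17), followed by the position update of (18)."""
    r0, r1, r2 = rng.random(3)
    v_new = C * (r0 * v
                 + gamma1 * r1 * (p_local_best - p)
                 + gamma2 * r2 * (p_global_best - p))
    p_new = p + v_new
    return p_new, v_new
```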
Fig. 11. Flow charts of the natural optimization algorithms: (a) GA; (b) PSO
The FSS responses of interest are the resonant frequency (fr) and −3 dB bandwidth (BW), given as
functions of the substrate thickness (d) and the periodicity of the elements (t = tx = ty)
corresponding to a square unit cell. The dimensions of the thin dipole elements used in this work
are shown in Fig. 10(c). The width (W) and length (L) of the patch remain the same. The region of
interest (or search space) defined by the desired input variables is a rectangle: 15.24 ≤ t ≤ 19.05 mm
and 0.1 ≤ d ≤ 2.0 mm.
The FSS optimization problem consists of obtaining an optimal solution (d*; t*) in the search
space that results in a minimal thickness FSS with the desired specifications for resonant
frequency and bandwidth. After the definition of the FSS filter type, design input variables
and search space, a parametric analysis is performed in order to observe the FSS EM
behavior. At this step, the FSS transmission coefficients in the 6.0−14.0 GHz band were obtained
by means of Method of Moments (MoM) simulations considering substrate anisotropy
(Campos et al., 2001). Representative EM datasets must be obtained from the parametric
analysis for the supervised learning of the FSS synthesis MLP models.
The second step consists of modeling the fr and BW responses by the synthesis MLP network
using the conventional neuromodeling technique (Zhang & Gupta, 2000). The third step
consists of implementing the natural optimization algorithms for optimal FSS synthesis. In
particular, the continuous-GA and PSO algorithms were used.
Through the FSS neuromodeling, we avoid the very CPU-intensive MoM analysis in the GA and
PSO simulations. The ANN model was developed for FSS synthesis using a one-hidden-layer
MLP configuration. The MLP learning processes were carried out using the RPROP training
algorithm, with standard parameters (Riedmiller & Braun, 1993), in order to minimize the mean
square error as defined in (4). In this case, the RPROP training is performed until the mean
square error reaches a pre-established minimum value. The synthesis MLP network
configuration (composed of three inputs: −1, d, t; twenty hidden neurons; and two outputs: fr,
BW) simultaneously approximates both mappings fr(d,t) and BW(d,t). The MLP configuration
is similar to that presented in Fig. 1(b). To generate the synthesis training dataset, the vectors
given in (19) were assumed for the design input variables d and t, adding up to 72 training
examples.
$d = [0.1\;\; 0.3\;\; 0.5\;\; 0.7\;\; 0.9\;\; 1.0\;\; 1.2\;\; 1.4\;\; 1.5\;\; 1.6\;\; 1.8\;\; 2.0]$ mm
$t_x = t_y = t = [15.24\;\; 15.87\;\; 16.51\;\; 17.14\;\; 18.41\;\; 19.05]$ mm   (19)
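For illustration, the 72 training examples implied by (19) are the Cartesian product of the two vectors; the Python sketch below builds the [−1, d, t] input matrix (the target fr and BW values would come from the MoM parametric analysis):

```python
import numpy as np

# every combination of the 12 thickness values and 6 periodicity values, as in (19)
d = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.2, 1.4, 1.5, 1.6, 1.8, 2.0])   # mm
t = np.array([15.24, 15.87, 16.51, 17.14, 18.41, 19.05])                      # mm
dd, tt = np.meshgrid(d, t, indexing="ij")
train_inputs = np.column_stack([-np.ones(dd.size), dd.ravel(), tt.ravel()])   # [-1, d, t]
assert train_inputs.shape == (72, 3)
```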
GA and PSO algorithms were implemented in Matlab® for optimal FSS synthesis. The
assumed design input variables (substrate thickness and periodicity) were symbolized by
$d_m^k$ and $t_m^k$ to take into account the m-th individual at the k-th iteration. Given a desired FSS
filter specification, ($fr_{\mathrm{desired}}$, $BW_{\mathrm{desired}}$), the aim is the minimization of the quadratic cost
function defined in (20) in terms of absolute percent errors.
$\mathrm{cost}(k, m) = \left(\frac{fr_{\mathrm{desired}} - fr\!\left(d_m^k, t_m^k\right)}{fr_{\mathrm{desired}}} + \frac{BW_{\mathrm{desired}} - BW\!\left(d_m^k, t_m^k\right)}{BW_{\mathrm{desired}}}\right)^{2}, \quad m = 1, 2, \ldots, N_{pop}$   (20)
To evaluate the cost function in (20), the synthesis MLP approximates the EM relations
$fr\!\left(d_m^k, t_m^k\right)$ and $BW\!\left(d_m^k, t_m^k\right)$. As the population evolves, each individual is constrained to
the region of interest using (21), where the dummy variable ξ can be replaced by d or t.
$\xi_m^k = \min\!\left(\max\!\left(\xi_m^k, \xi_{\min}\right), \xi_{\max}\right)$   (21)
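The cost evaluation of (20) with the MLP surrogate and the clipping of (21) could be sketched in Python as follows; mlp_predict stands for the trained synthesis MLP, and its interface here is an assumption:

```python
import numpy as np

def cost(individual, mlp_predict, fr_desired, bw_desired):
    """Cost of one individual (d, t), as in (20); mlp_predict is assumed to return
    (fr, BW) for the MLP inputs [-1, d, t]."""
    d, t = individual
    fr, bw = mlp_predict(np.array([-1.0, d, t]))
    return ((fr_desired - fr) / fr_desired + (bw_desired - bw) / bw_desired) ** 2

def constrain(individual, lower=(0.1, 15.24), upper=(2.0, 19.05)):
    """Clip (d, t) back into the region of interest, as in (21)."""
    return np.minimum(np.maximum(individual, lower), upper)
```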
The parameters used for the continuous-GA and PSO simulations are shown in Table 2.

Continuous-GA: initial population = 50; maximum number of iterations (generations) = 500; crossover probability = 0.9; mutation rate = 0.01.
PSO: initial population = 25; maximum number of iterations = 200; constriction parameter C = 0.8; cognitive parameter Γ1 = 2; social parameter Γ2 = 2.

Table 2. GA and PSO parameters used in the simulations
Fig. 12. Transmission coefficients of the thin dipole FSS: (a) first and (b) second examples
In this section, examples that use the hybrid EM-optimization technique for the optimal
synthesis of FSSs are presented. Random values for the resonant frequency and bandwidth
were chosen within the search space used for the MoM simulations of the synthesis MLP
network. The first example corresponds to output data (fr; BW) = (10.5; 1.5) GHz and the
second example corresponds to output data (fr; BW) = (11.0; 2.5) GHz.
Figs. 12(a) and 12(b) show the transmission coefficients obtained from MoM, GA and PSO
simulations for the first and second examples, respectively.
It is observed that, although the GA response is closer to the MoM simulations, the PSO
algorithm presents the best solutions for resonant frequency and bandwidth when compared to
the desired values. Table 3 shows the input/output values obtained from both algorithms.
The cost surface contours, the initial, intermediate and final populations, as well as the best
path for both algorithms, can be found in (Silva et al., 2010a; 2010b). One can also find a new
version of the GA, called improved GA, in (Cruz, 2009).
$g_i(n) = \frac{\exp\!\left(u_i^{(l)}(n)\right)}{\sum_{j=1}^{K} \exp\!\left(u_j^{(l)}(n)\right)}$   (22)
where $u_i^{(l)}(n)$ is the i-th output neuron of the l-th layer of the gating network and is given
by:

$u_i^{(l)}(n) = \varphi_{pi}^{(l)}\!\left(v_{pi}^{(l)}(n)\right)$   (23)
where $\varphi_{pi}^{(l)}$ is the activation function applied to the activation potential $v_{pi}^{(l)}$ of the i-th neuron
of the l-th layer of the gating network, as shown in (24):
$v_{pi}^{(l)}(n) = \sum_{j=0}^{q_{Pas}^{(l)}} a_{ij}^{(l)}\, u_j^{(l-1)}(n)$   (24)
where $a_{ij}^{(l)}$ is the weight connecting the j-th input to neuron i of the l-th layer of the gating
network.
$h_i(n) = \frac{g_i(n)\exp\!\left(-\frac{1}{2}\left\|d(n) - y_{e_i}^{(K)}(n)\right\|^{2}\right)}{\sum_{j=1}^{K} g_j(n)\exp\!\left(-\frac{1}{2}\left\|d(n) - y_{e_j}^{(K)}(n)\right\|^{2}\right)}$   (25)
where d(n) is the desired response and $y_{e_i}^{(K)}(n)$ is the actual response of the i-th neuron of the
l-th layer of the K-th expert, as shown in (26):
where $\varphi^{(l)}$ is the derivative of the activation function at the activation potential $v^{(l)}$ of the
i-th neuron, l-th layer and k-th expert.
where η is the learning rate, and the gradient $\delta_{e_i}^{(l)(k)}$ for the output layer neurons is obtained
by:
where $e_i$ is the difference between $d_i$ and $y_{e_i}^{(K)}$. The gradient for the neurons of the hidden layers
is computed as:
$\delta_{e_i}^{(l)}(n) = \varphi_{e_i}'^{(l)}\!\left(v_{e_i}^{(l)}(n)\right) \sum_{m=1}^{q_{Esp_i}^{(l+1)}} \delta_m^{(l+1)}(n)\, w_{m e_i}^{(l+1)}(n)$   (29)
The synaptic weight update of the gating network is accomplished according to (30):
The error is the difference between hi and gi. The gradient of hidden layers is calculated by:
$\delta_{pi}^{(l)}(n) = \varphi_{pi}'^{(l)}\!\left(v_{pi}^{(l)}(n)\right) \sum_{m=1}^{q_{Pas}^{(l+1)}} \delta_m^{(l+1)}(n)\, a_{m pi}^{(l+1)}(n)$   (32)
m=1
Thus, the error backpropagates from the gating network to its hidden layers.
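To make the gating computations concrete, the Python sketch below evaluates the softmax activations of (22) and the a-posteriori responsibilities of (25) for a set of expert outputs (the names and the numerical-stability shift are illustrative):

```python
import numpy as np

def gating_outputs(u):
    """Softmax gating activations g_i, as in (22), from the gating-network outputs u_i."""
    e = np.exp(u - np.max(u))           # shifted for numerical stability
    return e / np.sum(e)

def posterior(g, d, expert_outputs):
    """A-posteriori responsibilities h_i, as in (25): each expert's prior g_i is
    reweighted by how well its output y_ei matches the desired response d."""
    lik = np.array([np.exp(-0.5 * np.linalg.norm(d - y) ** 2) for y in expert_outputs])
    h = g * lik
    return h / np.sum(h)
```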
6. Conclusion
This chapter described some of the neuromodeling methodology used for applications in
various areas of Engineering, in particular, EM-optimization, Signal Processing and Pattern
Classification and Recognition. Some original contributions were shown, such as the hybrid
EM-optimization for optimal design of FSS, the artificial neural network optimization using
wavelet transforms and the modular neural network used to pattern classification and
recognition.
The choice of the activation function of an FNN or a modular neural network strongly
influences the neural model performance. However, there is no universal activation
function that can be used to solve any kind of nonlinear modeling problem. Using
additional information from the hybrid neural models, as well as sharing the training
process among modular neural networks, increased the efficiency and therefore the
generalization ability and reliability of the resulting neural models.
The fast and accurate results obtained for all the applications demonstrated the
improvements brought by these models, and showed that the MLP network
global approximations are able to generalize. In addition, the idea of blending artificial
neural networks with natural optimization algorithms and mathematical transforms
(for EM-optimization and pattern recognition, respectively) has shown to be very interesting.
The PSO algorithm needs fewer individuals and reaches the best solution in fewer iterations
than the GA. The PSO implementation is simpler than that of GA, since it does not
require genetic operators such as crossover and mutation. Therefore, the PSO
algorithm has proved to be an interesting EM-optimization tool, with few parameters to
adjust and low computational cost.
The characteristics of ANN models (precision, CPU efficiency and flexibility) can be
perfectly exploited in association with these optimization techniques in order to develop
powerful soft computing tools. A good point in combining these complementary tools is the
possibility of multiprocessing, using parallel processors and computers, not only
to increase performance and execution speed, but also to enable an improved type of
competition and collaboration. Each individual tool allows solving the problem or part of
the problem and, at the same time, the tools can collaborate with each other to improve the
solution at a higher level. This is the kind of intelligent computing that this chapter aims to apply
to efficient engineering.
7. References
Beale, E. M. L. (1972). A derivation of conjugate gradients, In: Numerical Methods for
Nonlinear Optimization, F. A. Lootsma, (Ed.), Academic Press, London.
Bhatia, P.; Bharti, S. & Gupta, R. (2009). Comparative analysis of image compression
techniques: a case study on medical images, In: International Conference on Advances
in Recent Technologies in Communication and Computing, pp. 820-822.
Campos, A. L. P. de S. (2009). Superfícies seletivas em frequência: análise e projeto, IFRN, 198 p.,
Natal, RN, Brazil.
Campos, A. L. P. S.; D’Assunção, A. G. & Melo, M. A. B. (2001). Investigation of scattering
parameters for frequency selective surfaces using thin dipoles on anisotropic layers.
Proceedings of the SBMO/IEEE MTT-S International Microwave and Optoelectronics
Conference, Vol. 1, pp. 413-416.
Castleman, K. R. (1996). Digital Imaging Processing. Prentice Hall, 1st Ed, Englewood Cliffs,
New Jersey.
Cruz, R. M. S. (2009). Análise e otimização de superfícies seletivas de frequência utilizando
redes neurais artificiais e algoritmos de otimização natural, Ph.D. Dissertation,
Federal University of Rio Grande do Norte, Natal, RN, Brazil.
Cruz, R. M. S.; Santos, A. F. & D’Assunção, A. G. (2010). Spectral analysis of electromagnetic
waves in uniaxial anisotropic dielectric materials, Proceedings of 14º SBMO –
Simpósio Brasileiro de Microondas e Optoeletrônica and 9º CBMag – Congresso
Brasileiro de Eletromagnetismo, MOMAG2010, pp. 440-444, Vila Velha, ES, Brazil.
Cruz, R. M. S.; Silva, P. H. da F. & D’Assunção, A. G. (2009a). Neuromodeling stop-band
properties of Koch island patch elements for FSS filter design. In: Microwave and
Optical Technology Letters, Vol. 51, No. 12, pp. 3014-3019.
__________. (2009b). Synthesis of crossed dipole frequency selective surfaces using genetic
algorithms and artificial neural networks, Proceedings of the 2009 International Joint
Conference on Neural Networks, pp. 627-633, ISBN 978-1-4244-3553-1, Atlanta, GA,
USA.
Dietterich, T. G. (1998). Machine-learning research: four current directions, The AI Magazine,
Vol. 4, pp. 97–136.
Dimitrakakis, C. & Bengio, S. (2005). Online adaptive policies for ensemble classifiers.
Neurocomputing, Vol. 64, pp. 211-221.
Genovesi, S. et al. (2006). Particle swarm optimization of frequency selective surfaces for the
design of artificial magnetic conductors, In: IEEE Antennas and Propagation Society
International Symposium, Vol. 9, No. 14, pp. 3519–3522.
Hagan, M. T. & Menhaj, M. (1999). Training feedforward networks with the Marquardt
algorithm. IEEE Transactions on Neural Networks, Vol. 5, No. 6, pp. 989-993.
Haupt, R. L. & Haupt, S. (2004). Practical genetic algorithms, Wiley & Sons, Inc., 2nd Ed, New
Jersey.
Haupt, R. L. & Werner, D. H. (2007). Genetic algorithms in electromagnetics, John Wiley & Sons,
Inc., 1st Ed, New York.
Hwang, J. N.; Chan, C. H. & Marks, R. J. (1990). Frequency selective surface design based on
iterative inversion of neural networks. Proceedings of the International Joint Conference
on Neural Networks, Vol. 1, pp. 39-44.
Haykin, S. (1999). Neural networks: a comprehensive foundation, Prentice-Hall, 2nd Ed, Upper
Saddle River, New Jersey.
Jacobs, R. A.; Jordan, M. I.; Nowlan, S. J. & Hinton, G. E. (1991). Adaptive mixture of local
experts. Neural Computation, MIT Press, Vol. 3, pp. 79-87.
Kennedy, J. & Eberhart, R. C. (1995). Particle swarm optimization. Proceedings of IV IEEE
Conference on Neural Networks, Piscataway, New Jersey.
Kuncheva, L. I. (2004). Combining pattern classifiers: methods and algorithms, John Wiley &
Sons, Inc., New Jersey.
Liu, J. C. et al. (2004). Implementation of broadband microwave absorber using FSS screens
coated with Ba(MnTi)Fe10O19 ferrite, In: Microwave and Optical Technology Letters,
Vol. 41, No. 4, pp. 323−326.
Lopes, D. C. et al. (2009). Implementation of a Modular Neural Network in a Multiple
Processor System on FPGA to Classify Electric Disturbance. In: Industrial Electronics,
2009. IECON '09. 35th Annual Conference of IEEE, Porto, Portugal.
http://www.ling.upenn.edu/courses/Fall_2007/cogs501/Rosenblatt1958.pdf.
Rumelhart, D. E.; Hinton, G. E. & Williams, R. J. (1986). Learning internal representations by
error propagation. In: Parallel Distributed Processing: Explorations in the
microstructure of cognition, D. E. Rumelhart & J. L. McClelland (Ed.), MIT Press, Vol.
1, pp. 318-362, Cambridge, MA.
Silva, P. H. da F. & Campos, A. L. P. S. (2008). Fast and accurate modeling of frequency
selective surfaces using a new modular neural network configuration of multilayer
perceptrons, IET Microwaves, Antennas & Propagation, Vol. 2, No. 5, pp. 503-511.
Silva, P. H. da F.; Cruz, R. M. S & D’Assunção, A. G. (2010a). Blending PSO and ANN for
optimal design of FSS filters with Koch island patch elements, IEEE Transactions on
Magnetics, Vol. 46, No. 8.
__________. (2010b). Neuromodeling and natural optimization of nonlinear devices and
circuits, In: System and Circuit Design for Biologically-Inspired Intelligent Learning,
Turgay Temel (Ed.), IGI Global, ISBN 9781609600181, Hershey PA. In press.
Silva, P. H. da F.; Lacouth, P.; Fontgalland, G.; Campos, A. L. P. S. & D'Assunção, A. G.
(2007). Design of frequency selective surfaces using a novel MoM-ANN-GA
technique. Proceedings of the SBMO/IEEE MTT-S International Microwave and
Optoelectronics Conference, Vol. 1, pp. 275-279.
Valentini, G. & Masulli, F. (2002). Ensemble of learning machines, in Neural Nets WIRN
Vietri-02, Series Lectures Notes in Computer Sciences, Springer-Verlag.
Vimal, P. S.; Kumar, R. & Arumugabathan, C. (2009). Wavelet based ocular artifact removal
from EEG signals using ARMA method and adaptive filtering, pp. 667-671.
Wang, X. Q. G.; Liu, D. G. Z.; Li, Z. & Wang, H. (2009). Object categorization using
hierarchical wavelet packet texture descriptors, In: 11th IEEE International
Symposium on Multimedia, pp. 44-51.
Weeks, M. (2007). Digital Signal Processing Using MATLAB and Wavelets, Infinity Science
Press LLC, Hingham, Massachusetts.
Wu, T. K. (1995). Frequency selective surface and grid array, John Wiley & Sons, Inc., ISBN 0-
471-31189-8, New York, USA.
Zhang, Q. J. & Gupta, C. (2000). Neural networks for RF and microwaves design, Artech House,
Norwood, MA.