
ARTICLE IN PRESS

European Journal of Operational Research xxx (2008) xxx-xxx

Contents lists available at ScienceDirect

European Journal of Operational Research


journal homepage: www.elsevier.com/locate/ejor

Invited Review

A review of Hopfield neural networks for solving mathematical programming problems

Ue-Pyng Wen a,1, Kuen-Ming Lan a,1, Hsu-Shih Shih b,*

a Department of Industrial Engineering & Engineering Management, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC
b Graduate Institute of Management Sciences, Tamkang University, Tamsui, Taipei 25137, Taiwan, ROC

Article info

Article history:
Received 1 October 2007
Accepted 4 November 2008
Available online xxxx
Keywords:
Hopfield neural networks
Energy function
Mathematical programming
Penalty function
Lagrange multiplier
Primal and dual functions

Abstract
The Hopfield neural network (HNN) is one major neural network (NN) for solving optimization or mathematical programming (MP) problems. The major advantage of HNN is that its structure can be realized on an electronic circuit, possibly on a VLSI (very-large-scale integration) circuit, as an on-line solver with a parallel-distributed process. The structure of HNN utilizes three common methods, penalty functions, Lagrange multipliers, and primal and dual methods, to construct an energy function. When the function reaches a steady state, an approximate solution of the problem is obtained. Under these classes of methods, we further organize HNNs by three types of MP problems: linear, non-linear, and mixed-integer. The essentials of each method are also discussed in detail. Remarks on utilizing HNNs and their difficulties are then addressed for the benefit of subsequent investigations. Finally, conclusions are drawn and directions for future study are provided.
© 2008 Elsevier B.V. All rights reserved.

1. Introduction
In the past two decades, artificial neural networks or neural networks (NNs) have been widely developed for solving mathematical programming (MP) or optimization problems. Among various NNs, a couple of networks have been utilized for optimization (Looi, 1992; Kumar, 2005), such as Hopfield neural networks (HNNs) (Hopfield, 1982), self-organizing feature maps (Kohonen, 1982), and Boltzmann machines (Ackley et al., 1985). As HNNs and general NN structures have been developed for almost all classes of optimization problems, it is worth reviewing their underlying techniques, benefits and drawbacks, and future perspectives, so that the presentation will enrich the theories and applications of HNNs in the future.

Research on neurons and their networks for information processing has drawn much attention from scholars in the fields of physics, electronics, and electricity. They have tried to formulate problems mimicking the behavior of biological NNs so as to abstract the principles of computation employed by the brain. Hopfield and Tank made a momentous contribution to neural computation by solving a traveling salesman problem (TSP) and a linear programming (LP) problem (Hopfield, 1982, 1984; Hopfield and Tank, 1985; Tank and Hopfield, 1986). Although Shirazi and Yih
* Corresponding author. Tel.: +886 3 574 2653; fax: +886 3 572 2204.
E-mail addresses: [email protected] (U.-P. Wen), [email protected] (H.-S. Shih).
1 Tel.: +886 2 8631 3221; fax: +886 2 8631 3214.

(1989) pointed out some drawbacks of the approach, further improvements and verifications have been made by later investigators (Ricanek et al., 1999).
Constrained optimization problems in general assume that the minimization (or maximization) of some objective cost (or benefit) function is subject to various constraints on the independent variables, and such problems are common in areas such as science, engineering, and business (Bazaraa et al., 2005, 2006). Technically, the HNN approach to optimization handles a dynamic system in which an energy function, or Lyapunov function (see Appendix A), characterizes the behavior of the network and represents the problem to be solved. Its architecture can be realized by an electronic circuit, possibly a VLSI circuit, as an on-line solver with a parallel-distributed process (Cichocki and Unbehauen, 1993). These special characteristics are beneficial for real-time optimization and have been applied in many areas such as pattern recognition, scheduling, manufacturing, and numerous business applications (Looi, 1992; Sharda, 1994; Smith and Gupta, 2000; Wong et al., 2000; Zhang and Huang, 1995).
Due to renewed interest in NNs through extensions of HNNs, various NNs have been proposed. For instance, Kennedy and Chua (1988) extended Tank and Hopfield's work to solve non-linear programming (NLP) problems. Rodríguez-Vázquez et al. (1988, 1990) presented switched-capacitor NNs for solving LP and NLP problems. Later, Zhang and Constantinides (1992) introduced Lagrange NNs for solving general NLP problems, and Cichocki and Unbehauen (1993) proposed a Lagrange-multiplier-method-based NN for solving NLP problems. Xia et al. (2005) offered a

0377-2217/$ - see front matter © 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.ejor.2008.11.002

Please cite this article in press as: Wen, U.-P. et al., A review of Hopeld neural networks for solving mathematical ..., European Journal of
Operational Research (2008), doi:10.1016/j.ejor.2008.11.002


primal and dual method for solving linear and quadratic programming problems. The studies of NNs for optimization have nonetheless appeared in diversified journals. Interested readers can refer to Computers and Operations Research (No. 3-4, Vol. 19, 1992), Neurocomputing (No. 1-3, Vol. 8, 1995), and European Journal of Operational Research (No. 2, Vol. 93, 1996). A comprehensive bibliography of this literature can be taken from books and papers (Klimasauskas, 1989; Looi, 1992; Sharda, 1994; Widrow et al., 1994; Wilson and Sharda, 1992).
Because there has been no systematic review of the many articles on HNNs for optimization up until now, we intend to classify the essence of these HNNs for optimization as a guideline for future studies. Their special structures, such as penalty function methods, Lagrange multiplier methods, and primal and dual methods, are also illustrated. In particular, the energy functions of the networks characterize their behavior and guide the search for solutions in real-time optimization. These functions are the key to bridging HNNs and constrained optimization problems. On account of this significance, we review the literature based on three technical categories: penalty functions, Lagrange functions, and primal and dual functions.
The contents are organized as follows. Section 2 discusses general functions and basic aspects of HNNs. The penalty function methods, Lagrange multiplier methods, and primal and dual functions are reviewed in Sections 3-5, respectively. The uses of HNNs for LP, NLP, and mixed-integer linear programming (MILP) are presented for each method. In Section 6, remarks are made to illustrate possible difficulties in utilizing HNNs, and finally, conclusions and directions for future studies are given.
2. General function and basic aspects for HNNs
Neural networks have been characterized in various ways according to many relevant features (Basheer and Hajmeer, 2000; Sima and Orponen, 2001). In general, NNs have two kinds of structures (Zurada, 1992). Both have to be configured such that the application of a set of inputs produces (either directly or via a relaxation process) the desired set of outputs, and various methods exist to set the strengths of the connections. One way is to set the weights explicitly, using a priori knowledge. The other way is to train the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule. HNNs belong to the non-training models. In particular, HNNs are considered feedback (or recurrent) networks without learning from a pattern set (Zurada, 1992). An HNN consists of a set of neurons and a corresponding set of unit delays, forming a multiple-loop feedback system. Each feedback loop corresponds to a neuron. Basically, the output of each neuron is fed back, via a unit delay element, to each of the other neurons in the system.
HNNs are not affected by additional inputs; each neuron is fully connected with the remaining neurons by weights and repeatedly updates its state over these connections. Associative memory, or content-addressable memory, is a memory organization that accesses memory by its contents and always searches for the closest match among all stored prototypes (Du and Swamy, 2006). This is analogous to the storage of information in the associative memory of biological neurons (Kohonen, 1982). In other words, the process of association and information retrieval is simulated by the dynamical behavior of a highly interconnected system of non-linear circuit elements with collective computational abilities (Hopfield, 1982).
Due to the above characteristics, HNNs are easily realized for parallel computation as a result of their fully connected property. The networks use electric circuits to simulate the actions of biological neurons, and the basic model can be implemented by interconnecting an array of resistors, non-linear amplifiers with symmetrical outputs, and external bias currents, as shown in Fig. 1. There are n neurons, and each neuron is represented by a resistor g_i, a capacitance c_i, and a non-linear amplifier with an activation function \psi(u_i), i = 1, 2, ..., n, respectively. The resistance-capacitance charging equation determines the rate of change of u_i, i = 1, 2, ..., n, where u_i is the input voltage of the amplifier. The input is mapped to the output voltage v_i or \bar{v}_i (normal or inverted) through the function \psi(u_i). The choice of connecting the normal or inverted output depends on the positive or negative value of the conductance or weight w_ij, i = 1, 2, ..., n, j = 1, 2, ..., n. The input of each neuron comes from two sources: external inputs I_i and inputs from the other neurons with interconnection strengths or weights w_ij from neuron j to neuron i (Hopfield, 1982). In addition, Hopfield has shown that a sufficient condition for a stable network is that its synaptic weights are symmetric, i.e., w_ji = w_ij with w_jj = 0, so that the system has a Lyapunov function (Hopfield, 1982). Here, the weights are derived from the energy function, and the neurons' states transition according to local information (i.e., feedback) until a steady state is reached (Burke and Ignizio, 1992).
After the NN model was formulated, Hopfield and Tank (1985) tried to solve TSPs on a hardware circuit as illustrated above, and later they (Tank and Hopfield, 1986) extended the concept to deal with LP problems. However, the model lacked the material technologies for further development (Hopfield, 1988). Most later works are simulated on digital computers rather than implemented in hardware. From a practical viewpoint, HNNs have been applied to many areas of MP problems, and we organize them in a systematic way. Interested readers can refer to Burke and Ignizio (1992), Cichocki and Unbehauen (1993), Klimasauskas (1989), Looi (1992), Sharda (1994), Smith (1999), Widrow et al. (1994), and Zhang (2000) for details.
2.1. Classifications of Hopfield networks
An HNN is a recurrent NN with a symmetric synaptic connection pattern, for which there is a Lyapunov function for the activity dynamics (Hopfield, 1982, 1984). In the original HNN model, the state of the system evolves from any initial state to a final state that is a (local) minimum of the Lyapunov function. Based on their output functions, HNNs can be classified into two popular forms: discrete and continuous-time models.
2.1.1. Discrete Hopfield networks
The actions of the neurons in a discrete (discrete-time) HNN can be illustrated as

y_j(k+1) = \psi\left[\sum_{i=1}^{n} w_{ji} x_i(k) - \theta_j\right], \quad j = 1, \ldots, n,

as in Fig. 2, where through the unit delay z^{-1} we have y_j(k+1) = \psi[v_j(k+1)] with v_j(k+1) = \sum_{i=1}^{n} w_{ji} x_i(k) - \theta_j. Here \theta_j is an externally applied threshold, the x_i are the input neurons, and v_j(k) = v_j(kT_s), where k = 0, 1, \ldots denotes discrete time and T_s is the sampling period. In general, we assume the sampling period is normalized to unity (T_s = 1).
In the discrete HNN, the units use a bipolar output function, where the states of the units are one or zero; i.e., the output of a unit remains unchanged if its net input equals the threshold value:

x_i = 0 if \sum_j w_{ij} x_j < \theta_i; x_i = 1 if \sum_j w_{ij} x_j > \theta_i; and no change otherwise.

Here, w_{ij} is the connection weight between units i and j, and \theta_i is the threshold of unit i. It is obvious that the energy function is non-increasing with the updating of the neurons' states (Hopfield, 1982). In general, the property of a recurrent network can be described by an energy function, which is used to establish the stability of the recurrent network.
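As a concrete numerical sketch of the asynchronous threshold update above, the fragment below (the weight matrix, thresholds, and initial state are arbitrary example values, not taken from the paper) updates one unit at a time and checks that the energy never increases:

```python
import numpy as np

def energy(x, W, theta):
    # E = -1/2 * x^T W x + theta^T x (discrete Hopfield energy,
    # written with thresholds theta in place of the bias I = -theta)
    return -0.5 * x @ W @ x + theta @ x

def update(x, W, theta, i):
    # Asynchronous threshold update of unit i; no change when input equals theta_i
    s = W[i] @ x
    if s > theta[i]:
        x[i] = 1.0
    elif s < theta[i]:
        x[i] = 0.0
    return x

# Small symmetric weight matrix with zero diagonal (example values)
W = np.array([[ 0., 1., -2.],
              [ 1., 0.,  1.],
              [-2., 1.,  0.]])
theta = np.zeros(3)
x = np.array([1., 0., 1.])

E_prev = energy(x, W, theta)
for _ in range(10):                     # sweep until a stable state
    for i in range(3):
        x = update(x, W, theta, i)
        E = energy(x, W, theta)
        assert E <= E_prev + 1e-12      # energy is non-increasing
        E_prev = E
print(x, energy(x, W, theta))
```

With symmetric weights and zero diagonal, each single-unit flip can only lower the energy, which is exactly the Lyapunov argument discussed in this section.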
Hopfield (1982) presented an energy function to demonstrate its stability as

E_{DHNN} = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} x_i w_{ij} x_j - \sum_{i=1}^{n} x_i I_i,

and showed that this energy function is a Lyapunov function, which can lead the final state into a stable state if the neurons are repetitively updated one at a time in any order. On the other hand, the local minima of the energy function correspond to the energies of the stored patterns. According to the work of Bruck and San (1988), the network can converge to a minimum from any initial state. This characteristic is thus helpful for applications. In addition, one famous NN for optimization, the Boltzmann machine, is an extension of the discrete HNN. It merely replaces the deterministic local search dynamics of the HNN with randomized local search dynamics (Kumar, 2005; Kurita and Funahashi, 1996), but it has only been applied to a small number of problems.

Fig. 1. Basic structure of HNNs.

Fig. 2. Discrete-time Hopfield structure.
2.1.2. Continuous Hopfield networks
The continuous or continuous-time HNN is a generalization of the discrete case. The common output functions utilized in these networks are the sigmoid and hyperbolic tangent functions. Fig. 3 shows the common circuit of the continuous-time HNN, where T_{c_j} = R_j C_j, j = 1, ..., n, is the integration time constant of the jth neuron, and \theta_j is an externally applied threshold. The integrator can be realized by an operational amplifier, a capacitor C_j, and a resistor R_j. Here, c_j > 0 is called the forgetting factor of the integrator, which forces the internal signal v_j to zero for a zero input. To formulate the network, we establish a differential equation for the activation potential v_j(t) as

T_{c_j} \frac{dv_j}{dt} = -c_j v_j + \sum_{i=1}^{n} w_{ji} x_i - \theta_j.

The output of the neuron is
Fig. 3. Continuous-time Hopfield structure.


given by y_j = \psi(v_j), which is a continuous output function such as a sigmoid function.
Hopfield (1984) proposed an energy function for the continuous HNN as

E_{CHNN} = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} x_i w_{ij} x_j - \sum_{i=1}^{n} x_i I_i + \sum_{i=1}^{n} \frac{1}{R_i}\int_{0}^{x_i} g_i^{-1}(x)\,dx,

where the function g_i^{-1}(x_i) is a monotone increasing function. Hopfield and Tank (1985) and Tank and Hopfield (1986) introduced the continuous HNN to solve TSP and LP problems. Afterwards, many researchers implemented HNNs to solve optimization problems, especially MP problems. Hence, the continuous model is our major concern. In general, the continuous model is superior to the discrete one in terms of the local minimum problem because of its smoother energy surface. Hence, the continuous HNN has dominated the solving techniques for optimization problems, especially for combinatorial problems.
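The continuous dynamics above can also be simulated numerically. The sketch below uses simple Euler integration with a sigmoid output function; the network size, weights, thresholds, and step size are arbitrary choices for illustration, not values from any paper surveyed here:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Example network: symmetric weights with zero diagonal
W = np.array([[0.0, 0.5],
              [0.5, 0.0]])
theta = np.array([0.1, -0.1])   # external thresholds
Tc = 1.0                        # integration time constant
c = 1.0                         # forgetting factor
dt = 0.01                       # Euler step

v = np.zeros(2)                 # internal activation potentials
for _ in range(5000):
    x = sigmoid(v)                        # neuron outputs
    dv = (-c * v + W @ x - theta) / Tc    # the RC charging equation
    v += dt * dv

x = sigmoid(v)
# At equilibrium dv/dt = 0, i.e. c*v = W x - theta
assert np.allclose(c * v, W @ x - theta, atol=1e-3)
```

The final assertion checks that the trajectory has settled into a steady state of the differential equation, which is the sense in which the network "solves" its energy minimization.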
2.2. Methods of Hopfield networks
HNNs manipulate MP problems through approximation (Hopfield and Tank, 1985). They emulate the behavior of neurons through an electronic circuit. After the circuit is established with many similar sets of components, parallel computation can be realized for an on-line solution. These special characteristics are beneficial for real-time optimization and have been applied to solve optimization problems such as LP, NLP, quadratic programming, MILP, and multi-objective linear programming (Bouzerdoum and Pattison, 1993; Cichocki and Unbehauen, 1993; Ham and Kostanic, 2001; Hopfield and Tank, 1985; Kennedy and Chua, 1988; Looi, 1992; Rodríguez-Vázquez et al., 1988, 1990; Shih et al., 2004; Smith et al., 1998; Smith, 1999; Wang, 1992, 1996; Xia, 1996).

From the computational aspect, the operation of an HNN for an optimization problem manages a dynamic system characterized by an energy function, which is the combination of the objective function and the constraints of the original problem. Because there are some similarities between this function and the formulation of an NLP problem, many common techniques of NLP can be exploited. In this way, three common techniques for handling NLP problems, i.e., penalty functions, Lagrange functions (or augmented Lagrange functions), and primal and dual functions, are adapted. Firstly, penalty function methods use penalty parameters to combine the constraints and the objective function and then construct an energy function to be minimized. Secondly, Lagrange function (or augmented Lagrange function) methods take advantage of Lagrange multipliers to construct the energy function to be operated on. Thirdly, primal and dual methods are made up of a primal function and a dual function and try to reach the minimum gap between the two functions. These three techniques are suitable for solving various MP problems such as LP, NLP, and MILP. The following sections review the literature on HNNs from this technical standpoint.
3. Penalty function methods
The penalty function method is a popular technique for optimization, in which it is used to construct a single unconstrained problem or a sequence of unconstrained problems. A search method can then be adopted to look for the solution in the decision space, and the steepest descent approach is the most popular technique for obtaining the search direction (Cichocki and Unbehauen, 1993).

In our survey, many NN approaches utilize a rather basic penalty function to build the energy function and usually converge to a stable point. In the following, we illustrate the essential parts by the classes of LP, NLP, and MILP problems, respectively.

3.1. Linear programming problems


The common LP problems are illustrated as the primal LP form (LPF) and the dual LP form (DLPF), which can be formulated as

\min f(x) = c^T x, \quad \text{s.t. } Ax \ge b \;\; (g_j(x) \ge b_j,\; j = 1, 2, \ldots, m) \text{ and } x \ge 0, \qquad \text{(LPF)}

where x, c \in R^{n\times 1}, A \in R^{m\times n}, and b \in R^{m\times 1};

\max g(y) = b^T y, \quad \text{s.t. } A^T y \le c \;\; (f_i(y) \le c_i,\; i = 1, 2, \ldots, n) \text{ and } y \ge 0, \qquad \text{(DLPF)}

where c \in R^{n\times 1}, A \in R^{m\times n}, b \in R^{m\times 1}, and y \in R^{m\times 1} is the vector of dual variables.
Tank and Hopfield (1986) first proposed an NN structure to solve the LP problem, and its energy function can be defined as

E_{TH}(x) = f(x) + \sum_{j=1}^{m} g_j(x)^2 + \sum_{i=1}^{n} \frac{x_i^2}{2 s R_{ii}},

where R is an n × n diagonal matrix, s > 0 is a penalty parameter, and g = [g_1, g_2, \ldots, g_m]^T is a penalty vector applied when g_j(x) < b_j. Here, the penalty parameter s must be sufficiently large for the last term to be negligible. However, this assumption might make their model unreliable for solving the LP problem (Kennedy and Chua, 1988).
To improve Tank and Hopfield's NN, Kennedy and Chua (1988) proposed an NN structure with an inexact penalty function, where the energy function is

E_{KC}(x) = f(x) + \frac{s}{2}\sum_{j=1}^{m} g_j(x)^2.

Here, s > 0 is a penalty parameter, and under the inexact penalty rule the solution converges to that of the LP as s → ∞. However, there is difficulty in choosing such a huge parameter (Cichocki and Unbehauen, 1993). Rodríguez-Vázquez et al. (1988) directly use another penalty method to transform the LP problem into an unconstrained optimization problem. Their energy function is

E_{RV}(x, a) = f(x) + a\sum_{j=1}^{m} \left|\min\{0, g_j(x) - b_j\}\right|,

where a > 0, so that the penalty term is non-negative. In addition, at each discrete-time step k we have x_i(k) = max{x_i(k), 0}. Once the feasible region is reached, the trajectory moves toward the minimal direction of the objective function. Although Rodríguez-Vázquez et al. (1990) pointed out that their network has no equilibrium point in the classical sense, according to our investigation (Lan et al., 2007), their network can converge to an optimal solution of the corresponding convex programming problem from any initial state.
Maa and Shanblatt (1992) used a two-phase NN structure for solving the problem. In the first phase, t < t_1, where t_1 is randomly selected, and the structure is the same as that of Kennedy and Chua. The stability of the network depends on the choice of the penalty parameter s and the time parameter t_1, but it is not easy to choose t_1. If the initial solution does not fall into the feasible region, then the solution does not converge to the stable state in the end. In addition, Chong et al. (1999) analyzed a class of NN models for solving LP problems by dynamic gradient approaches based on exact non-differentiable penalty functions. They developed an analytical tool that helps the systems converge to a solution within finite time.
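As a minimal sketch of the penalty idea in this section, the fragment below minimizes E(x) = c^T x + (s/2) Σ_j min{0, a_j^T x − b_j}² by plain gradient descent on a tiny LP. The instance, step size, and penalty weight are arbitrary example choices, and the quadratic penalty on violations is one common variant, not the exact circuit dynamics of any specific paper cited here; the projection x = max{x, 0} mirrors the Rodríguez-Vázquez update:

```python
import numpy as np

c = np.array([1.0, 2.0])          # objective: min x1 + 2*x2
A = np.array([[1.0, 1.0]])        # constraint: x1 + x2 >= 1
b = np.array([1.0])

s = 100.0                         # penalty parameter (larger -> closer to the LP optimum)
mu = 0.005                        # learning rate
x = np.zeros(2)

for _ in range(20000):
    viol = np.minimum(0.0, A @ x - b)      # negative where Ax >= b is violated
    grad = c + s * (A.T @ viol)            # gradient of c^T x + (s/2)*||viol||^2
    x = np.maximum(x - mu * grad, 0.0)     # descent step + projection onto x >= 0

print(x)  # close to the LP optimum (1, 0), biased slightly by the finite penalty
```

The bias of the stable point away from the exact optimum (here x1 settles at 0.99 rather than 1) is precisely the "inexact penalty" effect discussed above: it vanishes only as s → ∞.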


3.2. Non-linear programming problems


Since non-linearity commonly exists everywhere, NLP problems have a huge range of applications in the real world and have drawn much attention in both theoretical and practical respects. Although NLP problems are computationally complex, NN approaches with a penalty function offer faster convergence (Hagan et al., 1996). A general NLP form is shown as (Bazaraa et al., 2006):

\min f(x), \quad \text{s.t. } h_i(x) = 0 \text{ for } i = 1, 2, \ldots, l; \;\; g_j(x) \le 0 \text{ for } j = 1, 2, \ldots, m; \;\; x \in X, \qquad \text{(NLP)}

where f, g_1, ..., g_m, h_1, ..., h_l are functions defined on E^n, X is a subset of E^n, and x is a vector of n components x_1, ..., x_n. In general, the penalty function f_p(x) is represented as

f_p(x) = f(x) + \sum_{j=1}^{m} K_j \phi_j(g_j(x)) + \sum_{i=1}^{l} K_i \phi_i(h_i(x)),

where K_j and K_i are commonly referred to as penalty parameters, and \phi_j and \phi_i are the penalty function terms. For instance, a typical NN for solving the NLP problem can be written as

f_p(x) = f(x) + \sum_{j=1}^{m} K_j (\max\{0, g_j(x)\})^b + \sum_{i=1}^{l} K_i |h_i(x)|^a / a,

where a, b \ge 0 and the penalty parameters satisfy K_j, K_i \ge 0.

For NLP problems, the choice of the penalty parameters is rather critical. In our survey, the penalty function methods are usually realized in one of the two following ways:

(i) Penalty parameters K_i and K_j are increased during network training.
(ii) Penalty parameters K_i and K_j are selected as sufficiently large positive numbers so that the unconstrained problem closely approximates the NLP problem.

Many techniques use the steepest descent approach to search for solutions with the NN dynamic system through the updating equation x(k+1) = x(k) - \mu(\partial f_p(x)/\partial x), where \mu > 0 is the learning rate. To prevent the solution from being trapped in a local minimum, a noise term can be added to help it escape toward the global optimum. This dynamic update equation can be written as x(k+1) = x(k) - \mu(\partial f_p(x)/\partial x) + \sigma^2 \xi, where \xi is a noise vector randomly generated from a standard normal distribution (zero mean and unit variance) and \sigma^2 is a parameter related to convergence (Cichocki and Unbehauen, 1993).

Silva et al. (2005) introduced another recurrent NN to handle optimization in which the constrained terms appear in different stages without any hindrance from each other. At the same time, Effati and Baymain (2005) presented a new recurrent NN model, which is simpler and more intuitive for solving convex NLP problems. They used a constraint set instead of penalty parameters to overcome the difficulty of selection. Their work is valuable for tackling NLP problems.

3.3. Mixed-integer linear programming problems

MILP problems have a number of applications in the real world. The most common description is with 0-1 variables, which can be represented as (Watta and Hassoun, 1996):

\min f(x, v), \quad \text{s.t. } h_i(x, v) = 0, \; i = 1, 2, \ldots, q; \;\; g_j(x, v) \le 0, \; j = 1, 2, \ldots, p; \;\; m_l \le x_l \le M_l, \; l = 1, 2, \ldots, n; \;\; v_k \in \{0, 1\}, \; k = 1, 2, \ldots, r, \qquad \text{(MILP)}

where f, g_1, ..., g_p, h_1, ..., h_q are functions defined on E^{n+r}, x is a vector of n components x_1, ..., x_n bounded between m_l and M_l, and v is the 0-1 integer vector of components v_1, ..., v_r.
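The noisy steepest-descent update of Section 3.2 can be sketched as follows. The toy NLP (minimize (x1 − 2)² + (x2 + 1)² subject to x1 + x2 = 0), the quadratic penalty on the equality constraint, and all parameter values are invented for the example; the annealing of the noise to zero is one simple choice of schedule, not a prescription from the surveyed papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                         # objective
    return (x[0] - 2.0)**2 + (x[1] + 1.0)**2

def h(x):                         # equality constraint h(x) = 0
    return x[0] + x[1]

K = 50.0                          # penalty parameter (way (ii): large fixed value)
mu = 0.002                        # learning rate
sigma2 = 0.001                    # noise magnitude, annealed to zero below
N = 30000

def grad_fp(x):
    # gradient of f_p(x) = f(x) + K * h(x)^2
    g_obj = np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])
    g_pen = 2.0 * K * h(x) * np.array([1.0, 1.0])
    return g_obj + g_pen

x = np.zeros(2)
for k in range(N):
    noise = sigma2 * (1.0 - k / N) * rng.standard_normal(2)
    x = x - mu * grad_fp(x) + noise      # x(k+1) = x(k) - mu * grad + noise

print(x, h(x))   # near the constrained minimum (1.5, -1.5), with a small residual h(x)
```

With the finite K the stable point misses the exact constrained minimum by a residual of order 1/K, which again illustrates why the choice of penalty parameter is called critical in the text.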
Since a problem with integer variables is difficult to solve by an approximate technique, development here has been rather slow. Watta and Hassoun (1996) used a coupled gradient network approach for solving the MILP problem. Their energy function E_WH(x, v) is constructed by summing the objective function f(x, v) and a penalty function P(x, v) on the constraints. The dynamic update is carried out by steepest gradient descent with a fixed increment.

We can in general separate the constraints into inequality equations, equality equations, and bounded-variable constraints as P(x, v) = G(x, v) + H(x, v) + V(v). The first penalty term on the right-hand side is G(x, v) = \sum_{j=1}^{p} \vartheta(g_j(x, v)), where \vartheta(\xi) = \xi^2 for all \xi > 0 and \vartheta(\xi) = 0 for all \xi \le 0. If the solution does not satisfy the inequality constraints, then this penalty takes effect. The second term H(x, v) = \sum_{i=1}^{q} h_i(x, v)^2 enforces the equality constraints. The last term V(v) = \sum_{k=1}^{r} v_k(1 - v_k) keeps the 0-1 variables feasible. Note that this penalty term is largest at v_k = 0.5 and falls to zero as v_k saturates at 0 or 1. They thus recommend an energy function as

E_{WH}(x, v) = A \cdot f(x, v) + B \sum_{j=1}^{p} \vartheta(g_j(x, v)) + C \sum_{i=1}^{q} h_i(x, v)^2 + D \sum_{k=1}^{r} v_k(1 - v_k).

Table 1
Penalty function methods for optimization problems.

Proposed method | Problem type | Activation function | Penalty | Learning rate | Initial state
Tank and Hopfield (1986) | LP | Sigmoid function | s must be sufficiently large | N/A | N/A
Kennedy and Chua (1988) | LP and NLP | Sigmoid function | s > 0 and will be convergent as s → ∞ | C is a positive diagonal matrix | N/A
Rodríguez-Vázquez et al. (1988, 1990) | LP and NLP | x_i(k+1) = max{x_i(k), 0} | a > 0 and S_i = 1 if g_i(x) > 0, S_i = 0 if g_i(x) ≤ 0 | C is a positive diagonal matrix | Any initial state
Maa and Shanblatt (1992) | LP and NLP | Sigmoid function | e is a small positive constant and s > 0 | 0 < \mu_i < 1 | Feasible variables
Watta and Hassoun (1996) | MILP | Sigmoid function | A, B, C, D > 0 | 0 < g_x, g_v < 1 | A random initial condition near the center of phase space
Aourid and Kaminska (1996) | MILP | Sigmoid function | a > 0, \mu > a/(2\lambda_max), \lambda = the eigenvalues of the constrained vector | N/A | Feasible variables
Chong et al. (1999) | LP | Sigmoid function | Parameters chosen from a selection rule | N/A | Feasible variables
Silva et al. (2005) | NLP | Sigmoid function | N/A | N/A | Feasible variables
Effati and Baymain (2005) | NLP | N/A | Parameters chosen from the constrained differential equations | N/A | Feasible variables

Note: N/A = not available.


The energy function E_WH(x, v) uses two activation functions to fit the constraints:

x_l = \rho_l(h_l^x) = m_l + \frac{M_l - m_l}{1 + e^{-\lambda_l h_l^x}} \quad \text{and} \quad v_k = \sigma_k(h_k^v) = \frac{1}{1 + e^{-\lambda_k h_k^v}},

where \lambda_l and \lambda_k are the activation function slopes. Based on these functions, it is crucial to select a good initial point. If the origin is chosen, then the solution is easily restricted by the boundaries of the decision variables to a non-optimal solution. In addition, the computational load is much heavier due to the presence of many parameters, i.e., A, B, C, D, \rho_l (l = 1, 2, \ldots, n), \sigma_k (k = 1, 2, \ldots, r), g_x, g_v, \lambda_x, and \lambda_v.
Aourid and Kaminska (1996) proposed another penalty function for solving a 0-1 LP problem with inequality constraints. They transformed the general 0-1 LP problem into a concave minimization problem and then took advantage of a penalty function to transform the concave problem into an unconstrained problem. The model considers two kinds of penalty parameters for its energy function and provides a selection rule for choosing the penalty parameters of the concave function.
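The 0-1 feasibility penalty v(1 − v) used in the MILP energies above can be illustrated in isolation. The tiny instance below (a two-variable 0-1 selection with one inequality constraint) and all coefficients A, B, D and the learning rate are invented for the example; it is a sketch of the general coupled-gradient idea, not the exact dynamics of Watta and Hassoun or Aourid and Kaminska:

```python
import numpy as np

c = np.array([-3.0, -2.0])     # minimize c.v  (i.e., maximize 3*v1 + 2*v2)
A_, B_, D_ = 1.0, 10.0, 2.0    # weights on objective, inequality, and 0-1 penalties
lr = 0.01

def grad(v):
    g = max(0.0, v[0] + v[1] - 1.0)           # violation of v1 + v2 <= 1
    return (A_ * c                             # objective term
            + B_ * 2.0 * g * np.ones(2)        # gradient of B * g^2
            + D_ * (1.0 - 2.0 * v))            # gradient of D * sum v*(1 - v)

v = np.array([0.5, 0.5])
for _ in range(5000):
    v = np.clip(v - lr * grad(v), 0.0, 1.0)   # descent step, clipped to [0, 1]

print(v)   # settles at the feasible 0-1 optimum v = (1, 0)
```

Because v(1 − v) is concave, its gradient D(1 − 2v) pushes each relaxed variable away from 0.5 toward a corner of the hypercube, which is exactly the saturation behavior described in the text.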
3.4. Summary
The category of penalty function methods for HNNs was the first category used to deal with MP problems. It is easy to formulate an approximating problem from the original problem with the energy function, but the penalty parameters are difficult to pick and hard to implement in hardware. To overcome these disadvantages, several other approaches have been applied to escape locally optimal solutions, including adding a noise vector and increasing the number of training runs (El-Bouri et al., 2005; Qu et al., 2006). Although these approaches are imperfect, they still open up a new viewpoint on optimization. Furthermore, we organize the developments of penalty function methods in terms of problem type, activation function, penalty, learning rate, and initial state, as shown in Table 1, in order to understand how these parts are defined and utilized.
4. Lagrange multiplier related methods
Similar to the penalty function methods, the Lagrange multiplier and augmented Lagrange multiplier methods merge an objective function and constraints into an energy function of the target networks. This category uses two kinds of dynamic structures to handle the real variables and the Lagrange multipliers (dual variables). We review these methods for solving LP, NLP, and MINLP problems, respectively. Afterwards, we organize the essential parts of this category in Table 2 for the benefit of future research.
4.1. Linear programming problems
Aside from some earlier works, Zhang and Constantinides (1992) proposed a Lagrange method and an augmented Lagrange method for solving LP problems through HNNs. Their energy functions with Lagrange and augmented Lagrange multipliers are listed as

E_L(x, \lambda) = f(x) + \sum_{j=1}^{m} \lambda_j g_j(x)

and

E_{aL}(x, \lambda, K) = f(x) + \sum_{j=1}^{m} \lambda_j g_j(x) + \frac{1}{2}\sum_{j=1}^{m} K_j |g_j(x)|^2,

where the constraints should be modified to the form g_j(x) = 0, and \lambda and K are the Lagrange multiplier vector and positive penalty parameters, respectively. Both energy functions utilize two types of terms to fit the real and dual variables, resulting in a longer computational time. It can be proven that the addition of the quadratic penalty term in E_aL increases the positive definiteness of the Lagrangian's Hessian matrix (Gill et al., 1981). Furthermore, if the coefficients in K are sufficiently large, then the Hessian matrix of the Lagrangian is forced to have all eigenvalues greater than zero. From the standpoint of implementation, this characteristic is of great importance since the solution can be sought iteratively.

Gill et al. (1981) mentioned that if a function is convex, then gradient-based search methods are guaranteed to converge to a local minimum. In addition, Zhang and Constantinides (1992) also suggested that the penalty parameter K be no more than 5 for convergence. According to our experience (Shih et al., 2004), K should be a very small number, e.g., 0.01, in order to obtain an optimal solution.
Shih et al. (2004) introduced an augmented Lagrange multiplier method to solve multi-objective and multi-level problems based on the LP approach. Its energy function E_S(x, λ) is

E_S(x, λ) = c^T x + λ^T (Ax − b) − (1/2) λ^T λ + (K/2)(Ax − b)^T (Ax − b).   (9)

The function includes penalty parameters, a Lagrange multiplier and a regularization term. Each iteration requires the two steepest gradients, with respect to x and λ, for the direction of searching for the stable point, thus obtaining a rather close solution.
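The two-gradient iteration on E_S can be made concrete with a minimal discrete-time sketch, descending in x and ascending in λ. The LP instance, step size mu, penalty K, and iteration count below are our own illustrative assumptions, not values from Shih et al. (2004):

```python
import numpy as np

def augmented_lagrange_hnn(c, A, b, K=10.0, mu=0.01, iters=20000):
    """Gradient dynamics on E_S(x, lam) = c'x + lam'(Ax - b)
    - 0.5*lam'lam + (K/2)||Ax - b||^2: descent in x, ascent in lam."""
    n, m = A.shape[1], A.shape[0]
    x = np.zeros(n)
    lam = np.zeros(m)
    for _ in range(iters):
        r = A @ x - b                           # constraint residual
        grad_x = c + A.T @ lam + K * (A.T @ r)  # d E_S / d x
        grad_lam = r - lam                      # d E_S / d lam (incl. -0.5*lam'lam)
        x = x - mu * grad_x                     # steepest descent in x
        lam = lam + mu * grad_lam               # steepest ascent in lam
    return x, lam
```

At the stationary point the residual Ax − b equals λ and shrinks as K grows, so the dynamics deliver a close rather than exact solution, matching the remark above.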
4.2. Non-linear programming problems
The Lagrange multiplier methods are commonly exploited in solving NLP problems. The NN approach uses Lagrange multipliers to obtain its energy function, with a Lagrange multiplier vector λ, as f_L(x, λ) = f(x) + Σ_{i=1}^{l} λ_i h_i(x), where x ∈ R^{n×1} and λ = (λ_1, λ_2, ..., λ_l)^T ∈ R^{l×1}. The solving procedure then utilizes the dynamic system of equations through the steepest descent approach (Ham and Kostanic, 2001). If the objective function is non-convex, the procedure is easily trapped in a local minimum region.

Table 2
Lagrange multiplier related methods for optimization problems.

Proposed method | Problem type | Activation function | Penalty | Learning rate | Initial state
Zhang and Constantinides (1992) | LP and NLP | N/A | 0 < c < 5 | N/A | Any initial state
Gong et al. (1997) | NLP | N/A | N/A | ρ(t) = ρ0/(1 + αt) or ρ(t) = ρ0·exp(−αt) | Usually the origin point
Wu and Tam (1999) | NLP | Sigmoid function | k > 0 | ρ(t) = ρ0·exp(−αt), where ρ0 is large and α is positive, for the Lagrange multiplier vector; μ > 0 for the variables | Any initial state
Walsh et al. (1998) | MINLP | A hard-limit function for discrete variables, and a sigmoid function for continuous variables | βj > 0, and σi is a scaling factor known as the slope of continuous neuron i | 0 < μc, μλ < 1 | N/A

Note: N/A = not available.

Please cite this article in press as: Wen, U.-P. et al., A review of Hopeld neural networks for solving mathematical ..., European Journal of
Operational Research (2008), doi:10.1016/j.ejor.2008.11.002


In addition, if the Lagrange term is not guaranteed to be a convex function, the trajectory may exhibit oscillations near the local minimum. Hence, Ham and Kostanic (2001) described that a noise factor α can reduce the oscillation as λ(k+1) = λ(k) + μ_λ[∂f_L(x(k), λ(k))/∂λ − αλ(k)], where 0 ≤ α ≤ 1.
Gong et al. (1997) provided a modified Lagrange method for solving the convex optimization problem. The network dynamics demonstrate that the equilibrium point satisfies the Karush-Kuhn-Tucker (KKT) conditions of the original problem. They also provide a rule for selecting the learning rate as ρ(t) = ρ0/(1 + αt) or ρ(t) = ρ0·exp(−αt), where ρ0 is a constant and α is a positive parameter controlling the learning rate. Wu and Tam (1999) provided a novel NN model for solving the quadratic programming problem through a Lagrange multiplier.
The other relevant form is the augmented Lagrange multiplier method with extra penalty terms, f_aL(x, λ, K) = f(x) + Σ_{i=1}^{l} λ_i h_i(x) + Σ_{i=1}^{l} K_i [h_i(x)]², where λ and K are the vectors of Lagrange multipliers and penalty parameters, respectively (Ham and Kostanic, 2001). The algorithm forms a different optimization problem at every iterative step, and it is closely related to the so-called QP-based projected Lagrange method (Gill et al., 1981). For simplicity, the augmented Lagrange multiplier method can be extended to NLP problems with inequality constraints of the form:

f_aL(x, λ, K) = f(x) + Σ_{j=1}^{m} λ_j max{0, g_j(x)} + (K/2) Σ_{j=1}^{m} [max{0, g_j(x)}]²,   (10)

where λ and K are the vectors of Lagrange multipliers and penalty parameters in R^{m×1}, respectively. A compact form can be expressed as (Ham and Kostanic, 2001):

f_aL(x, λ, K) = f(x) + Σ_{j=1}^{m} S_j [λ_j g_j(x) + (K/2) g_j(x)²],   (11)

where S_j = 1 if g_j(x) > 0, and otherwise S_j = 0.


Eq. (11) is close to the structure proposed by Rodrguez-Vzquez et al. (1988) and can nd an optimal solution with x and k.
However, this solution is sensitive to the values of Lagrange multipliers and it must take a considerable number of iterations to be
convergent (Gill et al., 1981).
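The equality-constrained augmented Lagrangian above lends itself to a classic method-of-multipliers sketch: minimize f_aL over x with λ fixed, then take a multiplier ascent step. The toy NLP, penalty value, and step sizes below are our own illustrative assumptions, not taken from the cited works:

```python
import numpy as np

# Toy NLP: min (x1 - 2)^2 + (x2 - 1)^2  s.t.  h(x) = x1 + x2 - 1 = 0.
# Known optimum: x* = (1, 0), lambda* = 2 (from grad f + lambda * grad h = 0).

def method_of_multipliers(K=10.0, outer=25, inner=2000, step=0.02):
    """Alternate minimization of f_aL(x, lam, K) = f(x) + lam*h(x) + K*h(x)^2
    over x (steepest descent) with a multiplier ascent step on lam."""
    x = np.zeros(2)
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):                    # inner: descent on x
            h = x[0] + x[1] - 1.0
            grad_f = np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
            x -= step * (grad_f + (lam + 2.0 * K * h) * np.array([1.0, 1.0]))
        lam += 2.0 * K * (x[0] + x[1] - 1.0)      # outer: multiplier update
    return x, lam
```

On this instance each outer step contracts the multiplier error geometrically (by roughly 1/(1 + 2K)), which is why moderate K already gives fast convergence while avoiding the ill-conditioning of a pure penalty method.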
4.3. Mixed-integer non-linear programming problems

Walsh et al. (1998) introduced an augmented Lagrange multiplier function method to solve mixed-integer non-linear programming (MINLP) problems. Their form is similar to that in Watta and Hassoun (1996). They solved large temporal unit commitment problems in the power system. The MINLP problem is illustrated as

min f(x, v),
s.t. h_i(x, v) = 0, i = 1, 2, ..., q,
x_l ≥ 0, l = 1, 2, ..., n,
v_k ∈ {0, 1}, k = 1, 2, ..., r,   (MINLP)

where f, h_1, ..., h_q are functions defined on E^{n+r}, x is a vector of n components x_1, ..., x_n, and v is a 0-1 integer vector of components v_1, ..., v_r. They used an augmented HNN with two sets of neurons: the continuous neurons x_l, l = 1, 2, ..., n, and the binary neurons v_k, k = 1, 2, ..., r.

The activation function g_c of a continuous neuron is the sigmoid function V_cij = w_c(U_cij) = (1/2)(1 + tanh(kU_cij)), where k is a scaling factor for its slope in the neuron c_ij, and V_cij and U_cij are the respective output and input. Additionally, the activation function g_d for a discrete neuron d_ij is V_dij = w_d(U_dij) = 1 if U_dij > 0, and V_dij = w_d(U_dij) = 0 otherwise. Here, V_dij and U_dij are the output and input, respectively. All neurons have a bias input, I_cij for the continuous neuron c_ij and I_dij for the discrete neuron d_ij, respectively.
The matrix T gives the standard interconnections among all neurons, continuous and discrete. Here, T_dij→ckm is the connection from the discrete neuron d_ij to the continuous neuron c_km. In addition, a new form of interconnection, the matrix W, between neuron pairs is introduced; for example, W_ij→km is the connection from the neuron pair (c_ij, d_ij) to the neuron pair (c_km, d_km). Their energy function of the augmented HNN is then proposed as

E_walsh = −(1/2) Σ_{i,j} Σ_{k,m} T_ckm→cij V_cij V_ckm − (1/2) Σ_{i,j} Σ_{k,m} T_dkm→dij V_dij V_dkm − Σ_{i,j} Σ_{k,m} T_dkm→cij V_cij V_dkm − Σ_{i,j} I_cij V_cij − Σ_{i,j} I_dij V_dij − (1/2) Σ_{i,j} Σ_{k,m} W_km→ij V_dij V_cij V_dkm V_ckm.   (12)
It is proven that Eq. (12) is (locally) minimized for both the continuous variable outputs V_cij and the discrete variable outputs V_dij. Since the activation function of the discrete neuron is a hard-limit function, it easily drops into a local optimum. If the discrete variables are selected to be zero, then the binary variables are not modified in later iterations. Moreover, the continuous activation function takes advantage of a general sigmoid function, tanh (output = (e^input − e^−input)/(e^input + e^−input)), whose output lies between −1 and 1. Thus, the continuous variables do not stably converge to real variables that are greater than 1 or less than −1.
Dillon and O'Malley (2002) proposed another augmented HNN to solve MINLP problems. Their structure requires many variables and parameters, which adds much computational burden. Hence, the proposed network may be inappropriate for applications.
4.4. Summary

After surveying the above developments, we find that the parameter settings for the Lagrange multipliers and penalties, as well as the learning rates, have major influences on the convergence and stability of HNNs. The Lagrange multiplier and augmented Lagrange multiplier methods require more variables and parameters than do the penalty function methods. This increases the computational complexity and is difficult to control, but the structure can provide a precise solution. Furthermore, we also collect the characteristics of this category of NN approaches in terms of problem type, activation function, penalty, learning rate, and initial state, as shown in Table 2, so as to understand in what way the key parts are involved.

5. Primal and dual methods


The primal and dual methods provide another means to deal with MP problems. According to the Theorem of Duality, the primal solution is equal to the dual solution when an optimal solution is reached in LP (Bazaraa et al., 2006). Some researchers therefore rely on the primal and dual rules to construct the energy function. However, the function becomes complicated if one considers both primal and dual variables, so the literature only discusses small-sized problems (Wang, 1997, 1998; Xia and Wang, 1995; Xia, 1996). This section presents the networks for solving LP, network flows, and NLP problems. In addition, we also aggregate some essential parts of HNNs in this category for the benefit of future studies.
5.1. Linear programming problems
Xia and Wang (1995) first used bounded variables to construct a new NN approach to solve the LP problem with no penalty parameters. They suggested that the equilibrium point is the same as the exact solution when simultaneously solving the primal problem and its dual problem. Hence, they defined an energy function for solving LP problems as E_XW(x, y):

E_XW(x, y) = (1/2)(c^T x − b^T y)² + (1/2) x^T (x − |x|) + (1/2) y^T (y − |y|) + (1/2)‖Ax − b‖² + (1/2)(A^T y − c)^T [(A^T y − c) + |A^T y − c|].   (13)
Their NN approach has three important advantages: (i) avoiding a significant number of parameters; (ii) escaping from infeasible solutions; and (iii) being able to estimate the duality gap indirectly. Afterwards, Xia and Wang (1998) showed many NN structures, including penalty and primal-dual forms, for solving optimization problems. They proved that the approach improves the parameter selection of Maa and Shanblatt (1992) and demonstrated that it can be used for general MP problems.

Recently, Malek and Tari (2005) presented two new methods to solve the primal LP problem and found the optimal solution for both the primal and dual LP problems. They claimed that their methods give better solutions for the dual variables and better optimal solutions. In addition, they realized that the new network can solve LP problems with efficient convergence within a finite time.
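The spirit of Eq. (13), a single energy that vanishes exactly at a primal-dual optimal pair, can be illustrated with a small merit function of our own construction; the term-by-term form of Xia and Wang's energy differs in detail, and the LP instance and step size are illustrative assumptions:

```python
import numpy as np

def primal_dual_lp(c, A, b, mu=0.01, iters=50000):
    """Gradient descent on a convex merit function that is zero exactly
    when x solves min c'x s.t. Ax = b, x >= 0 and y solves the dual
    max b'y s.t. A'y <= c (zero duality gap + feasibility penalties)."""
    n, m = A.shape[1], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        gap = c @ x - b @ y                    # duality-gap term
        r = A @ x - b                          # primal equality residual
        v = A.T @ y - c                        # dual feasibility residual
        gx = gap * c + A.T @ r + 2.0 * np.minimum(x, 0.0)
        gy = -gap * b + A @ np.maximum(v, 0.0)
        x -= mu * gx
        y -= mu * gy
    return x, y
```

Because every term is convex and non-negative, any stationary point is a global minimum with zero energy, i.e., an exact primal-dual pair, which is the property the authors exploit to avoid penalty parameters.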
5.2. Network flows

Wang (1997, 1998) proposed two primal and dual methods for solving assignment problems (AP) and shortest path routing problems (SPP). The primal AP and dual AP can be formulated, respectively, as follows:

min Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij,
s.t. Σ_{i=1}^{n} x_ij = 1, j = 1, 2, ..., n,
Σ_{j=1}^{n} x_ij = 1, i = 1, 2, ..., n,
x_ij ∈ {0, 1}, i, j = 1, 2, ..., n,   (AP)

and

max Σ_{i=1}^{n} u_i + Σ_{j=1}^{n} v_j,
s.t. u_i + v_j ≤ c_ij, i, j = 1, 2, ..., n.   (DualAP)

Here, c_ij and x_ij are the respective cost and decision variables associated with assigning entity i to position j; x_ij = 1 if entity i is assigned to position j, and u_i and v_j are defined as the dual decision variables. The energy function of the primal assignment problem is

E_w97(t, x(t)) = (ω/2){Σ_{i=1}^{n} [Σ_{j=1}^{n} x_ij(t) − 1]² + Σ_{j=1}^{n} [Σ_{i=1}^{n} x_ij(t) − 1]²} + α exp(−t/τ) Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij(t),   (14)

where ω, α, and τ are positive constants, and α exp(−t/τ) is a temperature parameter whose role is to balance the effect of cost minimization and constraint satisfaction (Wang, 1993). On the other hand, the dual AP energy function is formulated as

E_w97d(t, u(t), v(t)) = (ω/2) Σ_{i=1}^{n} Σ_{j=1}^{n} {g[u_i(t) + v_j(t) − c_ij]}² − β exp(−t/τ) Σ_{i=1}^{n} [u_i(t) + v_i(t)],   (15)

where β > 0 and g(·) is a non-negative and non-decreasing activation function given by g(s) = 0 if s ≤ 0 and g(s) > 0 if s > 0. Similar to Eq. (14), β exp(−t/τ) is a temperature parameter as well. He claimed that its computational complexity is reduced and that it can be implemented for a large-scale AP on a real-time basis.

Wang (1998) presented another similar primal-dual method for solving the SPP. Based on the edge path representation (Bazaraa et al., 2005), the SPP can be formulated as an integer LP problem for solving the minimal-cost SPP. The mathematical formulation is

min Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} c_ij x_ij,
s.t. Σ_{k=1, k≠i}^{n} x_ik − Σ_{l=1, l≠i}^{n} x_li = 1 if i = 1; 0 if i ≠ 1 and i ≠ n; −1 if i = n,
x_ij ∈ {0, 1}, i ≠ j, i, j = 1, 2, ..., n.   (SPP)

The dual SPP can be formulated as

max y_n − y_1,
s.t. y_j − y_i ≤ c_ij, i ≠ j, i, j = 1, 2, ..., n.   (Dual SPP1)

The term y_i denotes the dual decision variable associated with vertex i. Since the objective function and constraints in the dual problem involve differences of variables only, an equivalent dual SPP with n − 1 variables can be formulated by defining z_i = y_i − y_1, for i = 1, 2, ..., n. Thus, the dual SPP is reformulated as

max z_n,
s.t. z_j − z_i ≤ c_ij, i ≠ j, i, j = 1, 2, ..., n.   (Dual SPP2)

Here, z_1 ≡ 0. The value of the objective function at its maximum is still the total cost of the shortest path.

The energy function for the primal SPP can be defined as

E_w98(t, x(t)) = (ω/2) Σ_{i=1}^{n} [Σ_{k≠i} x_ik(t) − Σ_{k≠i} x_ki(t) − δ_i1 + δ_in]² + α exp(−t/τ) Σ_{i=1}^{n−1} Σ_{j=2, j≠i}^{n} c_ij x_ij(t),   (16)

where ω, α, and τ are positive constants, and δ_pq is the Kronecker delta function defined as δ_pq = 1 if p = q and δ_pq = 0 otherwise. The role of α exp(−t/τ) is explained in his previous research (Wang, 1993). Similar to the primal SPP, the dual SPP energy function is formulated as

E_w98d(t, z(t)) = (ω/2) Σ_{i=1}^{n} Σ_{j≠i} {g[z_j(t) − z_i(t) − c_ij]}² − β exp(−t/τ) z_n(t),   (17)

where ω, β, and τ are positive constants, g(·) is a non-negative and non-decreasing activation function, and β exp(−t/τ) is the temperature parameter. He proposed rules for choosing the parameters, in which ω must be large in order to expedite the convergence of the dual routing network, and β is usually set as β ≅ ωc_max to balance the effect of constraint satisfaction and objective maximization.

5.3. Non-linear programming problems

Wu et al. (1996) presented a primal-dual method to solve LP and NLP problems and demonstrated that the solution is quickly convergent with high accuracy. They defined the primal and the dual problems for quadratic programming (QP) as

min f_q(x) = (1/2) x^T A x + a^T x,
s.t. Dx = b and x ≥ 0,   (QP)

max f_q(y) = b^T y − (1/2) x^T A x,
s.t. D^T y − Ax − a ≤ 0,   (DQP)

where A is an m × m symmetric positive semi-definite matrix, D is an n × m matrix with rank(D) = m, y, b ∈ R^n, and x, a ∈ R^m.

According to the Theorem of Duality, x and y are optimal solutions to (QP) and (DQP), respectively, if and only if x and y satisfy the following constraints:

Dx = b,  D^T y − Ax − a ≤ 0,  x^T(D^T y − Ax − a) = 0.   (QP-DOP)
These models need much computational effort in the NN dynamic structure and take up much memory space in each step. Hasam and Nezam (2006) presented another primal-dual NN structure for solving LP and NLP problems. The network can obtain a robust result from any initial solution. Its major advantages include no parameter setting and low implementation costs, which are deemed to be better.
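The (QP-DOP) conditions can be checked numerically with a projected primal-dual gradient sketch; this is one standard realization rather than Wu et al.'s exact dynamics, and the QP instance, step size, and iteration count are our own illustrative assumptions:

```python
import numpy as np

def primal_dual_qp(A, a, D, b, mu=0.05, iters=5000):
    """Projected primal-dual gradient iteration for
    min 0.5 x'Ax + a'x  s.t.  Dx = b, x >= 0,
    whose fixed point satisfies the (QP-DOP) optimality conditions."""
    n, m = A.shape[0], D.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x = np.maximum(0.0, x - mu * (A @ x + a - D.T @ y))  # descent + projection
        y = y + mu * (b - D @ x)                             # ascent on the dual
    return x, y
```

At the returned fixed point, Dx = b holds, the dual slack D^T y − Ax − a is non-positive, and the complementarity condition x^T(D^T y − Ax − a) = 0 is satisfied to numerical tolerance.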
5.4. Summary

This category belongs to a discrete HNN structure, so the network can be easily implemented in hardware. However, due to its complexity, how to build a primal and dual network structure is the major concern. In addition, we see that most networks dealing with network flow problems are classified into this category. For better understanding, we organize the characteristics of the methods in terms of the problem type, activation functions, penalty, learning rate, and initial state, as shown in Table 3.
6. Remarks on Hopfield networks

This section first defines the traveling salesman problem (TSP) solved by HNNs. Some drawbacks of the approach are then discussed. Later, remarks are drawn on the use of HNNs.
Since Hopfield and Tank (1985) introduced a network model to solve the TSP, HNNs have dominated the NN approach for optimization. For the TSP, they gave the following definition. A set of n cities, A, B, C, ..., have pairwise distances of separation, d_AB, d_AC, ..., d_BC, .... The problem is to find a closed tour which visits each city once, returns to the starting city, and has a minimum total travel length. Here any individual city is indicated by the output states of a set of n neurons, and can be in any one of the n positions in the tour list. For n cities, a total of n independent sets of n neurons are needed to describe a complete tour. Hence, a total of N = n² neurons are displayed as an n × n square array for the TSP network. Since the neural outputs of the network are represented as n rows of n neurons, the N outputs are denoted by double indices V_Xj: the row subscript denotes a city name, and the column subscript the position of that city in a tour. To permit the N neurons in the TSP network to compute a solution to the problem, the network must be described by an energy function in which the lowest energy state corresponds to the best path. The space over which the energy function is minimized in this limit is the 2^N corners of the N-dimensional hypercube. Consider those corners of this space which are the local minima of the energy function (Hopfield and Tank, 1985):

E_CHNN-TSP = (A/2) Σ_X Σ_i Σ_{j≠i} V_Xi V_Xj + (B/2) Σ_i Σ_X Σ_{Y≠X} V_Xi V_Yi + (C/2)(Σ_X Σ_i V_Xi − n)² + (D/2) Σ_X Σ_{Y≠X} Σ_i d_XY V_Xi (V_{Y,i+1} + V_{Y,i−1}).   (18)

Here, A, B, C, and D are positive. The four terms in Eq. (18) are defined as follows: (i) term 1 is zero if and only if each city row X contains no more than one 1 (i.e., the rest of the entries are zero); (ii) term 2 is zero if and only if each position-in-tour column contains no more than one 1 (i.e., the rest of the entries are zero); (iii) term 3 is zero if and only if there are n entries of one in the entire matrix; and (iv) term 4 takes lower values for shorter tours. The first three terms describe the feasibility requirements, and a valid tour makes them zero (Hopfield and Tank, 1985). The last term represents the objective function of the TSP. In addition, the four parameters A, B, C, and D must be chosen by trial and error for a particular problem. Various results illustrate different strategies for the actual implementation of HNNs, such as a sigmoid function with values in the range (0, 1). Eq. (18) can be used to derive the strength of connection between pairs of neurons Xi and Yj. Since the time for the system to converge is in arbitrary units, the value of the time constant of the amplifiers in the system, τ, can be set to 1.0, and the gain u0 is a scalar factor.
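To make Eq. (18) concrete, the energy of a candidate tour matrix V can be evaluated directly. The 4-city instance and parameter values below are our own illustrative choices, and positions i ± 1 are taken modulo n, matching the cyclic tour:

```python
import numpy as np

def tsp_energy(V, d, A=500.0, B=500.0, C=200.0, D=1.0):
    """Hopfield-Tank TSP energy of Eq. (18). V[X, i] is the output of the
    neuron for city X at tour position i; d is the distance matrix."""
    n = V.shape[0]
    term1 = sum(V[X, i] * V[X, j]                       # one city per row
                for X in range(n) for i in range(n) for j in range(n) if j != i)
    term2 = sum(V[X, i] * V[Y, i]                       # one city per column
                for i in range(n) for X in range(n) for Y in range(n) if Y != X)
    term3 = (V.sum() - n) ** 2                          # exactly n entries of one
    term4 = sum(d[X, Y] * V[X, i] * (V[Y, (i + 1) % n] + V[Y, (i - 1) % n])
                for X in range(n) for Y in range(n) for i in range(n) if Y != X)
    return A / 2 * term1 + B / 2 * term2 + C / 2 * term3 + D / 2 * term4
```

For a valid permutation matrix the three feasibility terms vanish and the energy reduces to D times the tour length (term 4 counts each edge twice and carries the factor D/2); any constraint violation adds a positive penalty scaled by A, B, or C.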
Although the HNN is the most popular NN model for optimization, it suffers from some drawbacks: (i) the minimal solution does not guarantee global optimality; (ii) the artificial parameters A, B, C, and D are not easily set; and (iii) the energy function may pose many unnecessary local solutions which are difficult to avoid (Takahashi, 1998). Until now, many researchers have tried to remedy these drawbacks of HNNs, and we briefly highlight their efforts so that future studies can follow.
Wilson and Pawley (1988) first focused on the effectiveness of computation for the HNN dealing with TSPs. They examined the original model of Hopfield and Tank (1985) through six aspects. First, the parameters, i.e., the penalty parameters, city sizes, and gain u0, are rather sensitive to the operating points: changes of as little as 10% of their original values can prevent the system from converging. Second, the input current is at best randomized. Third, the distance settings (for the cities) must not all be the same, to make reorganization easy. Fourth, fixing city positions in a tour can reduce computational complexity, and the quality of the paths obtained is improved significantly. Fifth, the algorithm of Durbin and Willshaw (1987) provides global information on its initialization; their idea introduces a bias into the initial values to reflect that cities located on opposite sides of the unit square are likely to be on opposite sides of a tour. Finally, the weights of the inter-neural connections must lie within a range of values, instead of all taking the same initial value, in order to avoid degeneracy in the hardware implementation. After these modifications, HNNs can reliably solve small-size TSPs (Wilson and Pawley, 1988).
According to our survey, continuous HNNs have four deficiencies that need to be improved for solving large-size problems (Takahashi, 1998). In the following four sections, we examine these issues and try to provide some directions for future research.
6.1. Parameter setting and the initial state

How to set the parameters and how to choose an initial state are two challenges in utilizing HNNs for beginners. We first address these two parts in our remarks.
6.1.1. How to set parameters

An HNN can finally lead to an infeasible solution if the parameters are inadequately set for computation. In fact, its energy function often causes infeasible solutions in solving TSPs (Kennedy and Chua, 1988). The investigation of the inter-relationships among the parameters shows that TSPs do not have a scaling property and that only a small range of parameter combinations results in a stable solution (Kennedy and Chua, 1988). In addition, HNNs commonly take advantage of gradient and Lagrange multiplier methods for

Table 3
Primal and dual methods for optimization problems.

Proposed method | Problem type | Activation function | Penalty | Learning rate | Initial state
Xia and Wang (1995) | LP | N/A | N/A | N/A | Feasible variables
Wu et al. (1996) | LP and NLP | N/A | N/A | N/A | Feasible variables
Wang (1997) | Assignment problem (network flows) | f[u_ij(t)] = x_max/{1 + exp[−ξu_ij(t)]} or f[u_ij(t)] = ξu_ij(t) | ω is a large number, α ≥ ω/c_max (c_max is the maximum edge cost in the graph), β ≅ ωc_max, and τ must be sufficiently large | N/A | Feasible variables
Wang (1998) | Shortest path problem (network flows) | x_ij(t) = f[u_ij(t)] (primal); x_ij(t) = f[z_j(t) − z_i(t) − c_ij] (dual) | ω is a large number, α ≥ ω/c_max (c_max is the maximum edge cost in the graph), β ≅ ωc_max, and τ must be sufficiently large | N/A | Feasible variables
Malek and Tari (2005) | LP | N/A | g, g1, g2 > 0 | N/A | Feasible variables
Hasam and Nezam (2006) | LP and NLP | N/A | N/A | N/A | Any initial state

Note: N/A = not available.

solving problems. These methods always need many parameters to build an approximate function. As the size of the problem grows, its complexity increases significantly. However, most current techniques control these parameters through a trial-and-error procedure. It would be a great advance if there were a guideline for setting the parameters in the future.
Aiyer et al. (1990) analyzed the behavior of the continuous HNN based on a matrix expression and derived parameter settings for the TSP that yield a better solution. Talaván and Yáñez (2002) proposed some analytical conditions for continuous HNNs with problem sizes of up to 1000 cities. These parameters depend on the number of cities and the magnitude of their distances. Their parameter setting can be applied to other NN algorithms to improve their convergence, such as a chaotic simulated annealing algorithm (Chen and Aihara, 1995), a local minima escape algorithm (Peng et al., 1996), and a simulated annealing algorithm (Papageorgiou et al., 1998). Moreover, the setting procedure can be implemented on other energy functions (Abe, 1993; Chen and Aihara, 1995; Kamgar-Parsi et al., 1990). In the meantime, Talaván and Yáñez (2002) also suggested another setting approach to solve a generalized quadratic knapsack problem. Tan et al. (2005) recommended an enhanced parametric formulation that maps TSPs onto a continuous HNN and illustrated the results by simulation. Unfortunately, the parameter setting of NNs for combinatorial optimization has not been fully studied. However, we believe that further developments on the parameter setting of HNNs can lead to more reliable results.
6.1.2. Choosing the initial state

The choice of an initial state affects the convergence in searching for optimal solutions (Cichocki and Unbehauen, 1993; Ham and Kostanic, 2001; Peng et al., 1996). We have collected the chosen initial states and listed them in the last column of Tables 1-3. Aside from arbitrarily chosen states, some studies suggest that the initial state must be chosen in a feasible region to keep the final solution from dissipating (Cichocki and Unbehauen, 1993). On the other hand, since it is not easy to choose the state, some research even points out that the initial state must be close to the optimal solution (Maa and Shanblatt, 1992; Zak et al., 1995). No matter how it is done, there should be a systematic way to choose the initial state for networks in the future.

6.2. Infeasible solutions

As discussed before, an infeasible solution could be obtained if one inadequately sets the parameters or arbitrarily chooses the initial states. Takahashi (1998) raised the following queries about the original HNNs: (i) a solution can converge on non-optimal locally minimum solutions; (ii) a solution may converge on infeasible solutions; and (iii) solutions are very sensitive to their parameters. Thus, he provided an extended energy function to escape infeasible solutions by carefully tuning the parameters for TSPs, after which a feasible solution can be guaranteed. Furthermore, he developed a series of theories to promote a feasible solution for the extended HNN so that the solution eventually turns out to be TSP-feasible. We see that his structure is more complex than the original one, and its computational burden increases as the problem size does. It seems that Takahashi's study provides an optimistic direction for future development; at the least, a feasible solution will be obtained in the final stage from his contribution.
6.3. Local versus global optimal solutions

It is common for the solution of HNNs to be easily trapped at a local optimum (Kennedy and Chua, 1988). Various approaches have been contributed to avoid local optimal solutions (Martín-Valdivia et al., 2000; Peng et al., 1996; Qu et al., 2006), but there has been no perfect approach until now. Peng et al. (1996) improved the HNNs with a local minimum searching algorithm, e.g., gradient methods or Runge-Kutta methods, and its solution is stable at a local minimum state from any initial state. In fact, their network uses the HNN for a local search and an auxiliary network for choosing a suitable initial state. Although their development is difficult to analyze theoretically (Qu et al., 2006), it still offers a guide for solving large-scale optimization problems.

Yalcinoz and Short (1997) proposed another approach with a defined rule to solve a large-scale economic dispatching problem, and its solution can converge to a valid region. Martín-Valdivia et al. (2000) suggested a local minima escape algorithm (Peng et al., 1996) with an augmented Lagrange multiplier method to solve TSPs with 51 and 101 cities. Following this stream, many studies have tried to modify HNNs or to combine them with other heuristic algorithms so as to avoid the local optima


(Sexton et al., 1998, 1999). This direction can help HNNs to solve
large-scale problems in the future.
6.4. Computational efficiency

A noteworthy feature of HNNs is their ability for massively parallel computation (da Silva et al., 2006; Hopfield and Tank, 1985; Li, 1996; Lin et al., 2000; Pavlik, 1992; Wang, 1993). However, their complex structure for parallel computation may lead to difficulties in neuron updating (Looi, 1992; Smith, 1999). Therefore, how to design an effective parallel process is an important issue for the future.
Along with the trend of simulation procedures for solving problems on digital computers, Mérida-Casermeiro et al. (2001) proposed a new concept for updating units in a discrete HNN for solving TSPs. Two neuron updating rules, the one-path updating rule and the two-path updating rule, which are respectively similar to the two-opt and three-opt tour improvement heuristics (Croes, 1958), are supplied. Computational results with city numbers from 51 to 532 are demonstrated, and a direction to improve the computational efficiency is also discussed. Munehisa et al. (2001) presented another HNN structure that updates the weights for a number of neurons. Four test problems from the TSP library (Reinelt, 1991) are examined by the proposed structure and show rather good performance. However, larger problems take significant computational time by soft computing.
Hardware computing, on the other hand, is much faster than soft computing and is the noteworthy advantage of HNNs. Large-sized problems need more elements to be implemented in hardware. Moreover, the development of VLSI circuits has improved so tremendously that implementation will not be a problem later if the material problem can be overcome.
The HNN structure causes a logistic difficulty (solutions with several local minima) in computing (Gee and Prager, 1995). On this topic, Takahashi (1999) provided two structures for the specific problems related to the formulation. One is how to map a real-world problem onto a discrete HNN problem (Takahashi, 1999), and the other is how to translate a mathematical formulation into a continuous HNN problem (Takahashi, 1999). The first problem gives hope for future applications of HNNs if the transformation can be interpreted adequately. Although many researchers have been successful in illustrating their HNNs to solve real problems such as assignment problems, scheduling problems, shortest path problems, TSPs, and vehicle routing problems (Araújo et al., 2001; Nygard et al., 1990; Gee and Prager, 1995; Smith et al., 1996; Wang, 1996), more and more applications are expected in the future. The second problem is difficult to conquer since there is still no general network guaranteeing an optimal solution for a general MP problem. Our review collects and organizes a series of methods for solving special cases of MP problems. Some connections among these cases, and the drawbacks discussed, might be helpful for developing more general cases in the future.
We have posed four issues in dealing with the use of HNNs for optimization. These contents are practical when utilizing the networks, but we only discuss how to formulate the problems and then solve them through mathematical software on a digital computer. In this paper we have not implemented HNNs in hardware; this might be another story for later research.
7. Conclusions and future works

After reviewing the HNNs for optimization in the area of operations research, we can see that the networks solve various MP problems. After classifying the techniques of NNs into penalty functions, Lagrange multipliers, and primal-dual methods, the various programming problems can be placed under and solved by the proposed techniques. Moreover, some essential parts of the NN approaches are also organized in the three tables for ease of use by beginners.

Based on the review, we have obtained some directions for future studies. First, the computational efficiency of the HNN approaches does not perform consistently well; therefore, it is valuable to develop a more efficient NN for solving generalized problems. Second, the HNN provides another aspect for solving large-sized problems with a parallel process on a real-time basis. However, material advancement for establishing the necessary VLSI circuits is needed in the future. Third, although there are some drawbacks in utilizing HNNs, we suggest that a hybrid algorithm, combined with the special characteristics of other techniques, could enhance the networks for advanced applications in many areas (Lan et al., 2007). Fourth, we also believe that creative thinking on the network structure might lead to another tremendous advance in this area. It is hoped that this review can arouse more interest in HNNs in the future.
Acknowledgements
The authors really appreciate the National Science Council, Taiwan, ROC, for its financial support under grant number NSC 95-2221-E-032-030-MY3.
Appendix A. The definition of the Lyapunov function (Haykin, 1999)
A continuous scalar function of the state vector that satisfies certain conditions is called a Lyapunov function. A Lyapunov function proves the stability of a certain fixed point in a dynamical system or autonomous differential equation. One must be aware that the basic Lyapunov theorems for autonomous systems are a sufficient, but not necessary, tool for proving the stability of an equilibrium. Finding a Lyapunov function for a given equilibrium can be a matter of luck, and trial and error is the common method when testing candidate functions.
Haykin (1999) provided a series of theorems that define the Lyapunov function and give the following general formulation.
Theorem 1. The equilibrium state x̄ is stable if in a small neighborhood of x̄ there exists a positive definite function V(x) such that its derivative with respect to time is negative semi-definite in that region.
Theorem 2. The equilibrium state x̄ is asymptotically stable if in a small neighborhood of x̄ there exists a positive definite function V(x) such that its derivative with respect to time is negative definite in that region.
A scalar function V(x) that satisfies these requirements is called a Lyapunov function for the equilibrium state x̄.
The function V(x) is positive definite in the state space S if, for all x in S, it satisfies the following requirements:
(i) The function V(x) has continuous partial derivatives with respect to the elements of the state vector x.
(ii) V(x̄) = 0.
(iii) V(x) > 0 if x ≠ x̄.
Let x̄ = 0 be an equilibrium state of the autonomous system ẋ = f(x), and let V̇(x) = (∂V/∂x)(dx/dt) = ∇V(x)ᵀ f(x) be the time derivative of the Lyapunov function V. There exist three types of stable equilibrium:
Please cite this article in press as: Wen, U.-P. et al., A review of Hopfield neural networks for solving mathematical ..., European Journal of Operational Research (2008), doi:10.1016/j.ejor.2008.11.002
(1) Stable equilibrium: If the Lyapunov function V is locally positive definite and the time derivative of the Lyapunov function is locally negative semi-definite, i.e. V̇(x) ≤ 0 for all x ∈ S for some neighborhood S, then the equilibrium is proven to be stable.
(2) Locally asymptotically stable equilibrium: If the Lyapunov function V is locally positive definite and the time derivative of the Lyapunov function is locally negative definite, i.e. V̇(x) < 0 for all x ∈ S \ {0} for some neighborhood S, then the equilibrium is proven to be locally asymptotically stable.
(3) Globally asymptotically stable equilibrium: If the Lyapunov function V is globally positive definite and the time derivative of the Lyapunov function is globally negative definite, i.e. V̇(x) < 0 for all x ∈ Rⁿ \ {0}, then the equilibrium is proven to be globally asymptotically stable.
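As a small numerical illustration (not part of the original review), consider the scalar toy system ẋ = −x with the candidate function V(x) = x². Then V(0) = 0, V(x) > 0 for x ≠ 0, and V̇(x) = 2x·ẋ = −2x² < 0 for all x ≠ 0, so the origin falls under the third case above. A minimal sketch of this check, assuming the toy system:

```python
# Illustrative check of the Lyapunov conditions for x_dot = f(x) = -x
# with candidate V(x) = x**2 (toy system, not from the review).

def f(x):
    return -x                 # autonomous dynamics x_dot = f(x)

def V(x):
    return x * x              # candidate Lyapunov function

def V_dot(x):
    return 2.0 * x * f(x)     # chain rule: dV/dt = (dV/dx) * dx/dt = -2*x**2

samples = [s / 10.0 for s in range(-50, 51) if s != 0]
assert V(0.0) == 0.0                       # (ii) V vanishes at the equilibrium
assert all(V(x) > 0 for x in samples)      # (iii) V positive elsewhere
assert all(V_dot(x) < 0 for x in samples)  # negative definite time derivative
print("x_bar = 0 is globally asymptotically stable under V(x) = x^2")
```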
In many HNN problems, the energy function can be considered as a Lyapunov function. However, the existence of a Lyapunov function is sufficient but not necessary for stability. The Lyapunov function V(x) provides the mathematical basis for the global stability analysis of a non-linear dynamical system. Global stability analysis is much more powerful in its conclusions than local stability analysis: every globally stable system is also locally stable, but not vice versa.
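To make the energy-as-Lyapunov-function view concrete, the sketch below simulates a continuous Hopfield network by forward Euler and checks that the standard energy function never increases along the trajectory. The two-neuron network, its weights W, inputs I, initial state, and step size are illustrative assumptions, not an example taken from the review:

```python
import math

# Minimal sketch (illustrative parameters): a two-neuron continuous Hopfield
# network whose energy function acts as a Lyapunov function, i.e. it is
# non-increasing along the trajectory when W is symmetric.

W = [[0.0, 1.0], [1.0, 0.0]]   # symmetric weight matrix (required for descent)
I = [0.1, -0.2]                # external inputs (assumed values)
dt = 0.01                      # forward-Euler step size

def energy(v):
    # E(v) = -1/2 v'Wv - I'v + sum_i integral_0^{v_i} g^{-1}(s) ds,
    # with g = tanh, where the integral equals v*atanh(v) + (1/2)ln(1 - v^2).
    quad = -0.5 * sum(W[i][j] * v[i] * v[j] for i in range(2) for j in range(2))
    lin = -sum(I[i] * v[i] for i in range(2))
    leak = sum(v[i] * math.atanh(v[i]) + 0.5 * math.log(1.0 - v[i] ** 2)
               for i in range(2))
    return quad + lin + leak

u = [0.5, -0.3]                          # initial internal states (assumed)
v = [math.tanh(x) for x in u]
energies = [energy(v)]
for _ in range(500):                     # integrate du/dt = -u + Wv + I
    du = [-u[i] + sum(W[i][j] * v[j] for j in range(2)) + I[i] for i in range(2)]
    u = [u[i] + dt * du[i] for i in range(2)]
    v = [math.tanh(x) for x in u]
    energies.append(energy(v))

# The energy should be (numerically) non-increasing at every step.
assert all(b <= a + 1e-6 for a, b in zip(energies, energies[1:]))
print("energy decreased from %.4f to %.4f" % (energies[0], energies[-1]))
```

With an asymmetric W this monotone-descent guarantee is lost, which is why symmetry of the weights appears as a standing assumption in HNN stability proofs.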
References
Abe, S., 1993. Global convergence and suppression of spurious states of the Hopfield neural networks. IEEE Transactions on Circuits and Systems 40, 246–257.
Ackley, D.H., Hinton, G.E., Sejnowski, T.J., 1985. A learning algorithm for Boltzmann machines. Cognitive Science 9, 147–169.
Aiyer, S.V.B., Niranjan, M., Fallside, F., 1990. A theoretical investigation into the performance of the Hopfield model. IEEE Transactions on Neural Networks 1, 204–215.
Aourid, M., Kaminska, B., 1996. Minimization of the 0–1 linear programming problem under linear constraints by using neural networks: Synthesis and analysis. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 43, 421–425.
Araújo, F., Ribeiro, B., Rodrigues, L., 2001. A neural network for shortest path computation. IEEE Transactions on Neural Networks 12, 1067–1073.
Basheer, I.A., Hajmeer, M., 2000. Artificial neural networks: Fundamentals, computing, design, and application. Journal of Microbiological Methods 43, 3–31.
Bazaraa, M.S., Jarvis, J.J., Sherali, H.D., 2005. Linear Programming and Network Flows, third ed. John Wiley, New York.
Bazaraa, M.S., Sherali, H.D., Shetty, C.M., 2006. Nonlinear Programming: Theory and Algorithms, third ed. John Wiley, New York.
Bouzerdoum, A., Pattison, T.R., 1993. Neural network for quadratic optimization with bound constraints. IEEE Transactions on Neural Networks 4, 293–304.
Bruck, J., Sanz, J., 1988. A study on neural networks. International Journal of Intelligent Systems 3, 59–75.
Burke, L.I., Ignizio, J.P., 1992. Neural networks and operations research: An overview. Computers and Operations Research 19, 179–189.
Chen, L., Aihara, K., 1995. Chaotic simulated annealing by a neural network model with transient chaos. Neural Networks 8, 915–930.
Chong, E.K.P., Hui, S., Żak, S.H., 1999. An analysis of a class of neural networks for solving linear programming problems. IEEE Transactions on Automatic Control 44, 1995–2006.
Cichocki, A., Unbehauen, R., 1993. Neural Networks for Optimization and Signal Processing. John Wiley, New York.
Croes, G.A., 1958. A method for solving traveling salesman problems. Operations Research 6, 791–812.
da Silva, I.N., Amaral, W.C., Arruda, L.V.R., 2006. Neural approach for solving several types of optimization problems. Journal of Optimization Theory and Applications 128, 563–580.
Dillon, J.D., O'Malley, M.J., 2002. A Lagrangian augmented Hopfield network for mixed integer non-linear programming problems. Neurocomputing 42, 323–330.
Du, K.L., Swamy, M.N.S., 2006. Neural Networks in a Softcomputing Framework. Springer, London, UK.
Durbin, R., Willshaw, D., 1987. An analogue approach to the traveling salesman problem using an elastic net method. Nature 326, 689–691.
Effati, S., Baymain, M., 2005. A new nonlinear neural network for solving convex nonlinear programming problems. Applied Mathematics and Computation 168, 1370–1379.
El-Bouri, A., Balakrishnan, S., Popplewell, N., 2005. A neural network to enhance local search in the permutation flowshop. Computers and Industrial Engineering 49, 182–196.
Gee, A.H., Prager, R.W., 1995. Limitations of neural networks for solving traveling salesman problems. IEEE Transactions on Neural Networks 6, 280–282.
Gill, P.E., Murray, W., Wright, M.H., 1981. Practical Optimization. Academic Press, London.
Gong, D., Gen, M., Yamazaki, G., Xu, W., 1997. Lagrangian ANN for convex programming with linear constraints. Computers and Industrial Engineering 32, 429–443.
Hagan, M.T., Demuth, H.B., Beale, M., 1996. Neural Network Design. PWS Publishing Co., Massachusetts.
Ham, F.M., Kostanic, I., 2001. Principles of Neurocomputing for Science and Engineering. McGraw-Hill, New York.
Hasam, G.-O., Nezam, M.-A., 2006. An efficient simplified neural network for solving linear and quadratic programming problems. Applied Mathematics and Computation 175, 452–464.
Haykin, S., 1999. Neural Networks: A Comprehensive Foundation, second ed. Prentice-Hall, New Jersey.
Hopfield, J.J., 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences 79, 2554–2558.
Hopfield, J.J., 1984. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences 81, 3088–3092.
Hopfield, J.J., 1988. Artificial neural networks. IEEE Circuits and Devices Magazine, September, 3–10.
Hopfield, J.J., Tank, D.W., 1985. Neural computation of decisions in optimization problems. Biological Cybernetics 52, 141–152.
Kamgar-Parsi, Behzad, Kamgar-Parsi, Behrooz, 1990. On problem solving with Hopfield neural networks. Biological Cybernetics 62, 415–423.
Kennedy, M.P., Chua, L.O., 1988. Neural networks for nonlinear programming. IEEE Transactions on Circuits and Systems 35, 554–562.
Klimasauskas, C.C., 1989. Neural networks: A new technology for information processing. Data Base 20, 21–23.
Kohonen, T., 1982. Self-organized formation of topologically correct feature maps. Biological Cybernetics 43, 59–69.
Kumar, S., 2005. Neural Networks: A Classroom Approach. International Edition, McGraw-Hill, Singapore.
Kurita, N., Funahashi, K., 1996. On the Hopfield neural networks and mean field theory. Neural Networks 9, 1531–1540.
Lan, K.M., Wen, U.P., Shih, H.S., Lee, E.S., 2007. A hybrid neural network approach to bilevel programming. Applied Mathematics Letters 20 (8), 880–884.
Li, S.Z., 1996. Improving convergence and solution quality of Hopfield-type neural networks with augmented Lagrange multipliers. IEEE Transactions on Neural Networks 7, 1507–1516.
Lin, C.L., Lai, C.C., Huang, T.H., 2000. A neural network for linear matrix inequality problems. IEEE Transactions on Neural Networks 11, 1078–1092.
Looi, C.K., 1992. Neural network methods in combinatorial optimization. Computers and Operations Research 19 (3/4), 191–208.
Maa, C.Y., Shanblatt, M.A., 1992. Linear and quadratic programming neural network analysis. IEEE Transactions on Neural Networks 3, 380–394.
Malek, A., Tari, A., 2005. Primal–dual solution for the linear programming problems using neural networks. Applied Mathematics and Computation 167, 198–211.
Martín-Valdivia, M., Ruiz-Sepúlveda, A., Triguero-Ruiz, F., 2000. Improving local minima of Hopfield networks with augmented Lagrange multipliers for large scale TSPs. Neural Networks 13, 283–285.
Mérida-Casermeiro, E., Galán-Marín, G., Muñoz-Pérez, J., 2001. An efficient multivalued Hopfield network for the traveling salesman problem. Neural Processing Letters 14, 203–216.
Munehisa, T., Kobayashi, M., Yamazaki, H., 2001. Cooperative updating in the Hopfield model. IEEE Transactions on Neural Networks 12, 1243–1251.
Nygard, K.E., Jueli, P., Kadaba, N., 1990. Neural networks for selecting vehicle routing heuristics. ORSA Journal on Computing 2, 353–364.
Papageorgiou, G., Likas, A., Stafylopatis, A., 1998. Improved exploration in Hopfield network state-space through parameter perturbation driven by simulated annealing. European Journal of Operational Research 108, 283–292.
Pavlik, P., 1992. Parallel Hopfield machine. Neurocomputing 4, 89–91.
Peng, M.K., Gupta, N.K., Armitage, A.F., 1996. An investigation into the improvement of local minima of the Hopfield network. Neural Networks 9, 1241–1253.
Qu, H., Yi, Z., Tang, H., 2006. Improving local minima of columnar competitive model for TSPs. IEEE Transactions on Circuits and Systems I 53, 1353–1362.
Reinelt, G., 1991. TSPLIB – A traveling salesman problem library. ORSA Journal on Computing 3, 376–384.
Ricanek, K., Lebby, G.L., Haywood, K., 1999. Hopfield like networks for pattern recognition with application to face recognition. In: International Joint Conference on Neural Networks, vol. 5, pp. 3265–3269.
Rodríguez-Vázquez, A., Domínguez-Castro, R., Rueda, A., Huertas, J.L., Sánchez-Sinencio, E., 1988. Switched-capacitor neural networks for linear programming. Electronics Letters 24, 496–498.
Rodríguez-Vázquez, A., Domínguez-Castro, R., Rueda, A., Huertas, J.L., Sánchez-Sinencio, E., 1990. Nonlinear switched-capacitor neural networks for optimization problems. IEEE Transactions on Circuits and Systems 37, 384–397.
Sexton, R.S., Alidaee, B., Dorsey, R.E., Johnson, J.D., 1998. Global optimization for artificial neural networks: A tabu search application. European Journal of Operational Research 106, 570–584.
Sexton, R.S., Dorsey, R.E., Johnson, J.D., 1999. Optimization of neural networks: A comparative analysis of the genetic algorithm and simulated annealing. European Journal of Operational Research 114, 589–601.
Sharda, R., 1994. Neural networks for the MS/OR analyst: An application bibliography. Interfaces 24, 116–130.
Shih, H.S., Wen, U.P., Lee, E.S., Lan, K.M., Hsiao, H.C., 2004. A neural network approach to multiobjective and multilevel programming problems. Computers and Mathematics with Applications 48, 95–108.
Shirazi, B., Yih, S., 1989. Critical analysis of applying Hopfield neural net model to optimization problems. In: IEEE International Conference on Systems, Man and Cybernetics, vol. 1, 14–17 November, pp. 210–215.
Silva, I.N., Amaral, W.C., Arruda, L.V., 2005. Design and analysis of an efficient neural network model for solving nonlinear optimization problems. International Journal of Systems Science 36, 833–843.
Sima, J., Orponen, P., 2001. A Computational Taxonomy and Survey of Neural Network Models. Available from: <http://citeseer.ist.psu.edu/sima01computational.html>.
Smith, K.A., 1999. Neural networks for combinatorial optimization: A review of more than a decade of research. INFORMS Journal on Computing 11, 15–34.
Smith, K.A., Gupta, J.N.D., 2000. Neural networks in business: Techniques and applications for the operations researcher. Computers and Operations Research 27, 1023–1044.
Smith, K., Palaniswami, M., Krishnamoorthy, M., 1996. Traditional heuristic versus Hopfield neural network approaches to a car sequencing problem. European Journal of Operational Research 93, 300–316.
Smith, K., Palaniswami, M., Krishnamoorthy, M., 1998. Neural techniques for combinatorial optimization with applications. IEEE Transactions on Neural Networks 9, 1301–1318.
Takahashi, Y., 1998. Mathematical improvement of the Hopfield model for TSP feasible solutions by synapse dynamic systems. IEEE Transactions on Systems, Man, and Cybernetics, Part B 28, 906–919.
Takahashi, Y., 1999. A neural network theory for constrained optimization. Neurocomputing 24, 117–161.
Talaván, P.M., Yáñez, J., 2002. Parameter setting of the Hopfield network applied to TSP. Neural Networks 15, 363–373.
Tan, K.C., Tang, H., Ge, S.S., 2005. On parameter settings of Hopfield networks applied to traveling salesman problems. IEEE Transactions on Circuits and Systems I 52, 994–1002.
Tank, D.W., Hopfield, J.J., 1986. Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit. IEEE Transactions on Circuits and Systems 33, 533–541.
Walsh, M.P., Flynn, M.E., O'Malley, M.J., 1998. Augmented Hopfield network for mixed integer programming. IEEE Transactions on Neural Networks 10, 456–458.
Wang, J., 1992. Analogue neural network for solving the assignment problem. Electronics Letters 28, 1047–1050.
Wang, J., 1993. Analysis and design of a recurrent neural network for linear programming. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 40, 613–618.
Wang, J., 1996. A recurrent neural network for solving the shortest path problem. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 43, 482–486.
Wang, J., 1997. Primal and dual assignment networks. IEEE Transactions on Neural Networks 8, 784–790.
Wang, J., 1998. Primal and dual neural networks for shortest-path routing. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 28, 864–869.
Watta, P.B., Hassoun, M.H., 1996. A coupled gradient network approach for static and temporal mixed-integer optimization. IEEE Transactions on Neural Networks 7, 578–593.
Widrow, B., Rumelhart, D.E., Lehr, M.A., 1994. Neural networks: Applications in industry, business and science. Communications of the ACM 37, 93–105.
Wilson, G.V., Pawley, G.S., 1988. On the stability of the traveling salesman problem algorithm of Hopfield and Tank. Biological Cybernetics 58, 63–70.
Wilson, R.L., Sharda, R., 1992. Neural networks. OR/MS Today 19, 36–42.
Wong, B.K., Lai, V.S., Lam, J., 2000. A bibliography of neural networks in business: Techniques and applications research: 1994–1998. Computers and Operations Research 27, 1045–1076.
Wu, A.I., Tam, P.K.S., 1999. A neural network methodology and strategy of quadratic optimization. Neural Computing and Applications 8, 283–289.
Wu, X.Y., Xia, Y.S., Li, J., Chen, W.K., 1996. A high-performance neural network for solving linear and quadratic programming problems. IEEE Transactions on Neural Networks 7, 643–651.
Xia, Y., 1996. A new neural network for solving linear programming and its application. IEEE Transactions on Neural Networks 7, 525–529.
Xia, Y., Wang, J., 1995. Neural network for solving linear programming problem with bound variables. IEEE Transactions on Neural Networks 6, 515–519.
Xia, Y., Wang, J., 1998. A general methodology for designing globally convergent optimization neural networks. IEEE Transactions on Neural Networks 9, 1331–1334.
Xia, Y., Feng, G., Wang, J., 2005. A primal–dual neural network for online resolving constrained kinematic redundancy in robot motion control. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 35, 54–64.
Yalcinoz, T., Short, M.J., 1997. Large-scale economic dispatch using an improved Hopfield neural network. IEE Proceedings – Generation, Transmission and Distribution 144, 181–185.
Żak, S.H., Upatising, V., Hui, S., 1995. Solving linear programming problems with neural networks: A comparative study. IEEE Transactions on Neural Networks 6, 94–104.
The MathWorks: <http://www.mathworks.com/>.
Zhang, X.S., 2000. Neural Networks in Optimization. Kluwer Academic, London.
Zhang, S., Constantinides, A.G., 1992. Lagrange programming neural networks. IEEE Transactions on Circuits and Systems 39, 441–452.
Zhang, H.C., Huang, S.H., 1995. Applications of neural networks in manufacturing: A state-of-the-art survey. International Journal of Production Research 33, 705–728.
Zurada, J.M., 1992. Introduction to Artificial Neural Systems. West, New York.